| repo_name | path | license | content |
|---|---|---|---|
junghao/fdsn | examples/GeoNet_FDSN_demo_clients.ipynb | mit | from obspy.core import UTCDateTime
from obspy.clients.fdsn import Client
arc_client = 'http://service.geonet.org.nz'
# or arc_client = "GEONET"
nrt_client = 'http://service-nrt.geonet.org.nz'
"""
Explanation: GeoNet FDSN webservice with Obspy demo - GeoNet FDSN Clients
GeoNet operates two FDSN wave servers:
- An archive server holds verified data starting 7 days after collection
- A near real-time server holds unverified data for the last 8 days
There are a few different ways to choose the right server for a request.
Import Modules and Define Clients
End of explanation
"""
t = UTCDateTime('2017-10-04')
#first try to get data from archive server
try:
    client = Client(arc_client)
    st = client.get_waveforms('NZ', 'KRVZ', '10', 'EHZ', t, t + 300)
    print('arc client successful')
# if this raises an exception, try the near real-time server
except Exception:
    client = Client(nrt_client)
    st = client.get_waveforms('NZ', 'KRVZ', '10', 'EHZ', t, t + 300)
    print('nrt client successful')
print(st)
"""
Explanation: Option 1: By trial
You will need to specify the correct wave server to get the data you want. If you try to get data from one server and that raises an exception (because the data you requested are not available), repeat the request with the other server.
End of explanation
"""
starttime = UTCDateTime.now()-518400 #6 days ago
endtime = starttime+300
days7 = UTCDateTime.now()-604800 #7 days ago
days8 = UTCDateTime.now()-691200 #8 days ago
if endtime < days7:
    client = Client(arc_client)
    print("Client is archive client")
elif starttime > days8:
    client = Client(nrt_client)
    print("Client is near real-time client")
else:
    print("Time range requires both clients")
st = client.get_waveforms('NZ', 'WEL', '10', 'HHZ', starttime, endtime)
print(st)
"""
Explanation: Option 2: Use the time range to select client
Use the time range to select a client; if the time range requires both clients, use Option 3.
End of explanation
"""
#Define time period
t1 = UTCDateTime.now()-777600 #9 days ago
t2 = UTCDateTime.now()-518400 #6 days ago
#nrt client
try:
    client = Client(nrt_client)
    stnrt = client.get_waveforms('NZ', 'WEL', '10', 'HHZ', t1, t2)
    print('nrt client successful')
except Exception:
    print('nrt client not successful')
# arc client
try:
    client = Client(arc_client)
    starc = client.get_waveforms('NZ', 'WEL', '10', 'HHZ', t1, t2)
    print('arc client successful')
except Exception:
    print('arc client not successful')
print(stnrt,starc)
"""
Explanation: Option 3: Request from both clients and merge
This is useful if the time window for the data request spans both time periods.
First, request data from both clients
End of explanation
"""
st = stnrt
st += starc
st.merge(fill_value = 'interpolate')
print(st)
"""
Explanation: Now merge waveforms into a single stream object.
End of explanation
"""
|
spencerchan/ctabus | notebooks/55 Garfield late-afternoon wait and travel time analysis.ipynb | gpl-3.0 | garfield_red_eb = pd.read_csv("../data/processed/trips_and_waits/55/GarfieldRed_eb.csv")
garfield_red_eb["hr_bin"] = pd.cut(garfield_red_eb.decimal_time, np.linspace(0, 24, num=24+1), labels=np.linspace(0, 23, num=24), right=False)
"""
Explanation: Wait/Travel Time Analysis Draft
The question
This project began as an attempt to answer the question: is the bus schedule for CTA route 55 Garfield overpromising on its late-afternoon wait times or am I just unlucky? To ask the question more precisely: is 20 minutes an unreasonable amount of time to wait for an eastbound 55 Garfield bus (headed to Woodlawn Ave) at the Garfield Red Line Station at 4pm on a weekday given that I just missed the bus? I answer this question in this notebook. I also determine the expected travel time from the Garfield Red Line Station to Woodlawn Ave at 4pm on a weekday.
Note: I specifically ask about the wait times of buses not just departing from the Red Line, but also heading to Woodlawn. When analyzing bus headways and wait times, it is important to specify both an origin stop and a destination stop. Consider the following example. Route 55 Garfield makes trips throughout the daytime from the Museum of Science & Industry to Midway Airport and vice versa. Sometimes though buses only begin and end their trips as far west as St. Louis Avenue, a full two miles east of the airport. If you live near the MSI, and you need to catch a flight out of Midway, it wouldn't matter to you if a St. Louis-bound Garfield bus left the MSI every 10 minutes or every 10 seconds. Only the wait times between Garfield buses headed to Midway matter to you. It is, therefore, critical to calculate headways and wait times with respect to both the origin and destination.
Notes on the data
The dataset I use is derived from 55 Garfield bus positions data collected using the CTA Bus Tracker API between January 30, 2017 and April 4, 2017, with a gap in collection between February 17 and February 28. The dataset contains the travel and wait times for all pairs of bus stops (Garfield Red Line, Stop B), where Stop B is a possible eastbound destination stop. Travel and wait times are calculated in minutes rounded to two decimal places.
Please note, the derived wait times in this dataset are precisely the headways of adjacent buses (departing the Red Line heading to stop B). We can think of the headway as the worst-case waiting time: the amount of time you would have to wait for the next bus to your destination if you just missed the departing bus.
Finding the regular average wait time between buses depends on the model of passenger arrival at the bus stop. In many cases, it's safe to assume that passengers arrive at a bus stop uniformly and at random. In this model, the average wait time for a bus (departing stop A heading to stop B) is half of the average headway (i.e. average worst-case wait time) of the buses (departing stop A and heading to stop B). I will discuss the validity of this model of passenger arrival at the Red Line later in the notebook.
Analysis
First, I load the data into a pandas DataFrame. The rate at which buses are dispatched and the volume of traffic on the road change throughout the day, so we should expect travel and wait times to fluctuate in response. In order to aggregate the data in a meaningful way, I sort the data into equally sized hour-long bins (not including the rightmost edge) that correspond to the hours of the day. The bins are labelled according to the value of the leftmost edge. So, [0:00, 1:00) is a bin with label 0:00, and so on. I will refer to data falling in a particular bin by its label. For example, buses arriving at a stop any time between [0:00, 1:00) will be said to be arriving at 0:00.
End of explanation
"""
filtered = garfield_red_eb[
(garfield_red_eb.stop == "Woodlawn")
& (garfield_red_eb.day_of_week < 5)
& ((garfield_red_eb.decimal_time >= 15) & (garfield_red_eb.decimal_time < 18))
].copy()
filtered["date"] = filtered.tripid.str.split("_", expand=True)[0]
print """
There were {} observed eastbound trips from the Garfield Red Line station to Woodlawn
between 3 and 6pm during the {} weekdays of the data collection period.
""".format(filtered.shape[0], len(filtered.date.unique()))
"""
Explanation: I also limit the analysis to trips that occurred on weekdays in which the buses departed the Garfield Red Line between 3 and 6pm and arrived later at Woodlawn. I will refer to this filtered data as trips between the Red Line and Woodlawn.
End of explanation
"""
fig, axs = plt.subplots(ncols=3, figsize=(18,5))
sns.scatterplot(x="decimal_time", y="travel_time", data=filtered, s=10, ax=axs[0])
sns.boxplot(x="hr_bin", y="travel_time", data=filtered, ax=axs[1]).set(xlim=(14.5,17.5))
four_pm = filtered[filtered.hr_bin == 16]
sns.distplot(four_pm.travel_time, bins=10, kde=False, axlabel="travel time [16:00-17:00)", ax=axs[2])
"""
Explanation: I plot the data to get an understanding of the distribution of travel and wait times. This understanding will help me determine appropriate representative values for the typical travel and wait times during the late afternoon. I will start with the analysis of travel times, since it is simpler.
End of explanation
"""
def build_barplot(agg_df, bin_col, obs_col):
    df = agg_df[[bin_col, obs_col]]
    df.columns = [j if j != '' else i for i, j in df.columns]
    df = df.melt(bin_col, var_name="agg_type", value_name=obs_col)
    sns.barplot(x=bin_col, y=obs_col, hue="agg_type", data=df)
aggregated = filtered.groupby(["hr_bin"]).agg(["mean", "median"]).reset_index().dropna()
print(aggregated.travel_time)
build_barplot(aggregated, "hr_bin", "travel_time")
"""
Explanation: During the late afternoon, almost all travel times fall between about 7.5 and 15 minutes with the exception of a few outliers. Over half of the travel times fall between 10 and 12.5 minutes. The median travel times at the start of and during rush hour (16:00-18:00) appear slightly longer (by 30-60 seconds) than at 15:00. Overall, the late-afternoon travel times appear normally distributed by each hour. Consequently, either the median or mean travel time at 16:00 would serve well as a representative travel time during that hour.
Notice the trip at 17:00 with travel time 0 minutes. It is likely that the bus's onboard GPS malfunctioned or lagged during the trip, causing the bus's reported location to quickly jump from one point to another.
Let's determine the value of the mean and median wait times.
End of explanation
"""
fig, axs = plt.subplots(ncols=3, figsize=(18,5))
sns.scatterplot(x="decimal_time", y="wait_time", data=filtered, s=10, ax=axs[0])
sns.boxplot(x="hr_bin", y="wait_time", data=filtered, ax=axs[1]).set(xlim=(14.5,17.5))
sns.distplot(four_pm.wait_time, bins=10, kde=False, axlabel="wait time [16:00-17:00)", ax=axs[2])
print "Histogram values: {}".format(np.histogram(four_pm.wait_time, bins=10)[0])
print "Bin edges: {}".format([float(x) for x in np.histogram_bin_edges(four_pm.wait_time, bins=10)])
"""
Explanation: Just as expected, the mean and median travel times are almost equal for each hour. I feel okay saying that one should expect a weekday trip at 16:00 from the Red Line to Woodlawn to take a little over 11.5 minutes.
Next, I'll analyze the wait times.
End of explanation
"""
print(aggregated.wait_time)
build_barplot(aggregated, "hr_bin", "wait_time")
"""
Explanation: While travel times are more or less normally distributed during each hour of the late afternoon, the wait times have a much wider spread and more extreme outliers at the high end of the distribution. Most wait times fall between 5 and 15 minutes, but some were observed to be as long as 20-30 minutes, with a couple outliers exceeding 35 minutes.
At 16:00, a 20 minute wait time would fall in the top 25% longest observed wait times. Such an observation would not be considered an outlier. Outlying wait times start around 24 minutes (Note: an outlier is defined as falling more than 1.5 times the interquartile range (IQR) above the upper quartile or below the lower quartile). Because the data is not normally distributed, the mean and median wait times will show a greater discrepancy.
Let's try to come up with a representative value for the late-afternoon wait times. First, I'll compare the mean and median wait times for each hour.
End of explanation
"""
four_pm.groupby("date").tripid.count().describe()
"""
Explanation: As expected, there's a greater discrepancy between the mean and median wait times than travel times. At 15:00, the mean wait time is almost 2 minutes longer than the median wait time, while the mean wait times are 40 seconds longer at 16:00 and 17:00. Despite the discrepancy at 16:00, the median and mean wait times are still relatively close. Is the average wait time, 8.8 minutes, representative of wait times at this hour?
Something to keep in mind is the distribution of wait times at 16:00, and in particular, the peak at 0.03-3.28 minutes. The high count of very short wait times suggests that bus bunching is likely a problem at this time on this route. There were 52 observed very short wait times at 16:00 over the course of 37 weekdays, for an average of 1.4 incidents per day. Bus bunching often translates into longer experienced wait times for riders, so it is possible that the experienced average wait time is longer than 8.8 minutes. From the perspective of passengers, a group of two buses arriving within a minute of each other every 20 minutes is not the same as two buses arriving every 10 minutes. To passengers, the former situation is experienced as longer wait times. It can be useful to average grouped arrivals (e.g. arrivals within two or three minutes of each other) into a single arrival time to determine a more realistic wait time. This more nuanced analysis requires revisions to the data processing scripts that I will not undertake now but is an approach I will consider for the future.
Instead, to get a sense of the impact of bus bunching, I examine the number of buses arriving at/departing from the Red Line at 16:00 and the distribution of the arrival/departure times.
End of explanation
"""
sns.distplot(four_pm.decimal_time, bins=7, kde=False, axlabel="arrival/departure times")
"""
Explanation: 7 to 8 buses arrived at 16:00 for at least half of the days observed, with an average of 6.6 buses arriving each day during that hour. If bus arrivals/departures were evenly distributed, then we should see one bus arrival/departure every 9.1 minutes. Of course, the arrivals/departures are not evenly distributed, as inferred from the above wait time distribution.
Let's now take a look at the distribution of arrival/departure times at 16:00 to see if there are any service clusters. I sort the arrival/departure times into 7 bins of roughly 8.6 minutes each.
End of explanation
"""
from scipy import stats
def percentile_summary(df, bin_col, obs_col, score):
    for _bin in df[bin_col].unique():
        percentile = stats.percentileofscore(df.loc[df[bin_col] == _bin, obs_col].values, score)
        print("At {:02.0f}:{:02.0f}, {:.1f}% of wait times are shorter than {} minutes.".format(_bin // 1, 60 * (_bin % 1), percentile, score))
percentile_summary(filtered, "hr_bin", "wait_time", 20)
"""
Explanation: There is an arrival/departure peak at the start of the hour and a smaller peak right before the end of the hour. Also, notice the slight lull in arrivals right after 16:30. The taller peak at 8.6 minutes suggests bus bunching is more frequent toward the start of the hour. It does not seem, however, that service is dramatically clustered during any part of the hour. I believe several more months' worth of data is necessary to reach a more definitive conclusion. I feel comfortable saying that 8.8 minutes is close to a representative worst-case wait time between buses, though perhaps slightly on the low side.
Finally, a quick way to gauge the unluckiness of a 20 minute wait time in the late-afternoon is to see what proportion of wait times are shorter.
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/tutorials/keras/keras_tuner.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
from tensorflow import keras
"""
Explanation: Introduction to the Keras Tuner
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/keras/keras_tuner"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/keras_tuner.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/keras_tuner.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/keras/keras_tuner.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called hyperparameter tuning or hypertuning.
Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:
Model hyperparameters, which influence model selection, such as the number and width of hidden layers
Algorithm hyperparameters, which influence the speed and quality of the learning algorithm, such as the learning rate for stochastic gradient descent (SGD) and the number of nearest neighbors for a k-nearest neighbors (KNN) classifier
In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application.
Setup
End of explanation
"""
!pip install -q -U keras-tuner
import keras_tuner as kt
"""
Explanation: Install and import the Keras Tuner.
End of explanation
"""
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
"""
Explanation: Download and prepare the dataset
In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset.
Load the data.
End of explanation
"""
def model_builder(hp):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))
    # Tune the number of units in the first Dense layer
    # Choose an optimal value between 32-512
    hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
    model.add(keras.layers.Dense(units=hp_units, activation='relu'))
    model.add(keras.layers.Dense(10))
    # Tune the learning rate for the optimizer
    # Choose an optimal value from 0.01, 0.001, or 0.0001
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model
"""
Explanation: Define the model
When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a hypermodel.
You can define a hypermodel through two approaches:
By using a model builder function
By subclassing the HyperModel class of the Keras Tuner API
You can also use two pre-defined HyperModel classes, HyperXception and HyperResNet, for computer vision applications.
In this tutorial, you will use a model builder function to define the image classification model. The model builder function returns a compiled model and uses the hyperparameters you define inline to hypertune the model.
End of explanation
"""
tuner = kt.Hyperband(model_builder,
objective='val_accuracy',
max_epochs=10,
factor=3,
directory='my_dir',
project_name='intro_to_kt')
"""
Explanation: Instantiate the tuner and perform hypertuning
Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available: RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial, you will use the Hyperband tuner.
To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).
End of explanation
"""
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
"""
Explanation: The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports-championship-style bracket: the algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train by computing 1 + log<sub><code>factor</code></sub>(max_epochs) and rounding it up to the nearest integer.
Create a callback to stop training early after the validation loss reaches a certain value.
End of explanation
"""
tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early])
# Get the optimal hyperparameters
best_hps=tuner.get_best_hyperparameters(num_trials=1)[0]
print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
"""
Explanation: Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.model.fit, in addition to the callback above.
End of explanation
"""
# Build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps)
history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)
val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
"""
Explanation: Train the model
Find the optimal number of epochs to train the model with the hyperparameters obtained from the search.
End of explanation
"""
hypermodel = tuner.hypermodel.build(best_hps)
# Retrain the model
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)
"""
Explanation: Re-instantiate the hypermodel and train it with the optimal number of epochs from above.
End of explanation
"""
eval_result = hypermodel.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
"""
Explanation: To finish this tutorial, evaluate the hypermodel on the test data.
End of explanation
"""
|
PySEE/PyRankine | notebook/RankineCycle83-84.ipynb | mit | from seuif97 import *
# Fix the states
# State1
t1=480
p1=8
h1 =pt2h(p1,t1)
s1=pt2s(p1,t1)
# State 2
p2=0.7
s2=s1
h2 =ps2h(p2,s2)
t2=ps2t(p2,s2)
# State 3
t3=440
p3=p2
h3 =pt2h(p3,t3)
s3 =pt2s(p3,t3)
# State 4
p4=0.008
s4=s3
h4 =ps2h(p4,s4)
t4=ps2t(p4,s4)
# State 5
p5=0.008
t5=px2t(p5,0)
h5=px2h(p5,0)
s5=px2s(p5,0)
# State 6
p6=8.0
s6=s5
h6 =ps2h(p6,s6)
t6=ps2t(p6,s6)
"""
Explanation: Evaluating Performance of the Reheat Cycle 8.3,8.4
Chapter 8 : Vapour Power Systems:Reheat Cycle
Example 8.3: Evaluating Performance of an Ideal Reheat Cycle (pp. 449-451)
Example 8.4: Evaluating Performance of a Reheat Cycle with Turbine Irreversibility (p. 451)
1 Example 8.3: Evaluating Performance of an Ideal Reheat Cycle (pp. 449-451)
Steam is the working fluid in an ideal Rankine cycle with superheat and reheat.
Steam enters the first-stage turbine at 8.0 MPa, 480°C, and expands to 0.7 MPa. It is then reheated to 440°C before entering the second-stage turbine, where it expands to the condenser pressure of 0.008 MPa. The net power output is 100 MW.
Determine
(a) the thermal efficiency of the cycle,
(b) the mass flow rate of steam, in $kg/h$,
(c) the rate of heat transfer $Q_{out}$ from the condensing steam as it passes through the condenser, in $MW$. Discuss the effects of reheat on the vapor power cycle.
SOLUTION
Known: An ideal reheat cycle operates with steam as the working fluid. Operating pressures and temperatures are specified, and the net power output is given.
Find: Determine the thermal efficiency, the mass flow rate of the steam in $kg/h$, and the rate of heat transfer from the condensing steam as it passes through the condenser.
Engineering Model:
1. Each component in the cycle is analyzed as a control volume at steady state. The control volumes are shown on the accompanying sketch by dashed lines.
2. All processes of the working fluid are internally reversible.
3. The turbine and pump operate adiabatically.
4. Condensate exits the condenser as saturated liquid.
5. Kinetic and potential energy effects are negligible.
Analysis:
To begin, we fix each of the principal states.
End of explanation
"""
eta = ((h1-h2)+(h3-h4)-(h6-h5))/((h1-h6)+(h3-h2))
# Result
print('The thermal efficiency is {:>.2f}%'.format(100*eta))
"""
Explanation: Part (a)
The net power developed by the cycle is
$\dot{W}_{cycle}=\dot{W}_{t1}+\dot{W}_{t2}-\dot{W}_{p}$
Mass and energy rate balances for the two turbine stages and the pump reduce to give, respectively
Turbine 1: ${\dot{W}_{t1}}/{\dot{m}}=h_1-h_2$
Turbine 2: ${\dot{W}_{t2}}/{\dot{m}}=h_3-h_4$
Pump: ${\dot{W}_{p}}/{\dot{m}}=h_6-h_5$
where $\dot{m}$ is the mass flow rate of the steam.
The total rate of heat transfer to the working fluid as it passes through the boiler–superheater and reheater is
$\frac{\dot{Q}_{in}}{\dot{m}}=(h_1-h_6)+(h_3-h_2)$
Using these expressions, the thermal efficiency is
$\eta=\frac{(h_1-h_2)+(h_3-h_4)-(h_6-h_5)}{(h_1-h_6)+(h_3-h_2)}$
End of explanation
"""
# Part(b)
Wcycledot = 100.0
mdot = (Wcycledot*3600*10**3)/((h1-h2)+(h3-h4)-(h6-h5))
print('The mass flow rate of steam, is {:>.2f}kg/h.'.format(mdot))
"""
Explanation: (b) The mass flow rate of the steam can be obtained with the expression for net power given in part (a).
$\dot{m}=\frac{\dot{W}_{cycle}}{(h_1-h_2)+(h_3-h_4)-(h_6-h_5)}$
End of explanation
"""
# Part(c)
Qoutdot = (mdot*(h4-h5))/(3600*10**3)
print('The rate of heat transfer Qoutdot from the condensing steam as it passes through the condenser is {:>.2f}kg/h'.format(Qoutdot))
"""
Explanation: (c) The rate of heat transfer from the condensing steam to the cooling water is
$\dot{Q}_{out}=\dot{m}(h_4-h_5)$
End of explanation
"""
from seuif97 import *
# Fix the states
def FixStates(etat):
    # State 1
    t1 = 480
    p1 = 8
    h1 = pt2h(p1, t1)
    s1 = pt2s(p1, t1)
    # State 2
    p2 = 0.7
    s2s = s1
    h2s = ps2h(p2, s2s)
    etat1 = etat
    h2 = h1 - etat1 * (h1 - h2s)
    s2 = ph2s(p2, h2)
    t2 = ph2t(p2, h2)
    # State 3
    t3 = 440
    p3 = p2
    h3 = pt2h(p3, t3)
    s3 = pt2s(p3, t3)
    # State 4
    p4 = 0.008
    s4s = s3
    h4s = ps2h(p4, s4s)
    etat2 = etat1
    h4 = h3 - etat2 * (h3 - h4s)
    s4 = ph2s(p4, h4)
    t4 = ph2t(p4, h4)
    # State 5
    p5 = 0.008
    t5 = px2t(p5, 0)
    h5 = px2h(p5, 0)
    s5 = px2s(p5, 0)
    # State 6
    p6 = 8.0
    s6 = s5
    h6 = ps2h(p6, s6)
    t6 = ps2t(p6, s6)
    return h1, h2, h3, h4, h5, h6
"""
Explanation: To see the effects of reheat, we compare the present values with their counterparts in Example 8.1. With superheat and reheat, the thermal efficiency is
increased over that of the cycle of Example 8.1. For a specified net power output (100 MW), a larger thermal efficiency means that a smaller mass flow rate
of steam is required. Moreover, with a greater thermal efficiency the rate of heat transfer to the cooling water is also less, resulting in a reduced demand
for cooling water. With reheating, the steam quality at the turbine exhaust is substantially increased over the value for the cycle of Example 8.1
2 Example 8.4: Evaluating Performance of a Reheat Cycle with Turbine Irreversibility Page451
Reconsider the reheat cycle of Example 8.3, but include in the analysis that each turbine stage has the same isentropic efficiency.
(a) If $\eta_t=85$%, determine the thermal efficiency.
(b) Plot the thermal efficiency versus turbine stage isentropic efficiency ranging from 85 to 100%.
SOLUTION
Known: A reheat cycle operates with steam as the working fluid. Operating pressures and temperatures are specified. Each turbine stage has the same isentropic efficiency.
Find:
If $\eta_t=85$%, determine the thermal efficiency.
plot the thermal efficiency versus turbine stage isentropic efficiency ranging from 85 to 100%.
Engineering Model:
1. As in Example 8.3, each component is analyzed as a control volume at steady state.
2. Except for the two turbine stages, all processes are internally reversible.
3. The turbine and pump operate adiabatically.
4. The condensate exits the condenser as saturated liquid.
5. Kinetic and potential energy effects are negligible.
End of explanation
"""
etat=0.85
h1,h2,h3,h4,h5,h6= FixStates(etat)
eta = ((h1-h2)+(h3-h4)-(h6-h5))/((h1-h6)+(h3-h2))
# Result
print('The thermal efficiency is {:>.2f}%'.format(100*eta))
"""
Explanation: The thermal efficiency is then
End of explanation
"""
%matplotlib inline
# Part (b)
from numpy import linspace
import matplotlib.pyplot as plt
etas = []
etats = linspace(0.85,1,50)
for i in range(0, 50):
    h1, h2, h3, h4, h5, h6 = FixStates(etats[i])
    eta = ((h1-h2)+(h3-h4)-(h6-h5))/((h1-h6)+(h3-h2))
    etas.append(eta)
plt.plot(etats,etas)
plt.xlabel('isentropic turbine efficiency')
plt.ylabel('cycle thermal efficiency')
plt.show()
"""
Explanation: Turbine isentropic efficiency & cycle thermal efficiency
Sweep eta from 0.85 to 1.0 in steps of 0.01,then, using the matplotlib.pyplot
End of explanation
"""
|
NeuPhysics/aNN | ipynb/vacuum-Copy1.ipynb | mit | # This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
%load_ext snakeviz
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import timeit
import pandas as pd
import plotly.plotly as py
from plotly.graph_objs import *
import plotly.tools as tls
# hbar=1.054571726*10**(-34)
hbar=1.0
delm2E=1.0
# lamb=1.0 ## lambda for neutrinos
# lambb=1.0 ## lambda for anti neutrinos
# gF=1.0
# nd=1.0 ## number density
# ndb=1.0 ## number density
omega=1.0
omegab=-1.0
## Here are some matrices to be used
elM = np.array([[1.0,0.0],[0.0,0.0]])
bM = 1.0/2*np.array( [ [ - 0.38729833462,0.31622776601] , [0.31622776601,0.38729833462] ] )
## sqareroot of 2
sqrt2=np.sqrt(2.0)
"""
Explanation: Vacuum Neutrino Oscillations
Here is a notebook for homogeneous gas model.
Here we are talking about a homogeneous gas bulk of neutrinos with single energy. The EoM is
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E}B ,\rho_E \right]
$$
while the EoM for antineutrinos is
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B ,\bar\rho_E \right]
$$
Initial:
Homogeneous, Isotropic, Monoenergetic $\nu_e$ and $\bar\nu_e$
The equations becomes
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E} B ,\rho_E \right]
$$
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B,\bar\rho_E \right]
$$
Define $\omega=\frac{\delta m^2}{2E}$, $\bar\omega = -\frac{\delta m^2}{2E}$, $\mu=\sqrt{2}G_F n_\nu$
$$
i \partial_t \rho_E = \left[ \omega B ,\rho_E \right]
$$
$$
i \partial_t \bar\rho_E = \left[\bar\omega B,\bar\rho_E \right]
$$
where
$$
B = \frac{1}{2} \begin{pmatrix}
-\cos 2\theta_v & \sin 2\theta_v \\
\sin 2\theta_v & \cos 2\theta_v
\end{pmatrix} =
\begin{pmatrix}
-0.38729833462 & 0.31622776601 \\
0.31622776601 & 0.38729833462
\end{pmatrix}
$$
$$
L = \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}
$$
Initial condition
$$
\rho(t=0) = \begin{pmatrix}
1 & 0 \
0 & 0
\end{pmatrix}
$$
$$
\bar\rho(t=0) =\begin{pmatrix}
1 & 0 \
0 & 0
\end{pmatrix}
$$
Define the following quantities:
1. hbar $=\hbar$
2. delm2E $=\delta m^2/2E$
3. lamb $=\lambda$, lambb $=\bar\lambda$
4. gF $=G_F$
5. mu $=\mu$
6. omega $=\omega$, omegab $=-\bar\omega$
Numerical
End of explanation
"""
#r11prime(t)
## The matrix eqn for neutrinos. Symplify the equation to the form A.X=0. Here I am only writing down the LHS.
## Eqn for r11'
# 1/2*( r21(t)*( bM12*delm2E - 2*sqrt2*gF*rb12(t) ) + r12(t) * ( -bM21*delm2E + 2*sqrt2*gF*rb21(t) ) - 1j*r11prime(t) )
## Eqn for r12'
# 1/2*( r22(t)* ( bM12 ) )
### wait a minute I don't actually need to write down this. I can just do this part in numpy.
"""
Explanation: ~~Using Mathematica, I can find the 4*2 equations~~
End of explanation
"""
def trigf(x):
    # return 1/(1+np.exp(-x))  # It's not bad to define this function here, for people could use functions other than expit(x).
    return expit(x)
## The time derivative part
### Here are the initial conditions
init = np.array( [[1,0],[0,0]] )
### For neutrinos
def rho(x, ti, initialCondition):  # x is the input structure arrays, ti is a time point
    v11, w11, u11, v12, w12, u12, v21, w21, u21, v22, w22, u22 = x[:12]
    elem11 = np.sum(ti * v11 * trigf(ti*w11 + u11))
    elem12 = np.sum(ti * v12 * trigf(ti*w12 + u12))
    elem21 = np.sum(ti * v21 * trigf(ti*w21 + u21))
    elem22 = np.sum(ti * v22 * trigf(ti*w22 + u22))
    return initialCondition + np.array([[elem11, elem12], [elem21, elem22]])
## Test
xtemp=np.ones(120)
rho(xtemp,0,init)
## Define Hamiltonians for both
def hamilv():
    return delm2E * bM

## The commutator
def commv(x, ti, initialCondition):
    return np.dot(hamilv(), rho(x, ti, initialCondition)) - np.dot(rho(x, ti, initialCondition), hamilv())
## Test
print(bM)
print(hamilv())
print("neutrino\n", commv(xtemp, 0, init))
## The COST of the eqn set
regularization = 0.0001
def costvTi(x, ti, initialCondition):  # l is total length of x
    v11, w11, u11, v12, w12, u12, v21, w21, u21, v22, w22, u22 = x[:12]
    fvec11 = np.array(trigf(ti*w11 + u11))  # This is a vector!!!
    fvec12 = np.array(trigf(ti*w12 + u12))
    fvec21 = np.array(trigf(ti*w21 + u21))
    fvec22 = np.array(trigf(ti*w22 + u22))
    costi11 = (np.sum(v11*fvec11 + ti * v11 * fvec11 * (1 - fvec11) * w11) + 1.0j * (commv(x, ti, initialCondition)[0, 0]))
    costi12 = (np.sum(v12*fvec12 + ti * v12 * fvec12 * (1 - fvec12) * w12) + 1.0j * (commv(x, ti, initialCondition)[0, 1]))
    costi21 = (np.sum(v21*fvec21 + ti * v21 * fvec21 * (1 - fvec21) * w21) + 1.0j * (commv(x, ti, initialCondition)[1, 0]))
    costi22 = (np.sum(v22*fvec22 + ti * v22 * fvec22 * (1 - fvec22) * w22) + 1.0j * (commv(x, ti, initialCondition)[1, 1]))
    # return (np.real(costi11))**2 + (np.real(costi12))**2 + (np.real(costi21))**2 + (np.real(costi22))**2 + (np.imag(costi11))**2 + (np.imag(costi12))**2 + (np.imag(costi21))**2 + (np.imag(costi22))**2
    # return np.abs(np.real(costi11)) + np.abs(np.real(costi12)) + np.abs(np.real(costi21)) + np.abs(np.real(costi22)) + np.abs(np.imag(costi11)) + np.abs(np.imag(costi12)) + np.abs(np.imag(costi21)) + np.abs(np.imag(costi22))
    return ((np.real(costi11))**2 + (np.real(costi12))**2 + (np.real(costi21))**2 + (np.real(costi22))**2 + (np.imag(costi11))**2 + (np.imag(costi12))**2 + (np.imag(costi21))**2 + (np.imag(costi22))**2)/v11.size + regularization * (np.sum(v11**2) + np.sum(v12**2) + np.sum(v21**2) + np.sum(v22**2) + np.sum(w11**2) + np.sum(w12**2) + np.sum(w21**2) + np.sum(w22**2))
costvTi(xtemp,2,init)
## Calculate the total cost
def costv(x,t,initialCondition):
t = np.array(t)
costvTotal = np.sum( costvTList(x,t,initialCondition) )
return costvTotal
def costvTList(x,t,initialCondition): ## This is the function WITHOUT the square!!!
t = np.array(t)
costvList = np.asarray([])
for temp in t:
tempElement = costvTi(x,temp,initialCondition)
costvList = np.append(costvList, tempElement)
return np.array(costvList)
ttemp = np.linspace(0,10)
print ttemp
ttemp = np.linspace(0,10)
print costvTList(xtemp,ttemp,init)
print costv(xtemp,ttemp,init)
"""
Explanation: I am going to substitute all density matrix elements using their corresponding network expressions.
So first of all, I need the network expression for the unknown functions.
A function is written as
$$ y_i= 1+t_i v_k f(t_i w_k+u_k) ,$$
while its derivative is
$$v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k)) w_k .$$
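This second form is just the product rule applied to $t v_k f(t w_k+u_k)$, using the logistic identity $f'(x) = f(x)(1-f(x))$:
$$\frac{d}{dt}\left[t\, v_k f(t w_k+u_k)\right] = v_k f(t w_k+u_k) + t\, v_k f(t w_k+u_k)\left(1-f(t w_k+u_k)\right) w_k .$$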
Now I can write down the equations using these two forms.
End of explanation
"""
tlin = np.linspace(0,15,80)
# tlinTest = np.linspace(0,14,10) + 0.5
# initGuess = np.ones(120)
initGuess = np.asarray(np.split(np.random.rand(1,360)[0],12))
costvF = lambda x: costv(x,tlin,init)
# costvFTest = lambda x: costv(x,tlinTest,init) # disabled: tlinTest is commented out above
print costv(initGuess,tlin,init)#, costv(initGuess,tlinTest,init)
## %%snakeviz
# startCG = timeit.default_timer()
#costFResultCG = minimize(costF,initGuess,method="CG")
#stopCG = timeit.default_timer()
#print stopCG - startCG
#print costFResultCG
#%%snakeviz
#startBFGS = timeit.default_timer()
#costvFResultBFGS = minimize(costvF,initGuess,method="BFGS")
#stopBFGS = timeit.default_timer()
#print stopBFGS - startBFGS
#print costvFResultBFGS
%%snakeviz
startSLSQP = timeit.default_timer()
costvFResultSLSQP = minimize(costvF,initGuess,method="SLSQP")
stopSLSQP = timeit.default_timer()
print stopSLSQP - startSLSQP
print costvFResultSLSQP
#%%snakeviz
#startSLSQPTest = timeit.default_timer()
#costvFResultSLSQPTest = minimize(costvFTest,initGuess,method="SLSQP")
#stopSLSQPTest = timeit.default_timer()
#print stopSLSQPTest - startSLSQPTest
#print costvFResultSLSQPTest
costvFResultSLSQP.get('x')
#np.savetxt('./assets/homogen/optimize_ResultSLSQPT2120_Vac.txt', costvFResultSLSQP.get('x'), delimiter = ',')
"""
Explanation: Minimization
Here is the minimization
End of explanation
"""
# costvFResultSLSQPx = np.genfromtxt('./assets/homogen/optimize_ResultSLSQP.txt', delimiter = ',')
## The first element of neutrino density matrix
xresult = np.asarray(costvFResultSLSQP.get('x'))
#xresult = np.asarray(costvFResultBFGS.get('x'))
print xresult
plttlin=np.linspace(0,15,100)
pltdata11 = np.array([])
pltdata11Test = np.array([])
pltdata22 = np.array([])
for i in plttlin:
pltdata11 = np.append(pltdata11 ,rho(xresult,i,init)[0,0] )
print pltdata11
#for i in plttlin:
# pltdata11Test = np.append(pltdata11Test ,rho(xresultTest,i,init)[0,0] )
#
#print pltdata11Test
for i in plttlin:
pltdata22 = np.append(pltdata22 ,rho(xresult,i,init)[1,1] )
print pltdata22
print rho(xresult,0,init)
rho(xresult,6.6,init)
#np.savetxt('./assets/homogen/optimize_pltdatar11.txt', pltdata11, delimiter = ',')
#np.savetxt('./assets/homogen/optimize_pltdatar22.txt', pltdata22, delimiter = ',')
plt.figure(figsize=(16,9.36))
plt.ylabel('rho11')
plt.xlabel('Time')
plt11=plt.plot(plttlin,pltdata11,"b4-",label="vac_rho11")
#plt.plot(plttlin,pltdata11Test,"m4-",label="vac_rho11Test")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="vac_HG-rho11")
# tls.embed("https://plot.ly/~emptymalei/73/")
plt.figure(figsize=(16,9.36))
plt.ylabel('Time')
plt.xlabel('rho22')
plt22=plt.plot(plttlin,pltdata22,"r4-",label="vac_rho22")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="vac_HG-rho22")
MMA_optmize_Vac_pltdata = np.genfromtxt('./assets/homogen/MMA_optmize_Vac_pltdata.txt', delimiter = ',')
plt.figure(figsize=(16,9.36))
plt.ylabel('MMArho11')
plt.xlabel('Time')
plt.plot(np.linspace(0,15,4501),MMA_optmize_Vac_pltdata,"r-",label="MMAVacrho11")
plt.plot(plttlin,pltdata11,"b4-",label="vac_rho11")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="MMA-rho11-Vac-80-60")
"""
Explanation: Functions
Find the solutions for each element.
End of explanation
"""
xtemp1 = np.arange(4)
xtemp1.shape = (2,2)
print xtemp1
xtemp1[0,1]
np.dot(xtemp1,xtemp1)
xtemp1[0,1]
"""
Explanation: Practice
End of explanation
"""
xesscorp/skidl | examples/skywater/skywater.ipynb | mit | import pandas as pd # For data frames.
import matplotlib.pyplot as plt # For plotting.
from skidl.pyspice import * # For describing circuits and interfacing to ngspice.
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#From-Transistors-to-ALUs-With-Skywater" data-toc-modified-id="From-Transistors-to-ALUs-With-Skywater-1">From Transistors to ALUs With Skywater</a></span><ul class="toc-item"><li><span><a href="#Installing-Some-Tools" data-toc-modified-id="Installing-Some-Tools-1.1">Installing Some Tools</a></span></li><li><span><a href="#Getting-the-Skywater-PDK" data-toc-modified-id="Getting-the-Skywater-PDK-1.2">Getting the Skywater PDK</a></span></li><li><span><a href="#Some-Infrastructure" data-toc-modified-id="Some-Infrastructure-1.3">Some Infrastructure</a></span></li><li><span><a href="#The-Simplest:-the-Inverter" data-toc-modified-id="The-Simplest:-the-Inverter-1.4">The Simplest: the Inverter</a></span></li><li><span><a href="#The-Universal:-the-NAND-Gate" data-toc-modified-id="The-Universal:-the-NAND-Gate-1.5">The Universal: the NAND Gate</a></span></li><li><span><a href="#One-or-the-Other:-the-XOR-Gate" data-toc-modified-id="One-or-the-Other:-the-XOR-Gate-1.6">One or the Other: the XOR Gate</a></span></li><li><span><a href="#No,-It's-Not-a-Snake:-The-Adder" data-toc-modified-id="No,-It's-Not-a-Snake:-The-Adder-1.7">No, It's Not a Snake: The Adder</a></span></li><li><span><a href="#Fragments-of-Memory:-Latches,-Flip-Flops-and-Registers" data-toc-modified-id="Fragments-of-Memory:-Latches,-Flip-Flops-and-Registers-1.8">Fragments of Memory: Latches, Flip-Flops and Registers</a></span></li><li><span><a href="#The-Simplest-State-Machine:-the-Counter" data-toc-modified-id="The-Simplest-State-Machine:-the-Counter-1.9">The Simplest State Machine: the Counter</a></span></li><li><span><a href="#Bonus:-an-ALU" data-toc-modified-id="Bonus:-an-ALU-1.10">Bonus: an ALU</a></span></li><li><span><a href="#Extra-Bonus:-a-Down-Counter" data-toc-modified-id="Extra-Bonus:-a-Down-Counter-1.11">Extra Bonus: a Down Counter</a></span></li><li><span><a href="#End-of-the-Line" data-toc-modified-id="End-of-the-Line-1.12">End of the 
Line</a></span></li></ul></li></ul></div>
From Transistors to ALUs With Skywater
Google and SkyWater Technology are cooperating to provide open-source hardware designers with a way to build custom ASICs. Part of this effort is the release of the SkyWater PDK which describes the parameters for a 130nm CMOS process. I thought it might be fun to simulate a few logic gates in SPICE using this PDK.
Installing Some Tools
This repo provided some guidance when I first started investigating the PDK, but it uses external tools like XSCHEM (for schematic capture) and GAW (for displaying waveforms). To make my work easier to replicate and distribute, I wanted everything to be done in a Jupyter notebook. It wasn't immediately apparent how I would integrate XSCHEM/GAW into a notebook, and schematics shouldn't be used in polite company anyway, so I took a more Python-centric approach and used these tools:
ngspice: An open-source SPICE simulator.
SciPy bundle: General-purpose Python libraries for handling data.
SKiDL: Used to describe circuitry using Python code.
PySpice: A Python interface between SKiDL and ngspice.
Pre-built versions of ngspice are available for Windows and MacOS. For linux, I got the latest ngspice files (version 33) from here, unpacked it into the ngspice-33 directory, and built it using the instructions in the INSTALL file:
```bash
$ cd ngspice-33
$ mkdir release
$ cd release
$ ../configure --with-x --enable-xspice --disable-debug --enable-cider --with-readline=yes --enable-openmp
$ make 2>&1 | tee make.log
$ sudo make install
```
Installing the SciPy tools was also easy since I already had Python:
```bash
$ pip install matplotlib numpy pandas jupyter
```
I couldn't use the PyPi versions of PySpice and SKiDL because they had to be modified to make them work with the Skywater PDK. If you want to run this notebook, you'll need to install the development versions of those from GitHub:
```bash
$ pip install git+https://github.com/xesscorp/PySpice
$ pip install git+https://github.com/xesscorp/skidl@development
```
Once all these tools were installed, I imported them into this notebook:
End of explanation
"""
!ls -F ~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/
"""
Explanation: Getting the Skywater PDK
With the tooling in place, it was time to get the Skywater PDK. If I wanted to wait a long time, I could install the entire PDK like this:
```bash
$ git clone --recurse-submodules https://github.com/google/skywater-pdk
```
But I don't need everything, just the latest SPICE models for the device primitives, so the following command is much quicker:
```bash
$ git clone --recurse-submodules=libraries/sky130_fd_pr/latest https://github.com/google/skywater-pdk
```
Even with a stripped-down repo, there's a lot of stuff in there. The Skywater documentation provides some guidance, but there are a lot of sections that just contain TODO. Poking about in the PDK led me to these directories of device information files as described here:
End of explanation
"""
!ls -F ~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/nfet_01v8
"""
Explanation: For my purposes, I only needed a simple NFET and PFET to build some logic gates. I figured 1.8V versions of these would be found in the nfet_01v8 and pfet_01v8 subdirectories, but I wasn't expecting all these files:
End of explanation
"""
import pandas as pd
nfet_sizes = pd.read_table("~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/nfet_01v8/sky130_fd_pr__nfet_01v8.bins.csv", delimiter=",")
pfet_sizes = pd.read_table("~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/pfet_01v8/sky130_fd_pr__pfet_01v8.bins.csv", delimiter=",")
pd.concat((nfet_sizes, pfet_sizes), axis=1)
"""
Explanation: After some further poking about and reading, I categorized the files as follows:
Any file ending in .cdl seems to be associated with Cadence. Since I'll never have the money to run their software, I can ignore these and the associated .tsv files.
Files with names containing _tt_, _ff_, _fs_, _sf_, or _ss_ refer to process corners where variations in the semiconductor fabrication process lead to NMOS/PMOS transistors that have typical, fast or slow switching characteristics.
Files ending with .pm3.spice contain most of the device parameters for a particular corner. These files are included inside an associated corner.spice file that makes a small adjustment to the parameters. Both these files are used for simulating transistors at a given process corner.
Files ending with leak.pm3.spice and leak.corner.spice are similar to the previous files, but are probably used for simulating transistor leakage currents.
Many of the .spice files contain multiple transistor models with differing parameters that are dependent on the length and width of the transistor gate. The file ending with .bins.csv contains a list of the supported transistor sizes. Some of the 1.8V NMOS and PMOS transistor dimensions are shown below:
End of explanation
"""
# Select a particular corner using tt, ff, fs, sf, ss, hh, hl, lh, ll.
corner = "tt" # Use typical transistor models.
# Create a SKiDL library for the Skywater devices at that process corner.
sky_lib = SchLib(
"/home/devb/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/models/sky130.lib.spice",
lib_section=corner, # Load the transistor models for this corner.
recurse=True, # The master lib includes sublibraries, so recurse thru them to load everything.
)
print(sky_lib) # Print the list of devices in the library.
"""
Explanation: Knowing where the device files are and what they contain is great, but how do I actually use them? It turns out there is a master library file: skywater-pdk/libraries/sky130_fd_pr/latest/models/sky130.lib.spice. Internally, this file is divided into nine sections that cover the five process corners (tt, ff, fs, sf, ss) as well as four additional corners for the low/high variations in resistor and capacitor values (hh, hl, lh, ll). I'm only interested in using some typical FETs, so I can load that section into a SKiDL library like so:
End of explanation
"""
nfet_wl = Parameters(W=0.42, L=0.15)
pfet_wl = Parameters(W=1.26, L=0.15) # 3x the width of the NMOS FET.
"""
Explanation: The progress messages indicate a few errors in the libraries (maybe they're corrected by now), but I'm not using those particular devices so I'm not going to worry about it. From the list above, I can pick out the 1.8V general-purpose transistors I want, but I also need to specify their gate dimensions so the right model gets loaded. I picked out a small NMOS FET, and then a PMOS FET that's 3x the width. (That's because I learned in my 1983 VLSI class that PMOS transistors have 3x the resistance-per-square of NMOS ones, so make them wider to make the current-driving capability about the same in each.)
End of explanation
"""
nfet = Part(sky_lib, "sky130_fd_pr__nfet_01v8", params=nfet_wl)
pfet = Part(sky_lib, "sky130_fd_pr__pfet_01v8", params=pfet_wl)
"""
Explanation: Now I can extract the NMOS and PMOS transistors from the library and use them to build logic gates:
End of explanation
"""
disp_vmin, disp_vmax = -0.4@u_V, 2.4@u_V
disp_imin, disp_imax = -10@u_mA, 10@u_mA
def oscope(waveforms, *nets_or_parts):
"""
Plot selected waveforms as a stack of individual traces.
Args:
waveforms: Complete set of waveform data from ngspice simulation.
nets_or_parts: SKiDL Net or Part objects that correspond to individual waveforms.
vmin, vmax: Minimum/maximum voltage limits for each waveform trace.
imin, imax: Minimum/maximum current limits for each waveform trace.
"""
# Determine if this is a time-series plot, or something else.
try:
x = waveforms.time # Sample times are used for the data x coord.
except AttributeError:
# Use the first Net or Part data to supply the x coord.
nets_or_parts = list(nets_or_parts)
x_node = nets_or_parts.pop(0)
x = waveforms[node(x_node)]
# Create separate plot traces for each selected waveform.
num_traces = len(nets_or_parts)
trace_hgt = 1.0 / num_traces
fig, axes = plt.subplots(nrows=num_traces, sharex=True, squeeze=False,
subplot_kw=None, gridspec_kw=None)
traces = axes[:,0]
# Set the X axis label on the bottom-most trace.
if isinstance(x.unit, SiUnits.Second):
xlabel = 'Time (S)'
elif isinstance(x.unit, SiUnits.Volt):
xlabel = x_node.name + ' (V)'
elif isinstance(x.unit, SiUnits.Ampere):
xlabel = x_node.ref + ' (A)'
traces[-1].set_xlabel(xlabel)
# Set the Y axis label position for each plot trace.
trace_ylbl_position = dict(rotation=0,
horizontalalignment='right',
verticalalignment='center',
x=-0.01)
# Plot each Net/Part waveform in its own trace.
for i, (net_or_part, trace) in enumerate(zip(nets_or_parts, traces), 1):
y = waveforms[node(net_or_part)] # Extract the waveform data
# Set the Y axis label depending upon whether data is voltage or current.
if isinstance(y.unit, SiUnits.Volt):
trace.set_ylim(float(disp_vmin), float(disp_vmax))
trace.set_ylabel(net_or_part.name + ' (V)', trace_ylbl_position)
elif isinstance(y.unit, SiUnits.Ampere):
trace.set_ylim(float(disp_imin), float(disp_imax))
trace.set_ylabel(net_or_part.ref + ' (A)', trace_ylbl_position)
# Set position of trace within stacked traces.
trace.set_position([0.1, (num_traces-i) * trace_hgt, 0.8, trace_hgt])
# Place grid on X axis.
trace.grid(axis='x', color='orange', alpha=1.0)
# Plot the waveform data.
trace.plot(x, y)
"""
Explanation: Some Infrastructure
Before I start building gates, there's some stuff that I'll use over and over to test the circuitry. The first of these is an oscilloscope function that takes a complete set of waveform data from a simulation and plots selected waveforms from it:
End of explanation
"""
default_freq = 500@u_MHz # Specify a default frequency so it doesn't need to be set every time.
def cntgen(*bits, freq=default_freq):
"""
Generate one or more square waves varying in frequency by a factor of two.
Args:
bits: One or more Net objects, each of which will carry a square wave.
"""
bit_period = 1.0/freq
for bit in bits:
# Create a square-wave pulse generator with the current period.
pulse = PULSEV(initial_value=vdd_voltage, pulsed_value=0.0@u_V,
pulse_width=bit_period/2, period=bit_period)
# Attach the pulse generator between ground and the net that carries the square wave.
gnd & pulse["n, p"] & bit
# Double the period (halve the frequency) for each successive bit.
bit_period = 2 * bit_period
"""
Explanation: In addition to an oscilloscope, every electronics bench has a signal generator. For my purposes, I only need a simple function that generates one or more square waves whose frequencies decrease by a factor of two. (The collection of square waves looks like the output of a binary counter, hence the name.)
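To make the "binary counter" analogy concrete, here's a small pure-Python illustration (my own helper, not something cntgen uses): each successive bit of an up-count toggles at half the rate of the previous one, which is exactly the factor-of-two frequency pattern cntgen produces with square waves.

```python
# Illustration only: bit i of a binary up-counter toggles half as often as bit i-1.
def counter_bits(num_bits, num_steps):
    """Return one list of 0/1 samples per bit of an up-counter."""
    return [[(n >> i) & 1 for n in range(num_steps)] for i in range(num_bits)]

bits = counter_bits(3, 8)
# bits[0] = [0, 1, 0, 1, 0, 1, 0, 1]  (fastest, like the 500 MHz wave)
# bits[1] = [0, 0, 1, 1, 0, 0, 1, 1]
# bits[2] = [0, 0, 0, 0, 1, 1, 1, 1]  (slowest)
```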
End of explanation
"""
default_voltage = 1.8@u_V # Specify a default supply voltage.
def pwr(voltage=default_voltage):
"""
Create a global power supply and voltage rail.
"""
# Clear any pre-existing circuitry. (Start with a clear slate.)
reset()
# Global variables for the power supply and voltage rail.
global vdd_ps, vdd, vdd_voltage
# Create a power supply and attach it between the Vdd rail and ground.
vdd_voltage = voltage
vdd_ps = V(ref="VDD_SUPPLY", dc_value=vdd_voltage)
vdd = Net("Vdd")
vdd & vdd_ps["p, n"] & gnd
"""
Explanation: All the circuits in this notebook will run from a 1.8V supply, so the following function instantiates a global power supply and a $V_{dd}$ voltage rail for them to use:
End of explanation
"""
get_sim = lambda : generate_netlist().simulator() # Compile netlist & create simulator.
do_dc = lambda **kwargs: get_sim().dc(**kwargs) # Run a DC-level analysis.
do_trans = lambda **kwargs: get_sim().transient(**kwargs) # Run a transient analysis.
def how_big(circuit=default_circuit):
from collections import defaultdict
parts = defaultdict(lambda: 0)
for p in circuit.parts:
parts[p.name] += 1
for part_name, num_parts in parts.items():
print(f"{part_name}: {num_parts}")
"""
Explanation: Finally, here are some convenience functions that 1) generate a netlist from the SKiDL code and use that to create a PySpice simulator object, 2) use the simulator object to perform a DC-level analysis, 3) use the simulator to perform a transient analysis, and 4) count the number of transistors in a circuit.
End of explanation
"""
@package
def inverter(a=Net(), out=Net()):
# Create the NFET and PFET transistors.
qp, qn = pfet(), nfet()
# Attach the NFET substrate to ground and the PFET substrate to Vdd.
gnd & qn.b
vdd & qp.b
# Connect Vdd through the PFET source-to-drain on to the output node.
# From the output node, connect through the NFET drain-to-source to ground.
vdd & qp["s,d"] & out & qn["d,s"] & gnd
# Attach the input to the NFET and PFET gate terminals.
a & qn.g & qp.g
"""
Explanation: With the infrastructure in place, I can begin building logic gates, starting from the simplest one I know.
The Simplest: the Inverter
Here's the gate-level schematic for a CMOS inverter:
And this is its SKiDL version:
End of explanation
"""
pwr() # Apply power to the circuitry.
inv = inverter() # Create an inverter.
# Attach a voltage source between ground and the inverter's input.
# Then attach the output to a net.
gnd & V(ref="VIN", dc_value=0.0@u_V)["n, p"] & Net("VIN") & inv["a, out"] & Net("VOUT")
# Do a DC-level simulation while ramping the voltage source from 0 to Vdd.
vio = do_dc(VIN=slice(0, vdd_voltage, 0.01))
# Plot the inverter's output against its input.
oscope(vio, inv.a, inv.out)
"""
Explanation: First, I'll test the inverter's transfer function by attaching a voltage ramp to its input and see when the output transitions. (For those playing at home, you may notice the SPICE simulations take a minute or two to run. These transistor models are complicated.)
End of explanation
"""
# Add a trace for the Vdd power supply current.
disp_imin, disp_imax = -15@u_uA, 1@u_uA
oscope(vio, inv.a, inv.out, vdd_ps)
"""
Explanation: For a low-level input, the inverter's output is high and vice versa, as expected. From the shape of the transfer curve, I'd estimate the inverter's trigger point is around 0.8V.
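To turn that eyeball estimate into a number, one could find where the transfer curve crosses half the supply. This is a sketch of mine, not part of the notebook; it assumes the swept input and output voltages have been pulled out of the simulation results as plain NumPy arrays (e.g. via the same waveforms[node(...)] lookup that oscope uses):

```python
import numpy as np

def trigger_point(vin, vout, vdd=1.8):
    """Return the input voltage at which vout is closest to vdd/2."""
    vin = np.asarray(vin, dtype=float)
    vout = np.asarray(vout, dtype=float)
    idx = int(np.argmin(np.abs(vout - vdd / 2)))  # sample nearest the half-rail point
    return float(vin[idx])
```

Applied to the DC sweep of the inverter, this should report something close to the ~0.8V trigger point read off the plot.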
It's also interesting to look at the current draw of the inverter as the input voltage ramps up:
End of explanation
"""
pwr()
# Connect a 500 MHz square wave to net A.
a = Net("A")
cntgen(a)
# Pump the square wave through an inverter.
inv = inverter()
a & inv["a, out"] & Net("A_BAR")
# Do a transient analysis and look at the timing between input and output.
waveforms = do_trans(step_time=0.01@u_ns, end_time=3.5@u_ns)
oscope(waveforms, a, inv.out)
"""
Explanation: As we learned in our textbooks so long ago, the quiescent current for CMOS logic is near zero but surges as the input voltage goes through the transition zone when both transistors are ON. For this inverter, the current maxes out at about 13 $\mu$A at the trigger point.
It's equally easy to do a transient analysis of the inverter as it receives an input that varies over time:
End of explanation
"""
pwr()
a = Net("A")
cntgen(a)
# Create a list of 30 inverters.
invs = [inverter() for _ in range(30)]
# Attach the square wave to the first inverter in the list.
a & invs[0].a
# Go through the list, attaching the input of each inverter to the output of the previous one.
for i in range(1, len(invs)):
invs[i-1].out & invs[i].a
# Attach the output of the last inverter to the output net.
invs[-1].out & Net("A_DELAY")
# Do a transient analysis.
waveforms = do_trans(step_time=0.01@u_ns, end_time=3.5@u_ns)
oscope(waveforms, a, invs[-1].out)
"""
Explanation: There is a bit of ringing on the inverter's output but no appreciable propagation delay, probably because there is no real load on the output. In order to get more delay, I'll cascade thirty inverters together and look at the output of the last one:
End of explanation
"""
how_big()
"""
Explanation: Thirty cascaded inverters create a total delay of around 0.65 ns, so each inverter contributes about 20 ps. This simulation doesn't include things like wiring delays, so don't get your hopes up about running at 50 GHz.
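For the record, that 0.65 ns was read off the plot; a threshold-crossing helper could measure it directly. This is my own sketch (it assumes the time axis and a waveform have been converted to plain NumPy arrays, again via the node() lookup oscope uses):

```python
import numpy as np

def crossing_time(t, v, threshold=0.9):
    """Return the first time v reaches threshold, linearly interpolating between samples."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    i = int(np.argmax(v >= threshold))  # index of first sample at/above threshold
    if i == 0:
        return t[0]                     # already at/above threshold at the start
    frac = (threshold - v[i - 1]) / (v[i] - v[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])
```

The chain delay would then be the crossing time of the last inverter's output minus that of the input, and dividing by 30 gives the per-stage figure.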
Finally, just to test the how_big function, let's see how many transistors are in 30 inverters:
End of explanation
"""
@package
def nand(a=Net(), b=Net(), out=Net()):
# Create the PFET and NFET transistors.
q1, q2 = pfet(2)
q3, q4 = nfet(2)
# Connect the PFET/NFET substrates to Vdd/gnd, respectively.
vdd & q1.b & q2.b
gnd & q3.b & q4.b
# Go from Vdd through a parallel-pair of PFETs to the output and then
# through a series-pair of NFETs to ground.
vdd & (q1["s,d"] | q2["s,d"]) & out & q3["d,s"] & q4["d,s"] & gnd
# Connect the pair of inputs to the gates of the transistors.
a & q1.g & q3.g
b & q2.g & q4.g
"""
Explanation: Thirty NMOS and thirty PMOS transistors. We're good to go.
The Universal: the NAND Gate
They say if you have a NAND gate, you have it all (if all you want is combinational logic, which seems a bit limited). Here's the schematic for one:
And this is its SKiDL version:
End of explanation
"""
pwr()
a, b, out = Net("A"), Net("B"), Net("OUT")
# Create two square waves: a at 500 MHz and b at 250 MHz.
cntgen(a, b)
# Create a NAND gate and connect its I/O to the nets.
nand()["a, b, out"] += a, b, out
# Perform a transient analysis.
waveforms = do_trans(step_time=0.01@u_ns, end_time=10@u_ns)
oscope(waveforms, a, b, out)
"""
Explanation: Like with the inverter, I'll do a transient analysis but using two square waves to drive both NAND inputs:
End of explanation
"""
@package
def xor(a=Net(), b=Net(), out=Net()):
# Create eight transistors: four NFETs and four PFETs.
qn_a, qn_ab, qn_b, qn_bb = nfet(4)
qp_a, qp_ab, qp_b, qp_bb = pfet(4)
# Connect the substrates of the transistors.
vdd & qp_a.b & qp_ab.b & qp_b.b & qp_bb.b
gnd & qn_a.b & qn_ab.b & qn_b.b & qn_bb.b
# Create the two parallel "legs" of series PFETs-NFETs with a
# common output node in the middle.
vdd & qp_ab["s,d"] & qp_b["s,d"] & out & qn_a["d,s"] & qn_b["d,s"] & gnd
vdd & qp_a["s,d"] & qp_bb["s,d"] & out & qn_ab["d,s"] & qn_bb["d,s"] & gnd
# Create two inverters to get the complements of both inputs.
ab, bb = inverter(), inverter()
ab.a += a
bb.a += b
# Attach the two inputs and their complements to the transistor gates.
a & qp_a.g & qn_a.g
ab.out & qp_ab.g & qn_ab.g
b & qp_b.g & qn_b.g
bb.out & qp_bb.g & qn_bb.g
pwr()
a, b, out = Net("A"), Net("B"), Net("OUT")
cntgen(a, b)
xor()["a, b, out"] += a, b, out
waveforms = do_trans(step_time=0.01@u_ns, end_time=10@u_ns)
oscope(waveforms, a, b, out)
"""
Explanation: The NAND gate output only goes low when both inputs are high, as expected. Ho hum.
One or the Other: the XOR Gate
Continuing on, here is the last combinational gate I'll do: the exclusive-OR. There's nothing really new here that you haven't already seen with the NAND gate, just more of it.
End of explanation
"""
@package
def full_adder(a=Net(), b=Net(), cin=Net(), s=Net(), cout=Net()):
# Use two XOR gates to compute the sum bit.
ab_sum = Net() # Net to carry the intermediate result of a+b.
xor()["a,b,out"] += a, b, ab_sum # Compute ab_sum=a+b
xor()["a,b,out"] += ab_sum, cin, s # Compute s=a+b+cin
# Through the magic of DeMorgan's Theorem, the AND-OR carry circuit
# can be done using three NAND gates.
nand1, nand2, nand3 = nand(), nand(), nand()
nand1["a,b"] += ab_sum, cin
nand2["a,b"] += a, b
nand3["a,b,out"] += nand1.out, nand2.out, cout
"""
Explanation: The output only goes high when the inputs have opposite values, so the XOR gate is working correctly.
No, It's Not a Snake: The Adder
Finally I've reached the level of abstraction where individual transistors aren't needed. I can use the gates I've already built to construct new stuff, like this full-adder bit:
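Before wiring gates together, it helps to have a bit-level reference model of the same logic (this helper is my addition, mirroring the gate structure: two XORs for the sum, and the AND-OR carry that DeMorgan turns into three NANDs):

```python
def full_adder_bits(a, b, cin):
    """Reference model of one full-adder: returns (sum, carry_out)."""
    ab = a ^ b                   # first XOR
    s = ab ^ cin                 # second XOR produces the sum bit
    cout = (a & b) | (ab & cin)  # AND-OR carry (three NANDs via DeMorgan)
    return s, cout

# Exhaustive check: the two output bits always encode a + b + cin.
assert all(
    full_adder_bits(a, b, c) == ((a + b + c) & 1, (a + b + c) >> 1)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
```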
End of explanation
"""
pwr()
# Generate nets for the inputs and outputs.
a, b, cin, s, cout = Net("A"), Net("B"), Net("CIN"), Net("S"), Net("COUT")
# Drive the A, B and CIN full-adder inputs with all eight combinations.
cntgen(a, b, cin)
# Connect the I/O nets to the full-adder.
full_adder()["a, b, cin, s, cout"] += a, b, cin, s, cout
# Do a transient analysis.
waveforms = do_trans(step_time=0.01@u_ns, end_time=8@u_ns)
oscope(waveforms, a, b, cin, s, cout)
"""
Explanation: I'll use a cntgen() with three outputs to apply all eight input combinations to the full-adder:
End of explanation
"""
@subcircuit
def adder(a, b, cin, s, cout):
# a, b and s are multi-bit buses. The width of the adder will
# be determined by the length of the sum output.
width = len(s)
# Create a list of full-adders equal to the width of the sum output.
fadds = [full_adder() for _ in range(width)]
# Iteratively connect the full-adders to the input and output bits.
for i in range(width):
# Connect the i'th full adder to the i'th bit of a, b and s.
fadds[i]["a, b, s"] += a[i], b[i], s[i]
if i == 0:
# Connect the carry input to the first full-adder.
fadds[i].cin += cin
else:
# Connect the carry input of the rest of the full-adders
# to the carry output from the previous one.
fadds[i].cin += fadds[i-1].cout
# Connect the carry output to the carry output from the last bit of the adder.
cout += fadds[-1].cout
"""
Explanation: The sum and carry-out bits of the full-adder match the truth-table for all the combinations of A, B and the carry input.
Now I'll combine multiple full-adders to build a multi-bit adder:
End of explanation
"""
pwr()
# Create the two-bit input and output buses and the carry input & output nets.
w = 2
a, b, cin, s, cout = Bus("A",w), Bus("B",w), Net("CIN"), Bus("S",w), Net("COUT")
# Drive the A0, A1, B0, B1, and CIN inputs with a five-bit counter.
cntgen(*a, *b, cin)
# Connect the I/O to an adder.
adder(a, b, cin, s, cout)
# Do a transient analysis
waveforms = do_trans(step_time=0.01@u_ns, end_time=32@u_ns)
oscope(waveforms, *a, *b, cin, *s, cout)
"""
Explanation: I'll instantiate a two-bit adder and test it with all 32 input combinations of A$_0$, A$_1$, B$_0$, B$_1$, and C$_{in}$:
End of explanation
"""
def integerize(waveforms, *nets, threshold=0.9@u_V):
"""
Convert a set of N waveforms to a stream of N-bit integer values.
Args:
waveforms: Waveform data from ngspice.
nets: A set of nets comprising a digital word.
threshold: Voltage threshold for determining if a waveform value is 1 or 0.
Returns:
A list of integer values, one for each sample time in the waveform data.
"""
def binarize():
"""Convert multiple waveforms into streams of ones and zeros."""
binary_vals = []
for net in nets:
binary_vals.append([v > threshold for v in waveforms[node(net)]])
return binary_vals
# Convert the waveforms into streams of bits, then combine the bits into integers.
int_vals = []
for bin_vector in zip(*reversed(binarize())):
int_vals.append(int(bytes([ord('0')+b for b in bin_vector]), base=2))
return int_vals
def subsample(subsample_times, sample_times, *int_waveforms):
"""
Take a subset of samples from a set of integerized waveforms at a set of specific times.
Args:
subsample_times: A list of times (in ascending order) at which to take subsamples.
sample_times: A list of times (in ascending order) for when each integerized sample was taken.
int_waveforms: List of integerized waveform sample lists.
Returns:
A list of subsample lists.
"""
# Create a list of the empty lists to hold the subsamples from each integerized waveform.
subsamples = [[] for _ in int_waveforms]
# Get the first subsample time.
subsample_time = subsample_times.pop(0)
# Step through the sample times, looking for the time to take a subsample.
for sample_time, *samples in zip(sample_times, *int_waveforms):
# Take a subsample whenever the sample time is less than the current subsample time.
if sample_time > subsample_time:
# Store a subsample from each waveform.
for i, v in enumerate(samples):
subsamples[i].append(v)
# Get the next subsample time and break from loop if there isn't one.
try:
subsample_time = subsample_times.pop(0)
except IndexError:
break
return subsamples
# Convert the waveforms for A, B, Cin, S, and Cout into lists of integers.
a_ints = integerize(waveforms, *a)
b_ints = integerize(waveforms, *b)
cin_ints = integerize(waveforms, cin)
# Combine the N-bit sum and carry-out into a single N+1-bit integer.
s_ints = integerize(waveforms, *s, cout)
# Set the subsample times just before the adder's inputs change.
ts = [(i+0.9)@u_ns for i in range(32)]
# Subsample the integerized adder waveforms.
av, bv, cinv, sv = subsample(ts, waveforms.time, a_ints, b_ints, cin_ints, s_ints)
# Display a table of the adder's inputs and corresponding output.
pd.DataFrame({'A': av, 'B': bv, 'CIN': cinv, 'S': sv})
"""
Explanation: The outputs look like they might be correct, but I'm not going to waste my time trying to eyeball it when Python can do that. The following code subsamples the waveforms and converts them into a table of integers for the adder's inputs and outputs:
End of explanation
"""
error_flag = False
for a, b, cin, s in zip(av, bv, cinv, sv):
if a+b+cin != s:
print(f"ERROR: {a}+{b}+{cin} != {s}")
error_flag = True
if not error_flag:
print("No errors found.")
"""
Explanation: That's better, but even checking all the table entries is too much work so I'll write a little code to do that:
End of explanation
"""
@package
def tx_gate(i, g, g_b, o):
"""NMOS/PMOS transmission gate. When g is high and g_b is low, i and o are connected."""
# NMOS and PMOS transistors for passing input to output.
qn, qp = nfet(), pfet()
# Transistor substrate connections.
gnd & qn.b
vdd & qp.b
# Parallel NMOS/PMOS transistors between the input and output.
i & (qn["s,d"] | qp["s,d"]) & o
# Connect the gate input to the NMOS and the complement of the gate input
# to the PMOS. Both transistors will conduct when the gate input is high,
# and will block the input from the output when the gate input is low.
g & qn.g
g_b & qp.g
"""
Explanation: OK, at this point I'm convinced I have a working two-bit adder. And I can make any size adder I want just by changing the input and output bus widths.
Onward!
Fragments of Memory: Latches, Flip-Flops and Registers
Cross-coupled logic gates like this dynamic master-slave flip-flop are often used for storing bits:
A problem with this circuit is the use of NMOS FETs as pass gates for the input and feedback latch. Because I'm using 1.8V as my supply voltage, any logic-high signal passing through an NMOS FET is reduced by the roughly 0.6V threshold voltage to around 1.2V. While this is workable, it does lead to increased propagation delay. Therefore, I built a transmission gate that parallels the NMOS FET with a PMOS FET, with the gates of the two transistors driven by complementary signals. This allows signals to pass through without being degraded by the threshold voltage.
<a id="tx_gate"/>
End of explanation
"""
@package
def latch_bit(wr=Net(), wr_b=Net(), d=Net(), out_b=Net()):
in_tx, fb_tx = tx_gate(), tx_gate()
in_inv, fb_inv = inverter(), inverter()
# Input data comes in through the input gate, goes through an inverter to the data output.
d & in_tx["i,o"] & in_inv["a, out"] & out_b
# The data output is fed back through another inverter and transmission gate to the input inverter.
out_b & fb_inv["a, out"] & fb_tx["i,o"] & in_inv.a # Feed output back to input.
# wr activates the input gate and deactivates the feedback gate, allowing data into the latch.
wr & in_tx.g & fb_tx.g_b
# Complement of wr deactivates the input gate and activates the feedback gate, latching the data.
wr_b & in_tx.g_b & fb_tx.g
"""
Explanation: The SKiDL implementation for half of this flip-flop creates a latch that allows data to enter and pass through when the write-enable is active, and then latches the data bit with a feedback gate when the write-enable is not asserted:
End of explanation
"""
@package
def ms_ff(wr=Net(), d=Net(), out=Net()):
# Create the master and slave latches.
master, slave = latch_bit(), latch_bit()
# Data passes from the input through the master to the slave latch and then to the output.
d & master["d, out_b"] & slave["d, out_b"] & out
# Data continually enters the master latch when the write-enable is low, but gets
# latched when the write-enable goes high.
wr & inverter()["a, out"] & master.wr & slave.wr_b
# Data from the master passes through the slave when the write-enable goes high, and
# this data stays stable in the slave when the write-enable goes low and new data
# is entering the master.
wr & slave.wr & master.wr_b
"""
Explanation: By cascading two of these latches, I arrive at the complete flip-flop:
End of explanation
"""
pwr()
wr, d, out = Net('WR'), Net('D'), Net('OUT')
cntgen(wr, d)
ms_ff()["wr, d, out"] += wr, d, out
waveforms = do_trans(step_time=0.01@u_ns, end_time=8@u_ns)
oscope(waveforms, wr, d, out)
"""
Explanation: A simple test shows the flip-flop retains data and the output only changes upon the rising edge of the write-enable (after a small propagation delay):
End of explanation
"""
@subcircuit
def register(wr, d, out):
# Create a flip-flop for each bit in the output bus.
reg_bits = [ms_ff() for _ in out]
# Connect the inputs and outputs to the flip-flops.
for i, rb in enumerate(reg_bits):
rb["wr, d, out"] += wr, d[i], out[i]
"""
Explanation: Once I have a basic flip-flop, it's easy to build multi-bit registers:
End of explanation
"""
@subcircuit
def cntr(clk, out):
# Create two buses: one for the next counter value, and one that's all zero bits.
width = len(out)
nxt, zero = Bus(width), Bus(width)
# Provide access to the global ground net.
global gnd
# Connect all the zero bus bits to ground (that's why it's zero).
gnd += zero
# The next counter value is the current counter value plus 1. Set the
# adder's carry input to 1 and the b input to zero to do this.
adder(a=out, b=zero, cin=vdd, s=nxt, cout=Net())
# Clock the next counter value into the register on the rising clock edge.
register(wr=clk, d=nxt, out=out)
"""
Explanation: The Simplest State Machine: the Counter
With both an adder and a register in hand, a counter is the obvious next step:
End of explanation
"""
pwr()
# Generate a clock signal.
clk = Net('clk')
cntgen(clk)
# Create a three-bit counter.
cnt = Bus('CNT', 3)
cntr(clk, cnt)
# Simulate the counter.
waveforms = do_trans(step_time=0.01@u_ns, end_time=30@u_ns)
# In addition to the clock and counter value, also look at the power supply current.
disp_imin, disp_imax = -3@u_mA, 3@u_mA
oscope(waveforms, clk, *cnt, vdd_ps)
"""
Explanation: Now just give it a clock and watch it go!
End of explanation
"""
time_steps = waveforms.time[1:] - waveforms.time[0:-1]
ps_current = -waveforms[node(vdd_ps)][0:-1] # Mult by -1 to get current FROM the + terminal of the supply.
ps_voltage = waveforms[node(vdd)][0:-1]
energy = sum(ps_current * ps_voltage * time_steps)@u_J
print(f"Total energy = {energy}")
"""
Explanation: Looking at the counter bits shows it's obviously incrementing: 0, 1, 2, ..., 7, 0, ... The bottom trace shows the pulses of supply current on every clock edge. (Remember that whole current-pulse-during-input-transition thing?) But how much energy is being used? Multiplying the supply current by its output voltage and summing over time will answer that:
End of explanation
"""
how_big()
"""
Explanation: As for the total number of transistors in the counter ...
End of explanation
"""
@package
def mux8(in_, i0=Net(), i1=Net(), i2=Net(), out=Net()):
# Create the complements of the selection inputs.
i0b, i1b, i2b = Net(), Net(), Net()
i0 & inverter()["a,out"] & i0b
i1 & inverter()["a,out"] & i1b
i2 & inverter()["a,out"] & i2b
out_ = Net() # Output from the eight legs of the mux.
i = 0 # Input bit index.
# Create the eight legs of the mux by nested iteration of the selection inputs
# and their complements. Each leg is turned on by a different combination of inputs.
for i2_g, i2_g_b in ((i2b, i2), (i2, i2b)):
for i1_g, i1_g_b in ((i1b, i1), (i1, i1b)):
for i0_g, i0_g_b in ((i0b, i0), (i0, i0b)):
# Place 3 transmission gates in series from input bit i to output.
i0_gate, i1_gate, i2_gate = tx_gate(), tx_gate(), tx_gate()
in_[i] & i0_gate["i,o"] & i1_gate["i,o"] & i2_gate["i,o"] & out_
# Attach the selection inputs and their complements to the transmission gates.
i0_gate["g, g_b"] += i0_g, i0_g_b
i1_gate["g, g_b"] += i1_g, i1_g_b
i2_gate["g, g_b"] += i2_g, i2_g_b
i = i+1 # Go to the next input bit.
# Run the output through two inverters to restore signal strength.
out_ & inverter()["a, out"] & inverter()["a, out"] & out
@subcircuit
def alu(a, b, cin, s, cout, s_opcode, c_opcode):
"""
Multi-bit ALU with the operation determined by the eight-bit codes
that determine the output from the sum and carry muxes.
"""
width = len(s)
s_bits = [mux8() for _ in range(width)]
c_bits = [mux8() for _ in range(width)]
# For each bit in the ALU...
for i in range(width):
# Connect truth-table bits to the sum and carry mux inputs.
s_bits[i].in_ += s_opcode
c_bits[i].in_ += c_opcode
# Connect inputs to the sum and carry mux selectors.
s_bits[i]["i0, i1"] += a[i], b[i]
c_bits[i]["i0, i1"] += a[i], b[i]
# Connect the carry input of each ALU bit to the carry output of the previous bit.
if i == 0:
s_bits[i].i2 & cin
c_bits[i].i2 & cin
else:
s_bits[i].i2 & c_bits[i-1].out
c_bits[i].i2 & c_bits[i-1].out
# Connect the output bit of each sum mux to the ALU sum output.
s[i] & s_bits[i].out
# Connect the carry output from the last ALU bit.
cout & c_bits[-1].out
"""
Explanation: Bonus: an ALU
An adder is great and all, but that's all it does: adds. Having a module that adds, subtracts, shifts, and performs logical operations is much cooler! That's an arithmetic logic unit (ALU).
You might think building an ALU is a lot harder than building an adder, but it's not. It can all be done using an 8-to-1 multiplexer (mux) as the basic building block:
Now, if you look real hard at the circuit above, you'll realize you can smash the sixteen legs of series NMOS/PMOS transistors into just eight legs of series transmission gates like the one I used above.
I can build a full-adder bit from a pair of 8-to-1 muxes by passing the A, B, and $C_{in}$ inputs as the selectors, and applying the eight-bit truth-table for the S and $C_{out}$ bits to the input of each mux, respectively. Then I'll combine the full-adder bits to build a complete $N$-bit adder as before.
But I can also build a subtractor, left-shifter, logical-AND, etc. just by changing the truth-table bits that go to each mux. (If you're familiar with FPGAs, the mux is essentially the same as their look-up tables.)
The complete SKiDL code for an ALU is shown below. (Much easier to create, thankfully, than tediously drawing the circuit shown above.)
End of explanation
"""
@subcircuit
def subtractor(a, b, cin, s, cout):
"""
Create a subtractor by applying the required opcodes to the ALU.
"""
# Set the opcodes to perform subtraction (a - b - c), so in reality the carry
# is actually a borrow.
# cin b a s cout
# ====================
# 0 0 0 0 0
# 0 0 1 1 0
# 0 1 0 1 1
# 0 1 1 0 0
# 1 0 0 1 1
# 1 0 1 0 0
# 1 1 0 0 1
# 1 1 1 1 1
one = vdd
zero = gnd
s_opcode = Bus(zero, one, one, zero, one, zero, zero, one)
c_opcode = Bus(zero, zero, one, zero, one, zero, one, one)
# Connect the I/O and opcodes to the ALU.
alu(a=a, b=b, cin=cin, s=s, cout=cout, s_opcode=s_opcode, c_opcode=c_opcode)
"""
Explanation: By setting the sum and carry opcodes appropriately, I can build a subtractor from the ALU:
End of explanation
"""
pwr()
# Create the two-bit input and output buses and the carry input & output nets.
w = 2
a, b, cin, s, cout = Bus("A",w), Bus("B",w), Net("CIN"), Bus("S",w), Net("COUT")
# Drive the A0, A1, B0, B1, and CIN inputs with a five-bit counter.
cntgen(*a, *b, cin)
# Connect the I/O to the subtractor.
subtractor(a=a, b=b, cin=cin, s=s, cout=cout)
# Do a transient analysis
disp_vmax = 4@u_V
waveforms = do_trans(step_time=0.01@u_ns, end_time=32@u_ns)
# Display the output waveforms.
oscope(waveforms, *a, *b, cin, *s, cout)
# Convert the waveforms for A, B, Cin, S, and Cout into lists of integers.
a_ints = integerize(waveforms, *a)
b_ints = integerize(waveforms, *b)
cin_ints = integerize(waveforms, cin)
# Combine the N-bit sum and carry-out into a single N+1-bit integer.
s_ints = integerize(waveforms, *s, cout)
# Set the subsample times right before the ALU's inputs change.
ts = [(i+0.9)@u_ns for i in range(32)]
# Subsample the integerized ALU waveforms.
av, bv, cinv, sv = subsample(ts, waveforms.time, a_ints, b_ints, cin_ints, s_ints)
# Display a table of the ALU's inputs and corresponding output.
pd.DataFrame({'A': av, 'B': bv, 'CIN': cinv, 'S': sv})
"""
Explanation: Now I'll test the subtractor just as I did previously with the adder:
End of explanation
"""
@subcircuit
def down_cntr(clk, out):
# Provide access to the global ground net.
global gnd
width = len(out)
nxt, zero = Bus(width), Bus(width)
gnd += zero
# The next counter value is the current counter value minus 1. Set the
# subtractor's borrow input to 1 and the b input to zero to do this.
subtractor(a=out, b=zero, cin=vdd, s=nxt, cout=Net())
register(wr=clk, d=nxt, out=out)
pwr()
clk = Net('clk')
cntgen(clk)
# Create a three-bit down counter.
cnt = Bus('CNT', 3)
down_cntr(clk, cnt)
# Simulate it.
waveforms = do_trans(step_time=0.01@u_ns, end_time=30@u_ns)
oscope(waveforms, clk, *cnt, vdd_ps)
"""
Explanation: Extra Bonus: a Down Counter
Since I went to the trouble to build a subtractor, it would be a waste if I didn't use it to make a down-counter:
End of explanation
"""
time_steps = waveforms.time[1:] - waveforms.time[0:-1]
ps_current = -waveforms[node(vdd_ps)][0:-1] # Mult by -1 to get current FROM the + terminal of the supply.
ps_voltage = waveforms[node(vdd)][0:-1]
energy = sum(ps_current * ps_voltage * time_steps)@u_J
print(f"Total energy = {energy}")
"""
Explanation: From the waveforms, it's obvious the counter is decrementing: 7, 6, 5, ..., 0, 7, ... so chalk this one up as a win. But how does this compare to the counter I previously built using just an adder?
With regard to energy consumption, this ALU-based counter is about 2x worse (7 pJ compared to 3.3 pJ):
End of explanation
"""
how_big()
"""
Explanation: And it uses about 2.5x the number of transistors (402 versus 162):
End of explanation
"""
|
wuafeing/Python3-Tutorial | 01 data structures and algorithms/01.10 remove duplicates from seq order.ipynb | gpl-3.0 | def dedupe(items):
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
"""
Explanation: Previous
1.10 Remove Duplicates from a Sequence While Preserving Order
Problem
How do you eliminate duplicate values in a sequence while preserving the order of the remaining elements?
Solution
If the values in the sequence are hashable, this problem can be solved easily with a set and a generator. For example:
End of explanation
"""
a = [1, 5, 2, 1, 9, 1, 5, 10]
list(dedupe(a))
"""
Explanation: Here is an example of using this function:
End of explanation
"""
def dedupe(items, key=None):
seen = set()
for item in items:
val = item if key is None else key(item)
if val not in seen:
yield item
seen.add(val)
"""
Explanation: This approach only works when the elements of the sequence are hashable. If you want to eliminate duplicates from a sequence whose elements are not hashable (such as dicts), you need to change the code above slightly, like this:
End of explanation
"""
a = [{"x":1, "y":2}, {"x":1, "y":3}, {"x":1, "y":2}, {"x":2, "y":4}]
list(dedupe(a, key = lambda d: (d["x"], d["y"])))
list(dedupe(a, key = lambda d: d["x"]))
"""
Explanation: Here the key argument specifies a function that converts each sequence element into a hashable type. Here is an example of how to use it:
End of explanation
"""
a = [1, 5, 2, 1, 9, 1, 5, 10]
a
set(a)
"""
Explanation: The second solution also works well if you want to eliminate duplicates based on a single field, attribute, or some larger data structure.
Discussion
If all you want to do is eliminate duplicates, it is often simple enough to just build a set. For example:
End of explanation
"""
|
deroneriksson/incubator-systemml | samples/jupyter-notebooks/Autoencoder.ipynb | apache-2.0 | !pip show systemml
import pandas as pd
from systemml import MLContext, dml
ml = MLContext(sc)
print(ml.info())
sc.version
"""
Explanation: Autoencoder
This notebook demonstrates the invocation of the SystemML autoencoder script, and alternative ways of passing in/out data.
This notebook is supported with SystemML 0.14.0 and above.
End of explanation
"""
FsPath = "/tmp/data/"
inp = FsPath + "Input/"
outp = FsPath + "Output/"
"""
Explanation: SystemML Read/Write data from local file system
End of explanation
"""
import numpy as np
X_pd = pd.DataFrame(np.arange(1,2001, dtype=np.float)).values.reshape(100,20)
# X_pd = pd.DataFrame(range(1, 2001,1),dtype=float).values.reshape(100,20)
script ="""
write(X, $Xfile)
"""
prog = dml(script).input(X=X_pd).input(**{"$Xfile":inp+"X.csv"})
ml.execute(prog)
!ls -l /tmp/data/Input
autoencoderURL = "https://raw.githubusercontent.com/apache/systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
prog = dml(autoencoderURL).input(**{"$X":inp+"X.csv"}) \
.input(**{"$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5 \
, "$W1_out":outp+"W1_out", "$b1_out":outp+"b1_out" \
, "$W2_out":outp+"W2_out", "$b2_out":outp+"b2_out" \
, "$W3_out":outp+"W3_out", "$b3_out":outp+"b3_out" \
, "$W4_out":outp+"W4_out", "$b4_out":outp+"b4_out" \
}).output(*rets)
iter, num_iters_per_epoch, beg, end, o = ml.execute(prog).get(*rets)
print (iter, num_iters_per_epoch, beg, end, o)
!ls -l /tmp/data/Output
"""
Explanation: Generate Data and write out to file.
End of explanation
"""
autoencoderURL = "https://raw.githubusercontent.com/apache/systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
rets2 = ("W1", "b1", "W2", "b2", "W3", "b3", "W4", "b4")
prog = dml(autoencoderURL).input(X=X_pd) \
.input(**{ "$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5}) \
.output(*rets) \
.output(*rets2)
result = ml.execute(prog)
iter, num_iters_per_epoch, beg, end, o = result.get(*rets)
W1, b1, W2, b2, W3, b3, W4, b4 = result.get(*rets2)
print (iter, num_iters_per_epoch, beg, end, o)
"""
Explanation: Alternatively to passing in/out file names, use Python variables.
End of explanation
"""
|
Leguark/GeMpy | Prototype Notebook/Sandstone Project_legacy.ipynb | mit | # Setting extend, grid and compile
# Setting the extent
sandstone = GeoMig.Interpolator(696000,747000,6863000,6950000,-20000, 2000,
range_var = np.float32(110000),
u_grade = 9) # Range used in geomodeller
# Setting resolution of the grid
sandstone.set_resolutions(40,40,80)
sandstone.create_regular_grid_3D()
# Compiling
sandstone.theano_compilation_3D()
"""
Explanation: Sandstone Model
First we make a GeMpy instance with most of the parameters at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on allowing compiled functions to be loaded from file, though in our case it is not a big deal).
General note: so far the rescaling factor is calculated for all series at the same time, whereas GeoModeller computes it individually for every potential field. I have to look more closely at what this parameter exactly means.
End of explanation
"""
sandstone.load_data_csv("foliations", os.pardir+"/input_data/a_Foliations.csv")
sandstone.load_data_csv("interfaces", os.pardir+"/input_data/a_Points.csv")
pn.set_option('display.max_rows', 25)
sandstone.Foliations;
sandstone.Foliations
"""
Explanation: Loading data from geomodeller
So there are 3 series: 2 with a single layer each and 1 with 2 layers. Therefore we need 3 potential fields, so let's begin.
End of explanation
"""
sandstone.set_series({"EarlyGranite_Series":sandstone.formations[-1],
"BIF_Series":(sandstone.formations[0], sandstone.formations[1]),
"SimpleMafic_Series":sandstone.formations[2]},
order = ["EarlyGranite_Series",
"BIF_Series",
"SimpleMafic_Series"])
sandstone.series
"""
Explanation: Defining Series
End of explanation
"""
sandstone.compute_potential_field("EarlyGranite_Series", verbose = 1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 13, figsize=(7,6), contour_lines = 20,
potential_field = True)
sandstone.potential_interfaces;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1
block = block.reshape(40,40,80)
#block = np.swapaxes(block, 0, 1)
plt.imshow(block[:,8,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax),
interpolation = "none")
"""
Explanation: Early granite
End of explanation
"""
sandstone.compute_potential_field("BIF_Series", verbose=1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 12, figsize=(7,6), contour_lines = 100,
potential_field = True)
sandstone.potential_interfaces, sandstone.layers[0].shape;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[(sandstone.Z_x<sandstone.potential_interfaces[0]) * (sandstone.Z_x>sandstone.potential_interfaces[-1])] = 1
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 2
block = block.reshape(40,40,80)
plt.imshow(block[:,13,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax),
interpolation = "none")
"""
Explanation: BIF Series
End of explanation
"""
sandstone.compute_potential_field("SimpleMafic_Series", verbose = 1)
sandstone.plot_potential_field_2D(direction = "y", cell_pos = 15, figsize=(7,6), contour_lines = 20,
potential_field = True)
sandstone.potential_interfaces, sandstone.layers[0].shape;
%matplotlib qt4
block = np.ones_like(sandstone.Z_x)
block[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0
block[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1
block = block.reshape(40,40,80)
#block = np.swapaxes(block, 0, 1)
plt.imshow(block[:,13,:].T, origin = "bottom", aspect = "equal", extent = (sandstone.xmin, sandstone.xmax,
sandstone.zmin, sandstone.zmax))
"""
Explanation: SImple mafic
End of explanation
"""
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 1)
%matplotlib qt4
plot_block = sandstone.block.get_value().reshape(40,40,80)
plt.imshow(plot_block[:,13,:].T, origin = "bottom", aspect = "equal",
extent = (sandstone.xmin, sandstone.xmax, sandstone.zmin, sandstone.zmax), interpolation = "none")
"""
Explanation: Optimizing the export of lithologies
Here I am going to try to return the internal type of the result (in this case DK, I guess) from the Theano interpolation function, so that I can write another Python function to decide which potential field to compute at every grid position.
End of explanation
"""
"""Export model to VTK
Export the geology blocks to VTK for visualisation of the entire 3-D model in an
external VTK viewer, e.g. Paraview.
.. note:: Requires pyevtk, available for free at: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk
**Optional keywords**:
- *vtk_filename* = string : filename of VTK file (default: output_name)
- *data* = np.array : data array to export to VKT (default: entire block model)
"""
vtk_filename = "noddyFunct2"
extent_x = 10
extent_y = 10
extent_z = 10
delx = 0.2
dely = 0.2
delz = 0.2
from pyevtk.hl import gridToVTK
# Coordinates
x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')
y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')
z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')
# self.block = np.swapaxes(self.block, 0, 2)
gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
"""
Explanation: Export vtk
End of explanation
"""
%%timeit
sol = interpolator.geoMigueller(dips,dips_angles,azimuths,polarity, rest, ref)[0]
sandstone.block_export.profile.summary()
"""
Explanation: Performance Analysis
CPU
End of explanation
"""
%%timeit
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 0)
sandstone.block_export.profile.summary()
"""
Explanation: GPU
End of explanation
"""
|
ProfessorKazarinoff/staticsite | content/code/statistics/mean_median_mode_stdev_statistics_module.ipynb | gpl-3.0 | from statistics import mean, median, mode, stdev
test_scores = [60 , 83, 83, 91, 100]
"""
Explanation: In this post, we'll look at a couple of statistics functions in Python. These statistics functions are part of the Python Standard Library in the statistics module. The four functions we'll use in this post are common in statistics:
mean - average value
median - middle value
mode - most often value
standard deviation - spread of values
To access Python's statistics functions, we need to import the functions from the statistics module using the statement:
python
from statistics import mean, median, mode, stdev
After the import statement, the functions mean(), median(), mode() and stdev()(standard deviation) can be used. Since the statistics module is part of the Python Standard Library, no external packages need to be installed.
Let's imagine we have a data set of 5 test scores. The test scores are 60, 83, 83, 91 and 100. These test scores can be stored in a Python list. Python lists are defined with square brackets [ ]. Elements in Python lists are separated with commas.
End of explanation
"""
mean(test_scores)
"""
Explanation: Calculate the mean
To calculate the mean, or average of our test scores, use the statistics module's mean() function.
End of explanation
"""
median(test_scores)
"""
Explanation: Calculate the median
To calculate the median, or middle value of our test scores, use the statistics module's median() function.
If there are an odd number of values, median() returns the middle value. If there are an even number of values median() returns an average of the two middle values.
End of explanation
"""
mode(test_scores)
"""
Explanation: Calculate the mode
To calculate the mode, or most often value of our test scores, use the statistics module's mode() function.
If there is more than one value that occurs most often, mode() raises an error. (Note: this changed in Python 3.8, where mode() instead returns the first mode encountered.)
```python
mode([1, 1, 2, 2, 3])
StatisticsError: no unique mode; found 2 equally common values
```
If there is no single value that occurs most often (all the values are unique or occur the same number of times), mode() also raises an error in Python 3.7 and earlier.
```python
mode([1,2,3])
StatisticsError: no unique mode; found 3 equally common values
```
End of explanation
"""
stdev(test_scores)
"""
Explanation: Calculate the standard deviation
To calculate the standard deviation, or spread of the test scores, use the statistics module's stdev() function. A large standard deviation indicates the data is spread out; a small standard deviation indicates the data is clustered close together.
End of explanation
"""
import statistics
test_scores = [60 , 83, 83, 91, 100]
statistics.mean(test_scores)
statistics.median(test_scores)
statistics.mode(test_scores)
statistics.stdev(test_scores)
"""
Explanation: Alternatively, we can import the whole statistics module at once (all of the functions in the statistics module) using the line:
python
import statistics
Then to use the functions from the module, we need to call the names statistics.mean(), statistics.median(), statistics.mode(), and statistics.stdev(). See below:
End of explanation
"""
|
mathLab/RBniCS | tutorials/13_elliptic_optimal_control/tutorial_elliptic_optimal_control_2_pod.ipynb | lgpl-3.0 | from dolfin import *
from rbnics import *
"""
Explanation: TUTORIAL 13 - Elliptic Optimal Control
Keywords: optimal control, inf-sup condition, POD-Galerkin
1. Introduction
This tutorial addresses a distributed optimal control problem for the Graetz conduction-convection equation on the domain $\Omega$ shown below:
<img src="data/mesh2.png" width="60%"/>
The problem is characterized by 3 parameters. The first parameter $\mu_0$ represents the Péclet number, which characterizes the relative strength of convective versus conductive heat transfer. The second and third parameters, $\mu_1$ and $\mu_2$, control the parameter-dependent observation function $y_d(\boldsymbol{\mu})$ such that:
$$ y_d(\boldsymbol{\mu})=
\begin{cases}
\mu_1 \quad \text{in} \; \hat{\Omega}_1 \
\mu_2 \quad \text{in} \; \hat{\Omega}_2
\end{cases}
$$
The ranges of the three parameters are the following: $$\mu_0 \in [3,20], \mu_1 \in [0.5,1.5], \mu_2 \in [1.5,2.5]$$
The parameter vector $\boldsymbol{\mu}$ is thus given by $$\boldsymbol{\mu}=(\mu_0,\mu_1,\mu_2)$$ on the parameter domain $$\mathbb{P}=[3,20] \times [0.5,1.5] \times [1.5,2.5].$$
In order to obtain a faster approximation of the optimal control problem, we pursue an optimize-then-discretize approach using the POD-Galerkin method.
2. Parametrized Formulation
Let $y(\boldsymbol{\mu})$, the state function, be the temperature field in the domain $\Omega$ and $u(\boldsymbol{\mu})$, the control function, act as a heat source. The observation domain $\hat{\Omega}$ is defined as: $\hat{\Omega}=\hat{\Omega}_1 \cup \hat{\Omega}_2$.
Consider the following optimal control problem:
$$
\underset{y,u}{min} \; J(y,u;\boldsymbol{\mu}) = \frac{1}{2} \left\lVert y(\boldsymbol{\mu})-y_d(\boldsymbol{\mu})\right\rVert ^2_{L^2(\hat{\Omega})}, \
s.t.
\begin{cases}
-\frac{1}{\mu_0}\Delta y(\boldsymbol{\mu}) + x_2(1-x_2)\frac{\partial y(\boldsymbol{\mu})}{\partial x_1} = u(\boldsymbol{\mu}) \quad \text{in} \; \Omega, \
\frac{1}{\mu_0} \nabla y(\boldsymbol{\mu}) \cdot \boldsymbol{n} = 0 \qquad \qquad \qquad \quad \enspace \; \text{on} \; \Gamma_N, \
y(\boldsymbol{\mu})=1 \qquad \qquad \qquad \qquad \qquad \enspace \text{on} \; \Gamma_{D1}, \
y(\boldsymbol{\mu})=2 \qquad \qquad \qquad \qquad \qquad \enspace \text{on} \; \Gamma_{D2}
\end{cases}
$$
The corresponding weak formulation comes from solving for the gradient of the Lagrangian function as detailed in the previous tutorial.
Since this problem is recast in the framework of saddle-point problems, the reduced basis problem must satisfy the inf-sup condition, thus an aggregated space for the state and adjoint variables is defined.
End of explanation
"""
class EllipticOptimalControl(EllipticOptimalControlProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticOptimalControlProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
yup = TrialFunction(V)
(self.y, self.u, self.p) = split(yup)
zvq = TestFunction(V)
(self.z, self.v, self.q) = split(zvq)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.ds = Measure("ds")(subdomain_data=boundaries)
# Regularization coefficient
self.alpha = 0.01
# Store the velocity expression
self.vel = Expression("x[1] * (1 - x[1])", element=self.V.sub(0).ufl_element())
# Customize linear solver parameters
self._linear_solver_parameters.update({
"linear_solver": "mumps"
})
# Return custom problem name
def name(self):
return "EllipticOptimalControl2POD"
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
mu = self.mu
if term in ("a", "a*"):
theta_a0 = 1.0 / mu[0]
theta_a1 = 1.0
return (theta_a0, theta_a1)
elif term in ("c", "c*"):
theta_c0 = 1.0
return (theta_c0,)
elif term == "m":
theta_m0 = 1.0
return (theta_m0,)
elif term == "n":
theta_n0 = self.alpha
return (theta_n0,)
elif term == "f":
theta_f0 = 1.0
return (theta_f0,)
elif term == "g":
theta_g0 = mu[1]
theta_g1 = mu[2]
return (theta_g0, theta_g1)
elif term == "h":
theta_h0 = 0.24 * mu[1]**2 + 0.52 * mu[2]**2
return (theta_h0,)
elif term == "dirichlet_bc_y":
theta_bc0 = 1.
return (theta_bc0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
dx = self.dx
if term == "a":
y = self.y
q = self.q
vel = self.vel
a0 = inner(grad(y), grad(q)) * dx
a1 = vel * y.dx(0) * q * dx
return (a0, a1)
elif term == "a*":
z = self.z
p = self.p
vel = self.vel
as0 = inner(grad(z), grad(p)) * dx
as1 = - vel * p.dx(0) * z * dx
return (as0, as1)
elif term == "c":
u = self.u
q = self.q
c0 = u * q * dx
return (c0,)
elif term == "c*":
v = self.v
p = self.p
cs0 = v * p * dx
return (cs0,)
elif term == "m":
y = self.y
z = self.z
m0 = y * z * dx(1) + y * z * dx(2)
return (m0,)
elif term == "n":
u = self.u
v = self.v
n0 = u * v * dx
return (n0,)
elif term == "f":
q = self.q
f0 = Constant(0.0) * q * dx
return (f0,)
elif term == "g":
z = self.z
g0 = z * dx(1)
g1 = z * dx(2)
return (g0, g1)
elif term == "h":
h0 = 1.0
return (h0,)
elif term == "dirichlet_bc_y":
bc0 = [DirichletBC(self.V.sub(0), Constant(i), self.boundaries, i) for i in (1, 2)]
return (bc0,)
elif term == "dirichlet_bc_p":
bc0 = [DirichletBC(self.V.sub(2), Constant(0.0), self.boundaries, i) for i in (1, 2)]
return (bc0,)
elif term == "inner_product_y":
y = self.y
z = self.z
x0 = inner(grad(y), grad(z)) * dx
return (x0,)
elif term == "inner_product_u":
u = self.u
v = self.v
x0 = u * v * dx
return (x0,)
elif term == "inner_product_p":
p = self.p
q = self.q
x0 = inner(grad(p), grad(q)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
"""
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward.
End of explanation
"""
mesh = Mesh("data/mesh2.xml")
subdomains = MeshFunction("size_t", mesh, "data/mesh2_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/mesh2_facet_region.xml")
"""
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_2.ipynb notebook.
End of explanation
"""
scalar_element = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
element = MixedElement(scalar_element, scalar_element, scalar_element)
V = FunctionSpace(mesh, element, components=["y", "u", "p"])
"""
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
"""
problem = EllipticOptimalControl(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(3.0, 20.0), (0.5, 1.5), (1.5, 2.5)]
problem.set_mu_range(mu_range)
"""
Explanation: 4.3. Allocate an object of the EllipticOptimalControl class
End of explanation
"""
pod_galerkin_method = PODGalerkin(problem)
pod_galerkin_method.set_Nmax(20)
"""
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
"""
lifting_mu = (3.0, 1.0, 2.0)
problem.set_mu(lifting_mu)
pod_galerkin_method.initialize_training_set(100)
reduced_problem = pod_galerkin_method.offline()
"""
Explanation: 4.5. Perform the offline phase
End of explanation
"""
online_mu = (15.0, 0.6, 1.8)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
print("Reduced output for mu =", online_mu, "is", reduced_problem.compute_output())
plot(reduced_solution, reduced_problem=reduced_problem, component="y")
plot(reduced_solution, reduced_problem=reduced_problem, component="u")
plot(reduced_solution, reduced_problem=reduced_problem, component="p")
"""
Explanation: 4.6. Perform an online solve
End of explanation
"""
pod_galerkin_method.initialize_testing_set(100)
pod_galerkin_method.error_analysis()
"""
Explanation: 4.7. Perform an error analysis
End of explanation
"""
pod_galerkin_method.speedup_analysis()
"""
Explanation: 4.8. Perform a speedup analysis
End of explanation
"""
karlstroetmann/Artificial-Intelligence | Python/1 Search/Iterative-Deepening-A-Star-Search.ipynb | gpl-2.0

def search(start, goal, next_states, heuristic):
limit = heuristic(start, goal)
while True:
print(f'Trying to find a solution of length {limit}.')
Path = dl_search([start], goal, next_states, limit, heuristic)
if isinstance(Path, list):
return Path
limit = Path
"""
Explanation: Iterative Deepening A-Star Search
The function search takes four arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature
$\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states
that can be reached from $s$ in one step.
- heuristic is a function that takes two states as arguments.
It returns an estimate of the length of the shortest path between these
states.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
The procedure search uses iterative deepening to compute a solution of the given search problem.
End of explanation
"""
def dl_search(Path, goal, next_states, limit, heuristic):
state = Path[-1]
distance = len(Path) - 1
total = distance + heuristic(state, goal)
if total > limit:
return total
if state == goal:
return Path
smallest = float("Inf") # infinity
for ns in next_states(state):
if ns not in Path:
Solution = dl_search(Path + [ns], goal, next_states, limit, heuristic)
if isinstance(Solution, list):
return Solution
smallest = min(smallest, Solution)
return smallest
"""
Explanation: The function dl_search tries to find a solution to the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle $$
that has a length of at most limit. It uses heuristic to cut short a search that would be unsuccessful. If it cannot find a solution that satisfies the given limit, it returns a number that is a lower bound for the
length of the solution. This lower bound will always be greater than limit and can be used in the next attempt to search for a solution.
End of explanation
"""
%run Sliding-Puzzle.ipynb
%load_ext memory_profiler
%%time
%memit Path = search(start, goal, next_states, manhattan)
print(len(Path)-1)
animation(Path)
%%time
%memit Path = search(start2, goal2, next_states, manhattan)
print(len(Path)-1)
animation(Path)
"""
Explanation: Solving the Sliding Puzzle
End of explanation
"""
tiagofabre/tiagofabre.github.io | _notebooks/Radial basis function.ipynb | mit

import numpy as np
import matplotlib.pyplot as plt
from numpy import sqrt, exp
from numpy.linalg import inv
from functools import reduce

def rbf(inp, out, center):
    def euclidean_norm(x1, x2):
        return sqrt(((x1 - x2)**2).sum(axis=0))
    def gaussian(x, c):
        # exp(-||x - c||^2 / (2 sigma^2)) with sigma^2 = 1, matching the
        # formulas below (the exponent must be negative)
        return exp(-0.5 * euclidean_norm(x, c)**2)
    # design matrix: one Gaussian column per center plus a bias column of ones
    R = np.ones((len(inp), (len(center) + 1)))
    for i, iv in enumerate(inp):
        for j, jv in enumerate(center):
            R[i, j] = gaussian(iv, jv)
    # least-squares weights: a = (R^T R)^{-1} R^T out
    Rt = R.transpose()
    RtR = Rt.dot(R)
    iRtR = inv(RtR)
    oneR = iRtR.dot(Rt)
    a = oneR.dot(out)
    def rbf_interpolation(x):
        basis = np.ones(len(center) + 1)
        for i, iv in enumerate(center):
            basis[i] = gaussian(x, iv)
        y = a * basis
        return reduce((lambda x, y: x + y), y)
    return rbf_interpolation
"""
Explanation: Interpolation with RBF
$$
f(x) = \sum_{p=1}^{P} a_{p}\, R_{p} + b
$$
$$
R_{p} = e^{-\frac{1}{2\sigma^{2}} \lVert X_{i} - X_{p} \rVert^{2}}
$$
$$
\sigma = \frac{P_{max} - P_{min}}{\sqrt{2P}}
$$
$$
\sigma = \frac{4 - 2}{\sqrt{2 \cdot 2}}
$$
$$
\sigma^{2} = 1
$$
$$
C_{1} = 2
$$
$$
C_{2} = 4
$$
$$
[R]^{+} = \left([R]^{T}[R]\right)^{-1}[R]^{T}
$$
$$
\begin{bmatrix} a \end{bmatrix} = [R]^{+} \begin{bmatrix} A \end{bmatrix}
$$
End of explanation
"""
inp = np.array([2, 3, 4])
out = np.array([3, 6, 5])
center = np.array([2, 4])
rbf_instance = rbf(inp, out, center)
input_test = np.linspace(0, 10, 100)
output_test = list(map(rbf_instance, input_test))
plt.plot(input_test, output_test)
plt.plot(inp, out, 'ro')
plt.ylabel('expected vs predicted')
plt.savefig("rbf1.svg")
plt.show()
"""
Explanation:
End of explanation
"""
inp = np.array([2, 3, 4, 5])
out = np.array([3, 1, 5, -2])
center = np.array([2, 3, 4])
rbf_instance = rbf(inp, out, center)
input_test = np.linspace(-5,10,100)
output_test = list(map(rbf_instance, input_test))
# plt.plot(input_test, output_test)
plt.plot(inp, out, 'ro')
plt.ylabel('expected vs predicted')
plt.savefig("interpolate1.svg")
plt.show()
"""
Explanation:
End of explanation
"""
inp = np.array([2, 3, 4, 5])
out = np.array([3, 1, 5, -2])
center = np.array([2, 3, 4])
rbf_instance = rbf(inp, out, center)
input_test = np.linspace(2, 5, 100)
output_test = list(map(rbf_instance, input_test))
plt.plot(input_test, output_test)
plt.plot(inp, out, 'ro')
plt.ylabel('expected vs predicted')
plt.savefig("rbf3.svg")
plt.show()
"""
Explanation:
End of explanation
"""
inp = np.array([np.array([1,1]), np.array([0,1]), np.array([0,0]), np.array([1,0])])
out = np.array([ 0, 1, 0, 1])
center = np.array([ np.array([1,1]), np.array([0,0])])
rbf_instance = rbf(inp, out, center)
inp_test = np.array([np.array([1,1]),
np.array([0,1]),
np.array([0,0]),
np.array([1,0])])
output = list(map(rbf_instance, inp_test))
def colorize(output):
    # blue where the prediction is positive, red otherwise
    return ['blue' if o > 0 else 'red' for o in output]
inp_x = [1, 0, 0, 1]
inp_y = [1, 1, 0, 0]
c = colorize(output)
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(inp_x, inp_y, output, c=c, depthshade=False)
plt.savefig("rbf_xor.svg")
plt.show()
"""
Explanation: XOR input
End of explanation
"""
tensorflow/docs-l10n | site/en-snapshot/lite/guide/signatures.ipynb | apache-2.0

#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
"""
Explanation: Signatures in TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/guide/signatures"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/signatures.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/guide/signatures.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/guide/signatures.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorFlow Lite supports converting TensorFlow model's input/output
specifications to TensorFlow Lite models. The input/output specifications are
called "signatures". Signatures can be specified when building a SavedModel or
creating concrete functions.
Signatures in TensorFlow Lite provide the following features:
They specify inputs and outputs of the converted TensorFlow Lite model by
respecting the TensorFlow model's signatures.
Allow a single TensorFlow Lite model to support multiple entry points.
The signature is composed of three pieces:
Inputs: Map for inputs from input name in the signature to an input tensor.
Outputs: Map for output mapping from output name in signature to an output
tensor.
Signature Key: Name that identifies an entry point of the graph.
Setup
End of explanation
"""
class Model(tf.Module):
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def encode(self, x):
result = tf.strings.as_string(x)
return {
"encoded_result": result
}
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def decode(self, x):
result = tf.strings.to_number(x)
return {
"decoded_result": result
}
"""
Explanation: Example model
Let's say we have two tasks, e.g., encoding and decoding, as a TensorFlow model:
End of explanation
"""
model = Model()
# Save the model
SAVED_MODEL_PATH = 'content/saved_models/coding'
tf.saved_model.save(
model, SAVED_MODEL_PATH,
signatures={
'encode': model.encode.get_concrete_function(),
'decode': model.decode.get_concrete_function()
})
# Convert the saved model using TFLiteConverter
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_PATH)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
# Print the signatures from the converted model
interpreter = tf.lite.Interpreter(model_content=tflite_model)
signatures = interpreter.get_signature_list()
print(signatures)
"""
Explanation: In terms of signatures, the above TensorFlow model can be summarized as follows:
Signature
Key: encode
Inputs: {"x"}
Output: {"encoded_result"}
Signature
Key: decode
Inputs: {"x"}
Output: {"decoded_result"}
Convert a model with Signatures
TensorFlow Lite converter APIs will bring the above signature information into
the converted TensorFlow Lite model.
This conversion functionality is available on all the converter APIs starting
from TensorFlow version 2.7.0. See example usages.
From Saved Model
End of explanation
"""
# Generate a Keras model.
keras_model = tf.keras.Sequential(
[
tf.keras.layers.Dense(2, input_dim=4, activation='relu', name='x'),
tf.keras.layers.Dense(1, activation='relu', name='output'),
]
)
# Convert the keras model using TFLiteConverter.
# Keras model converter API uses the default signature automatically.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()
# Print the signatures from the converted model
interpreter = tf.lite.Interpreter(model_content=tflite_model)
signatures = interpreter.get_signature_list()
print(signatures)
"""
Explanation: From Keras Model
End of explanation
"""
model = Model()
# Convert the concrete functions using TFLiteConverter
converter = tf.lite.TFLiteConverter.from_concrete_functions(
[model.encode.get_concrete_function(),
model.decode.get_concrete_function()], model)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
# Print the signatures from the converted model
interpreter = tf.lite.Interpreter(model_content=tflite_model)
signatures = interpreter.get_signature_list()
print(signatures)
"""
Explanation: From Concrete Functions
End of explanation
"""
# Load the TFLite model in TFLite Interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
# Print the signatures from the converted model
signatures = interpreter.get_signature_list()
print('Signature:', signatures)
# encode and decode are callable with input as arguments.
encode = interpreter.get_signature_runner('encode')
decode = interpreter.get_signature_runner('decode')
# 'encoded' and 'decoded' are dictionaries with all outputs from the inference.
input = tf.constant([1, 2, 3], dtype=tf.float32)
print('Input:', input)
encoded = encode(x=input)
print('Encoded result:', encoded)
decoded = decode(x=encoded['encoded_result'])
print('Decoded result:', decoded)
"""
Explanation: Run Signatures
TensorFlow inference APIs support the signature-based executions:
Accessing the input/output tensors through the names of the inputs and
outputs, specified by the signature.
Running each entry point of the graph separately, identified by the
signature key.
Support for the SavedModel's initialization procedure.
Java, C++ and Python language bindings are currently available. See the examples in the sections below.
Java
```
try (Interpreter interpreter = new Interpreter(file_of_tensorflowlite_model)) {
// Run encoding signature.
Map<String, Object> inputs = new HashMap<>();
inputs.put("x", input);
Map<String, Object> outputs = new HashMap<>();
outputs.put("encoded_result", encoded_result);
interpreter.runSignature(inputs, outputs, "encode");
// Run decoding signature.
Map<String, Object> inputs = new HashMap<>();
inputs.put("x", encoded_result);
Map<String, Object> outputs = new HashMap<>();
outputs.put("decoded_result", decoded_result);
interpreter.runSignature(inputs, outputs, "decode");
}
```
C++
```
SignatureRunner* encode_runner =
interpreter->GetSignatureRunner("encode");
encode_runner->ResizeInputTensor("x", {100});
encode_runner->AllocateTensors();
TfLiteTensor* input_tensor = encode_runner->input_tensor("x");
float* input = input_tensor->data.f;
// Fill input.
encode_runner->Invoke();
const TfLiteTensor* output_tensor = encode_runner->output_tensor(
    "encoded_result");
float* output = output_tensor->data.f;
// Access output.
```
Python
End of explanation
"""
broundy/udacity | nanodegrees/deep_learning_foundations/unit_1/lesson_11_intro_to_tflearn/Sentiment analysis with TFLearn.ipynb | unlicense

import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
End of explanation
"""
from collections import Counter
total_counts = Counter()
for idx, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation).
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {word: i for i, word in enumerate(vocab)}
"""
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
"""
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
"""
Explanation: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' ').
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since not all words are in the vocab dictionary, indexing word2idx directly would raise a KeyError for those words. You can use the .get method of the word2idx dictionary to specify a default return value instead. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
"""
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Input
net = tflearn.input_data([None, 10000])
#Hidden
net = tflearn.fully_connected(net, 400, activation='ReLU')
net = tflearn.fully_connected(net, 50, activation='ReLU')
net = tflearn.fully_connected(net, 12, activation='ReLU')
#Output
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
#Make it go
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
text = "This movie is so good. It was super and the worst"
positive_prob = model.predict([text_to_vector(text.lower())])[0][1]
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
"""
Explanation: Try out your own text!
End of explanation
"""
jmhsi/justin_tinker | data_science/courses/deeplearning1/nbs/lesson4.ipynb | apache-2.0

ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
len(ratings)
"""
Explanation: Set up data
We're working with the movielens data, which contains one rating per row, like this:
End of explanation
"""
movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()
users = ratings.userId.unique()
movies = ratings.movieId.unique()
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
"""
Explanation: Just for display purposes, let's read in the movie names too.
End of explanation
"""
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])
user_min, user_max, movie_min, movie_max = (ratings.userId.min(),
ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
"""
Explanation: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
End of explanation
"""
n_factors = 50
np.random.seed(42)
"""
Explanation: This is the number of latent factors in each embedding.
End of explanation
"""
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
"""
Explanation: Randomly split into training and validation.
End of explanation
"""
g=ratings.groupby('userId')['rating'].count()
topUsers=g.sort_values(ascending=False)[:15]
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
"""
Explanation: Create subset for Excel
We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.
End of explanation
"""
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)
x = merge([u, m], mode='dot')
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,
validation_data=([val.userId, val.movieId], val.rating))
"""
Explanation: Dot product
The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:
End of explanation
"""
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
def create_bias(inp, n_in):
x = Embedding(n_in, 1, input_length=1)(inp)
return Flatten()(x)
ub = create_bias(user_in, n_users)
mb = create_bias(movie_in, n_movies)
x = merge([u, m], mode='dot')
x = Flatten()(x)
x = merge([x, ub], mode='sum')
x = merge([x, mb], mode='sum')
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10,
validation_data=([val.userId, val.movieId], val.rating))
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5,
validation_data=([val.userId, val.movieId], val.rating))
"""
Explanation: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Bias
The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
End of explanation
"""
model.save_weights(model_path+'bias.h5')
model.load_weights(model_path+'bias.h5')
"""
Explanation: This result is quite a bit better than the best benchmarks we could find with a quick Google search - so it looks like a great approach!
End of explanation
"""
model.predict([np.array([3]), np.array([6])])
"""
Explanation: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
End of explanation
"""
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:2000]
topMovies = np.array(topMovies.index)
"""
Explanation: Analyze results
To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.
End of explanation
"""
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)]
"""
Explanation: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
End of explanation
"""
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]
"""
Explanation: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
End of explanation
"""
get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape
"""
Explanation: We can now do the same thing for the embeddings.
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
fac0 = movie_pca[0]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]
"""
Explanation: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.
End of explanation
"""
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]
"""
Explanation: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
End of explanation
"""
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac2 = movie_pca[2]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]
"""
Explanation: The 2nd is 'hollywood blockbuster'.
End of explanation
"""
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
"""
Explanation: The 3rd is 'violent vs happy'.
End of explanation
"""
import sys
stdout, stderr = sys.stdout, sys.stderr # save notebook stdout and stderr
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout, sys.stderr = stdout, stderr # restore notebook stdout and stderr
start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
"""
Explanation: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
End of explanation
"""
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
x = merge([u, m], mode='concat')
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn = Model([user_in, movie_in], x)
nn.compile(Adam(0.001), loss='mse')
nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8,
validation_data=([val.userId, val.movieId], val.rating))
"""
Explanation: Neural net
Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/miroc-es2l/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2l', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2L
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
yunqu/PYNQ | boards/Pynq-Z1/base/notebooks/pmod/pmod_grove_tmp.ipynb | bsd-3-clause | from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
"""
Explanation: Grove Temperature Sensor 1.2
This example shows how to use the Grove Temperature Sensor v1.2. You will also see how to plot a graph using matplotlib. The Grove Temperature sensor produces an analog signal, and requires an ADC.
A Grove Temperature sensor and Pynq Grove Adapter, or Pynq Shield is required. The Grove Temperature Sensor, Pynq Grove Adapter, and Grove I2C ADC are used for this example.
You can read a single value of temperature or read multiple values at regular intervals for a desired duration.
At the end of this notebook, a Python only solution with single-sample read functionality is provided.
1. Load overlay
End of explanation
"""
import math
from pynq.lib.pmod import Grove_TMP
from pynq.lib.pmod import PMOD_GROVE_G4
tmp = Grove_TMP(base.PMODB,PMOD_GROVE_G4)
temperature = tmp.read()
print(float("{0:.2f}".format(temperature)),'degree Celsius')
"""
Explanation: 2. Read single temperature
This example shows on how to get a single temperature sample from the Grove TMP sensor.
The Grove ADC is assumed to be attached to the GR4 connector of the StickIt. The StickIt module is assumed to be plugged in the 1st PMOD labeled JB. The Grove TMP sensor is connected to the other connector of the Grove ADC.
Grove ADC provides a raw sample which is converted into resistance first and then converted into temperature.
End of explanation
"""
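The conversion chain described above can be sketched in a few lines of pure Python. This is only an illustrative sketch: it uses the same B-parameter thermistor equation and constants (12-bit ADC full scale of 4095, B = 4250 for the v1.2 sensor) that the `Python_Grove_TMP` class later in this notebook relies on.

```python
from math import log

def raw_to_celsius(val, b_value=4250):
    """Convert a raw 12-bit Grove ADC sample to degrees Celsius
    (B-parameter thermistor equation; b_value=4250 matches the v1.2 sensor)."""
    R = 4095.0 / val - 1.0                           # normalized thermistor resistance
    return 1.0 / (log(R) / b_value + 1 / 298.15) - 273.15

print(round(raw_to_celsius(2048), 2))   # a mid-scale reading is ~25.01 degrees C
```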
import time
%matplotlib inline
import matplotlib.pyplot as plt
tmp.set_log_interval_ms(100)
tmp.start_log()
# Change input during this time
time.sleep(10)
tmp_log = tmp.get_log()
plt.plot(range(len(tmp_log)), tmp_log, 'ro')
plt.title('Grove Temperature Plot')
min_tmp_log = min(tmp_log)
max_tmp_log = max(tmp_log)
plt.axis([0, len(tmp_log), min_tmp_log, max_tmp_log])
plt.show()
"""
Explanation: 3. Start logging once every 100ms for 10 seconds
Executing the next cell will start logging the temperature sensor values every 100ms, and will run for 10s. You can try touching or holding the temperature sensor to vary the measured temperature.
You can vary the logging interval and the duration by changing the values 100 and 10 in the cell below. The raw samples are stored in the internal memory, and converted into temperature values.
End of explanation
"""
from time import sleep
from math import log
from pynq.lib.pmod import PMOD_GROVE_G3
from pynq.lib.pmod import PMOD_GROVE_G4
from pynq.lib import Pmod_IIC
class Python_Grove_TMP(Pmod_IIC):
"""This class controls the grove temperature sensor.
This class inherits from the Pmod_IIC class.
Attributes
----------
iop : _IOP
The _IOP object returned from the DevMode.
scl_pin : int
The SCL pin number.
sda_pin : int
The SDA pin number.
iic_addr : int
The IIC device address.
"""
def __init__(self, pmod_id, gr_pins, model = 'v1.2'):
"""Return a new instance of a Grove temperature sensor object.
Parameters
----------
pmod_id : int
The PMOD ID (1, 2) corresponding to (PMODA, PMODB).
gr_pins: list
The group pins on Grove Adapter. G3 or G4 is valid.
model : string
Temperature sensor model (can be found on the device).
"""
if gr_pins in [PMOD_GROVE_G3, PMOD_GROVE_G4]:
[scl_pin,sda_pin] = gr_pins
else:
raise ValueError("Valid group numbers are G3 and G4.")
# Each revision has its own B value
if model == 'v1.2':
# v1.2 uses thermistor NCP18WF104F03RC
self.bValue = 4250
elif model == 'v1.1':
# v1.1 uses thermistor NCP18WF104F03RC
self.bValue = 4250
else:
# v1.0 uses thermistor TTC3A103*39H
self.bValue = 3975
super().__init__(pmod_id, scl_pin, sda_pin, 0x50)
# Initialize the Grove ADC
self.send([0x2,0x20]);
def read(self):
"""Read temperature in Celsius from grove temperature sensor.
Parameters
----------
None
Returns
-------
float
Temperature reading in Celsius.
"""
val = self._read_grove_adc()
R = 4095.0/val - 1.0
temp = 1.0/(log(R)/self.bValue + 1/298.15)-273.15
return temp
def _read_grove_adc(self):
self.send([0])
bytes = self.receive(2)
return 2*(((bytes[0] & 0x0f) << 8) | bytes[1])
from pynq import PL
# Flush IOP state
PL.reset()
py_tmp = Python_Grove_TMP(base.PMODB, PMOD_GROVE_G4)
temperature = py_tmp.read()
print(float("{0:.2f}".format(temperature)),'degree Celsius')
"""
Explanation: 4. A Pure Python class to exercise the AXI IIC Controller inheriting from PMOD_IIC
This class is ported from http://wiki.seeedstudio.com/Grove-Temperature_Sensor/
End of explanation
"""
|
cliburn/sta-663-2017 | scratch/Lecture04.ipynb | mit | import numpy as np
import numpy.random as npr
x = np.array([1,2,3])
x2 = np.array([[1,2,3],[4,5,6]])
x.max()
np.max(x)
x2.shape
x2.size
x2.dtype
x2.strides
x3 = np.fromstring('1-2-3', sep='-', dtype='int')
x3.dtype
x3
x3.astype('complex')
%%file foo.txt
123-456-789abc
abc234-23-99x
np.fromregex('foo.txt', r'[a-c]*(\d+)-(\d+)-(\d+)[abcx]*', dtype='int')
np.fromfunction(lambda i, j: i**2 + j**2, (4,5))
np.arange(8,10, 0.2)
np.zeros((3,4))
np.ones((2,3))
np.eye(3)
np.eye(3, k=1)
np.diag([1,2,3,4])
x = np.ones((3,4), dtype='float')
x
np.empty_like(x)
np.linspace(0, 10, 11)
np.logspace(0, 10, num=11, base=2).astype('int')
np.meshgrid(range(3), range(4))
"""
Explanation: Numbers and numpy
Creating numpy arrays
Creating an array from a list
Properties: shape, size, dtype, stride
fromstring, fromregex, fromfunction
arange, empty, zeros, ones
eye, diag
empty_like, zeros_like, ones_like
linspace, logspace
meshgrid, mgrid
Changing order and dtype with astype
End of explanation
"""
npr.choice(['H', 'T'], 10, p=[0.2, 0.8])
npr.choice(10, 20, replace=True)
npr.binomial(10, 0.5, 3)
npr.beta(5,1, 10)
npr.seed(123)
npr.beta(5,1, 10)
npr.seed(123)
npr.beta(5,1, 10)
x = np.arange(10)
x
npr.shuffle(x)
x
npr.permutation(x)
x = np.arange(1,17).reshape(8,2)
x
npr.shuffle(x)
x
"""
Explanation: Creating random arrays
np.choice
Discrete distributions
Continuous distributions
shuffle and permutation
End of explanation
"""
x
x[2:5, :]
x[x % 2 == 0]
x[[0,3,5],:]
x = np.arange(1, 17).reshape((4,4))
x
x[np.ix_([0,2], [1,3])]
x
np.transpose(x)
x.T
x = np.array([1,2,3])
x
x.shape
x @ x
xc = x.reshape((3,1))
xc
xr = x.reshape((1,3))
xr
xc.T @ xc
xc @ xc.T
np.dot(xc, xc.T)
x = np.arange(12).reshape(3,4)
x
np.concatenate([[1,1,1], x])
x0 = np.ones(3)
x0
np.c_[x0, x]
np.vstack((x, x))
y = np.r_[x, x]
y
np.split(y, 2, axis=1)
"""
Explanation: Indexing, reshaping and concatenation
Simple, boolean and fancy indexing
ix_
reshape and transpose
A 1D-array is neither a row nor a column vector!
concatenate, stack, split
r_, c_
End of explanation
"""
a = np.arange(10)
a**2
%timeit a**2
%timeit [i**2 for i in a]
np.exp(a)
a + a
def f(a, b):
return a + 2*b
f(a, a)
a = list(range(10))
f(a, a)
fv = np.vectorize(f)
fv(a,a)
np.kron([1,2,3], [2,3,4])
np.kron(np.diag([1,2]), np.ones((2,2)))
np.diag([1,2])
np.ones((2,2))
"""
Explanation: Vectorization and universal functions (ufuncs)
unary and binary ufuncs
cumsum and cumprod
vectorize
dot, @, kron
End of explanation
"""
a = np.arange(1, 17).reshape((-1,2))
a
np.mean(a, axis=0)
a.mean(axis=1)
a.var(axis=0)
"""
Explanation: Array reductions
Global reductions
Using axis
End of explanation
"""
x = np.zeros((3,4), dtype='int')
x
x.shape
a = np.array(1)
b = np.array([1,2,3,4])
c = np.array([10,20,30])
a.shape, b.shape, c.shape
x + a
x + b
x + c[:, np.newaxis]
x = np.arange(1, 13)
x
x[:, np.newaxis] * x[np.newaxis, :]
x = npr.random((5, 2))
x
x[:,np.newaxis,:].shape
x[np.newaxis, :,:].shape
np.sum((x[:,np.newaxis,:] - x[np.newaxis, :,:])**2, axis=2)
"""
Explanation: Broadcasting rules
Compatible shapes for broadcasting
Using newaxis to enable broadcasting
Examples: multiplication table, normalization and distance matrix
End of explanation
"""
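A compact way to check the compatibility rule without materializing any data is NumPy's own `np.broadcast` object, which aligns shapes from the trailing dimension (each pair of dimensions must be equal, or one of them must be 1). The second call mirrors the distance-matrix computation above, where `x` has shape `(5, 2)`:

```python
import numpy as np

# Each trailing-dimension pair must be equal or contain a 1
a = np.empty((3, 1))
b = np.empty((1, 4))
print(np.broadcast(a, b).shape)   # (3, 4)

# (5, 1, 2) against (1, 5, 2) broadcasts to (5, 5, 2)
c = np.empty((5, 2))
print(np.broadcast(c[:, np.newaxis, :], c[np.newaxis, :, :]).shape)
```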
np.set_printoptions(precision=2, suppress=True)
M = np.sum((x[:,np.newaxis,:] - x[np.newaxis, :,:])**2, axis=2)
np.save('M.npy', M)
np.load('M.npy')
"""
Explanation: Miscellaneous
set_printoptions
I/O with save, load, savetxt, loadtxt
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/prod/n10_dyna_q_with_predictor.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
NUM_THREADS = 1
LOOKBACK = 252*3
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Crop the final days of the test set as a workaround to make dyna work
# (the env only has the market calendar up to a certain time)
data_test_df = data_test_df.iloc[:-DYNA]
total_data_test_df = total_data_test_df.loc[:data_test_df.index[-1]]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
"""
Explanation: In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).
End of explanation
"""
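For orientation, the core of any tabular Q-learner is the one-step value update sketched below. This is a generic, illustrative sketch: the learning rate, discount factor, and state encoding are made-up values, not the internals of `AgentPredictor`.

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.2, 0.9   # learning rate and discount factor (illustrative)

def q_update(Q, s, a, r, s_next):
    """Move Q[s, a] toward the one-step target r + gamma * max_a' Q[s_next, a']."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])   # 0.2 after one update from a zero-initialized table
```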
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
"""
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
"""
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
"""
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
"""
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
agents[0].env = env
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
"""
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
"""
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
"""
Explanation: What are the metrics for "holding the position"?
End of explanation
"""
|
daler/metaseq | doc/source/example_session_2.ipynb | mit | # Enable in-line plots for this IPython Notebook
%matplotlib inline
"""
Explanation: Example 2: Differential expression scatterplots
This example looks more closely at using the results table part of :mod:metaseq, and highlights the flexibility in plotting afforded by :mod:metaseq.
End of explanation
"""
import metaseq
from metaseq import example_filename
from metaseq.results_table import ResultsTable
import pandas
import numpy as np
import matplotlib
import pybedtools
import gffutils
from gffutils.helpers import asinterval
import os
"""
Explanation: Setup
In this section, we'll get the example data for control and knockdown samples, combine the data, and create a :class:ResultsTable object out of them.
If you haven't already done so, run the download_metaseq_example_data.py script, which will download and prepare the data from public sources.
Import what we'll be using:
End of explanation
"""
%%bash
example_dir="metaseq-example"
mkdir -p $example_dir
(cd $example_dir \
&& wget --progress=dot:giga https://raw.githubusercontent.com/daler/metaseq-example-data/master/metaseq-example-data.tar.gz \
&& tar -xzf metaseq-example-data.tar.gz \
&& rm metaseq-example-data.tar.gz)
data_dir = 'metaseq-example/data'
control_filename = os.path.join(data_dir, 'GSM847565_SL2585.table')
knockdown_filename = os.path.join(data_dir, 'GSM847566_SL2592.table')
"""
Explanation: We'll be using tables prepared from Cufflinks GTF output from GEO entries GSM847565 and GSM847566. These represent results from control and ATF3 knockdown experiments in the K562 human cell line. You can read more about the data on GEO; this example will be more about the features of :mod:metaseq than the biology.
Let's get the example files:
End of explanation
"""
# System call; IPython only!
!head -n5 $control_filename
"""
Explanation: Let's take a quick peek to see what these files look like:
End of explanation
"""
# Create two pandas.DataFrames
control = pandas.read_table(control_filename, index_col=0)
knockdown = pandas.read_table(knockdown_filename, index_col=0)
"""
Explanation: As documented at http://cufflinks.cbcb.umd.edu/manual.html#gtfout, the score field indicates relative expression of one isoform compared to other isoforms of the same gene, times 1000. The max score is 1000, and an isoform with this score is considered the major isoform. A score of 800 would mean an isoform's FPKM is 0.8 that of the major isoform.
If you're working with DESeq results, the :mod:metaseq.results_table.DESeqResults class is a nice wrapper around those results with one-step import. But here, we'll construct a pandas.DataFrame first and then create a ResultsTable object out of it.
End of explanation
"""
control.head()
knockdown.head()
"""
Explanation: Here's what the first few entries look like:
End of explanation
"""
# Merge control and knockdown into one DataFrame
df = pandas.merge(control, knockdown, left_index=True, right_index=True, suffixes=('_ct', '_kd'))
df.head()
"""
Explanation: These are two separate objects. It will be easier to work with the data if we first combine the data into a single dataframe. For this we will use standard pandas routines:
End of explanation
"""
# Create a ResultsTable
d = ResultsTable(df)
"""
Explanation: Now we'll create a :class:metaseq.results_table.ResultsTable out of it:
End of explanation
"""
# DataFrame is always accessible via .data
print(type(d), type(d.data))
"""
Explanation: :class:ResultsTable objects are wrappers around pandas.DataFrame objects, and are useful for working with annotations and tablular data. You can always access the DataFrame with the .data attribute:
End of explanation
"""
# Get gene annotations for chr17
gtf = os.path.join(data_dir, 'Homo_sapiens.GRCh37.66_chr17.gtf')
print(open(gtf).readline())
"""
Explanation: The metaseq example data includes a GFF file of the genes on chromosome 17 of the hg19 human genome assembly:
End of explanation
"""
# Get a list of transcript IDs on chr17, and subset the dataframe.
# Here we use pybedtools, but the list of names can come from anywhere
names = list(set([i['transcript_id'] for i in pybedtools.BedTool(gtf)]))
names.sort()
# Make a copy of d
d2 = d.copy()
# And subset
d2.data = d2.data.ix[names]
# How many did we omit?
print "original:", len(d.data)
print "chr17 subset:", len(d2.data)
"""
Explanation: Subsetting data
The data we loaded from the knockdown experiment contains genes from all chromosomes. For the sake of argument, let's say we're only interested in the expression data for these genes on chr17. We can simply use pandas.DataFrame.ix to subset the dataframe by a list of genes. Note that for this to work, the items in the list need to be in the index of the dataframe. Since the data frame index consists of Ensembl transcript IDs, we'll need to create a list of Ensembl transcript IDs on chromosome 17:
End of explanation
"""
# Scatterplot of control vs knockdown FPKM
d2.scatter(
x='fpkm_ct',
y='fpkm_kd');
"""
Explanation: Scatterplots
Let's plot some data. The :meth:ResultsTable.scatter method helps with plotting genome-wide data, and offers lots of flexibility.
For its most basic usage, we need to at least supply x and y. These are names of variables in the dataframe. We'll add more data later, but for now, let's plot the FPKM of control vs knockdown:
End of explanation
"""
# arbitrary gene for demonstration purposes
interesting_gene = np.argmax(d2.fpkm_ct)
interesting_gene
# What happens if you were to click on the points in an interactive session
d2._default_callback(interesting_gene)
"""
Explanation: If you're following along in a terminal with interactive matplotlib plots, you can click on a point to see what gene it is. In this IPython Notebook (and the HTML documentation generated from it), we don't have that interactive ability. We can simulate it here by choosing a gene ID to show, and then manually call the _default_callback like this:
End of explanation
"""
# Adding extra variables gets verbose and cluttered
d2.data['log_fpkm_ct'] = np.log1p(d2.data.fpkm_ct)
"""
Explanation: Clicking around interactively on the points is a great way to get a feel for the data.
OK, it looks like this plot could use log scaling. Recall though that the ResultsTable.scatter method needs to have x and y variables available in the dataframe. So one way to do this would be to do something like this:
End of explanation
"""
# We'll use a better way, so delete it.
del d2.data['log_fpkm_ct']
"""
Explanation: But when playing around with different scales, this quickly pollutes the dataframe with extra columns. Let's delete that column . . .
End of explanation
"""
# Scale x and y axes using log2(x + 1)
def log2p1(x):
return np.log2(x + 1)
d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
#----------------
xfunc=log2p1,
yfunc=log2p1,
);
"""
Explanation: . . . and show another way.
You may find it more streamlined to use the xfunc and/or yfunc arguments. We can use any arbitrary function for these, and the axes labels will reflect that.
Since we're about to start incrementally improving the figure by adding additional keyword arguments (kwargs), the stuff we've already talked about will be at the top, and a comment line like this will mark the start of new stuff to pay attention to:
# ------------- (marks the start of new stuff)
Here's the next version of the scatterplot:
End of explanation
"""
# Manually specify x and y labels
ax = d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
#-----------------------------
# specify xlabel
xlab='Control, log2(FPKM + 1)'
);
# adjust the ylabel afterwards
ax.set_ylabel('Knockdown, log2(FPKM + 1)');
"""
Explanation: Of course, we can specify axes labels either directly in the method call with xlab or ylab, or after the fact using standard matplotlib functionality:
End of explanation
"""
# Crude differential expression detection....
d2.data['foldchange'] = d2.fpkm_kd / d2.fpkm_ct
up = (d2.foldchange > 2).values
dn = (d2.foldchange < 0.5).values
"""
Explanation: Let's highlight some genes. How about those that change expression > 2-fold upon knockdown in red, and < 2-fold in blue? While we're at it, let's add another variable to the dataframe.
End of explanation
"""
# Use the genes_to_highlight argument to show up/downregulated genes
# in different colors
d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
xlab='Control, log2(FPKM + 1)',
ylab='Knockdown, log2(FPKM + 1)',
#-------------------------------
genes_to_highlight=[
(up, dict(color='#da3b3a')),
(dn, dict(color='#00748e'))]
);
"""
Explanation: The way to highlight genes is with the genes_to_highlight argument. OK, OK, it's a little bit of a misnomer here because we're actually working with transcripts. But the idea is the same.
The genes_to_highlight argument takes a list of tuples. Each tuple consists of two items: an index (boolean or integer, doesn't matter) and a style dictionary. This dictionary is passed directly to matplotlib.scatter, so you can use any supported arguments here.
Here's the plot with up/downregulated genes highlighted:
End of explanation
"""
# Add a 1:1 line
d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
xlab='Control, log2(FPKM + 1)',
ylab='Knockdown, log2(FPKM + 1)',
genes_to_highlight=[
(up, dict(color='#da3b3a')),
(dn, dict(color='#00748e'))],
#------------------------------------------
one_to_one=dict(color='r', linestyle='--'),
);
"""
Explanation: We can add a 1-to-1 line for reference:
End of explanation
"""
# Style changes:
# default gray small dots; make changed genes stand out more
d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
xlab='Control, log2(FPKM + 1)',
ylab='Knockdown, log2(FPKM + 1)',
one_to_one=dict(color='k', linestyle=':'),
#------------------------------------------------------
genes_to_highlight=[
(up, dict(color='#da3b3a', alpha=0.8)),
(dn, dict(color='#00748e', alpha=0.8))],
general_kwargs=dict(marker='.', color='0.5', alpha=0.2, s=5),
);
"""
Explanation: Let's change the plot style a bit. The general_kwargs argument determines the base style of all points. By default, it's dict(color='k', alpha=0.2, linewidths=0). Let's change the default style to smaller gray dots, and make the red and blue stand out more by adjusting their alpha:
End of explanation
"""
# Add marginal histograms
d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
xlab='Control, log2(FPKM + 1)',
ylab='Knockdown, log2(FPKM + 1)',
genes_to_highlight=[
(up, dict(color='#da3b3a', alpha=0.8)),
(dn, dict(color='#00748e', alpha=0.8))],
one_to_one=dict(color='k', linestyle=':'),
general_kwargs=dict(marker='.', color='0.5', alpha=0.2, s=5),
#------------------------------------------------------
marginal_histograms=True,
);
"""
Explanation: Marginal histograms
:mod:metaseq also offers support for marginal histograms, which are stacked up on either axis for each set of genes that were plotted. There are lots of ways of configuring this. First, let's turn them on for everything:
End of explanation
"""
# Tweak the marginal histograms:
# 50 bins, don't show unchanged genes, and remove outlines
d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
xlab='Control, log2(FPKM + 1)',
ylab='Knockdown, log2(FPKM + 1)',
one_to_one=dict(color='k', linestyle=':'),
general_kwargs=dict(marker='.', color='0.5', alpha=0.2, s=5),
#------------------------------------------------------
# Go back go disabling them globally...
marginal_histograms=False,
# ...and then turn them back on for each set of genes
# to highlight.
#
    # By the way, genes_to_highlight is indented to better show
    # the structure.
genes_to_highlight=[
(
up,
dict(
color='#da3b3a', alpha=0.8,
marginal_histograms=True,
xhist_kwargs=dict(bins=50, linewidth=0),
yhist_kwargs=dict(bins=50, linewidth=0),
)
),
(
dn,
dict(
color='#00748e', alpha=0.8,
marginal_histograms=True,
xhist_kwargs=dict(bins=50, linewidth=0),
yhist_kwargs=dict(bins=50, linewidth=0),
)
)
],
);
"""
Explanation: As a contrived example to illustrate the flexibility for plotting marginal histograms, lets:
only show histograms for up/down regulated
change the number of bins to 50
remove the edge around each bar
End of explanation
"""
matplotlib.rcParams['font.family'] = "Arial"
ax = d2.scatter(
x='fpkm_ct',
y='fpkm_kd',
xfunc=log2p1,
yfunc=log2p1,
xlab='Control, log2(FPKM + 1)',
ylab='Knockdown, log2(FPKM + 1)',
one_to_one=dict(color='k', linestyle=':'),
marginal_histograms=False,
#------------------------------------------------------
# add the "unchanged" label
general_kwargs=dict(marker='.', color='0.5', alpha=0.2, s=5, label='unchanged'),
genes_to_highlight=[
(
up,
dict(
color='#da3b3a', alpha=0.8,
marginal_histograms=True,
xhist_kwargs=dict(bins=50, linewidth=0),
yhist_kwargs=dict(bins=50, linewidth=0),
# add label
label='upregulated',
)
),
(
dn,
dict(
color='#00748e', alpha=0.8,
marginal_histograms=True,
xhist_kwargs=dict(bins=50, linewidth=0),
yhist_kwargs=dict(bins=50, linewidth=0),
# add label
label='downregulated'
)
)
],
);
# Get handles and labels, and then reverse their order
handles, legend_labels = ax.get_legend_handles_labels()
handles = handles[::-1]
legend_labels = legend_labels[::-1]
# Draw a legend using the flipped handles and labels.
leg = ax.legend(handles,
legend_labels,
# These values may take some tweaking.
# By default they are in axes coordinates, so this means
# the legend is slightly outside the axes.
loc=(1.01, 1.05),
# Various style fixes to default legend.
fontsize=9,
scatterpoints=1,
borderpad=0.1,
handletextpad=0.05,
frameon=False,
title='chr17 transcripts',
);
# Adjust the legend title after it's created
leg.get_title().set_weight('bold')
"""
Explanation: Let's clean up the plot by adding a legend (using label in genes_to_highlight), and adding it outside the axes. While we're at it we'll add a title, too.
There's a trick here -- for each set of genes, the histograms are incrementally added on top of each other but the legend, lists them going down. So we need to flip the order of legend entries to make it nicely match the order of the histograms.
End of explanation
"""
# When `d2.scatter` is called, we get a `marginal` attribute.
top_axes = d2.marginal.top_hists[-1]
top_axes.set_title('Differential expression, ATF3 knockdown');
for ax in d2.marginal.top_hists:
ax.set_ylabel('No.\ntranscripts', rotation=0, ha='right', va='center', size=8)
for ax in d2.marginal.right_hists:
ax.set_xlabel('No.\ntranscripts', rotation=-90, ha='left', va='top', size=8)
fig = ax.figure
fig.savefig('expression-demo.png')
fig
"""
Explanation: We'd also like to add a title. But how to access the top-most axes?
Whenever the scatter method is called, the MarginalHistograms object created as a by-product of the plotting is stored in the marginal attribute. This, in turn, has a top_hists attribute, and we can grab the last one created. While we're at it, let's histograms axes as well.
End of explanation
"""
|
CUBoulder-ASTR2600/lectures | lecture_09_functions_2.ipynb | isc | from math import exp
# Could avoid this by using our constants.py module!
h = 6.626e-34 # MKS
k = 1.38e-23
c = 3.00e8
def intensity(wave, temp, mydefault=0):
wavelength = wave / 1e10
B = 2 * h * c**2 / (wavelength**5 * (exp(h * c / (wavelength * k * temp)) - 1))
return B
"""
Explanation: Today: More on functions
Final project posted.
Poll: When do you want Homework deadline? Friday lunch, to enable discussing HW on Friday? Or end of Friday?
End of explanation
"""
mywave = 5000
intensity(mywave, temp=5800.0)
"""
Explanation: Q. Is the following call sequence acceptable?
End of explanation
"""
print(intensity(5000.0, temp=5800.0))
print(intensity(wave=5000.0, temp=5800.0))
print(intensity(5000.0, 5800.0))
intensity(5800.0, 5000.0)
"""
Explanation: No! The following are all OK!
End of explanation
"""
def testFunc(arg1, arg2, kwarg1=True, kwarg2=4.2):
print(arg1, arg2, kwarg1, kwarg2)
"""
Explanation: Keyword arguments
End of explanation
"""
testFunc(1.0, 2.0)
testFunc(1.0, 2.0, 3.0) # NOTE! I do not HAVE TO use the keyword access!
"""
Explanation: The first two arguments in this case are "positional arguments."
The second two are named "keyword arguments".
Keyword arguments must follow positional arguments in function calls.
Q. What will this do?
End of explanation
"""
from math import pi, exp, sin
tau = 2*pi
# t is positional argument, others are keyword arguments
def f(t, A=1, a=1, omega=tau):
return A * exp(-a * t) * sin(omega * t)
v1 = f(0.01) # Only the time is specified
v1
"""
Explanation: $$f(t)=A \cdot e^{-at} \cdot \sin({\omega \cdot t})$$
End of explanation
"""
v1 = f(A=2, t=0.01)
v1
"""
Explanation: Q. What will this yield?
End of explanation
"""
import math
def exponential(x, epsilon=1e-6):
total = 1.0
i = 1
term = (x**i) / math.factorial(i)
while abs(term) > epsilon:
term = x**i / math.factorial(i)
total += term
i += 1
return total, i
total, i = exponential(2.4, epsilon=1e-2)
print(exponential(2.4, epsilon=1e-4))
print(exponential(2.4))
print(math.exp(2.4))
"""
Explanation: $$e^x = \sum_{i=0}^{\infty} \frac{x^i}{i!}$$
$$e^x \approx \sum_{i=0}^{N} \frac{x^i}{i!}$$
End of explanation
"""
from math import exp
h = 6.626e-34 # MKS
k = 1.38e-23
c = 3.00e8
def intensity(wave, temp):
wavelength = wave / 1e10
inten = 2 * h * c**2 / (wavelength**5 * (exp(h * c / (wavelength * k * temp)) - 1))
return inten
"""
Explanation: Q. What's missing? Have there been ambiguities in the functions we've written?
End of explanation
"""
import math
def exponential(x, epsilon=1e-6):
"""
This function calculates exp(x) to a tolerance
of epsilon.
Parameters
----------
x : exponent
epsilon : tolerance
    returns : exp(x), number of terms included (after 1.0)
Example from interactive shell:
    >>> total, i = exponential(0.15)
    >>> print(i - 1, total)
    5 1.16183422656
"""
total = 1.0
i = 1
term = (x**i) / math.factorial(i)
while abs(term) > epsilon:
term = x**i / math.factorial(i)
total += term
i += 1
return total, i
total, i = exponential(0.15)
i - 1, total
"""
Explanation: Doc-strings (review)
End of explanation
"""
print(exponential.__doc__)
# How handy!!
help(exponential)
from math import exp
h = 6.626e-34 # MKS
k = 1.38e-23
c = 3.00e8
def intensity(wave, temp):
"""
Compute the value of the Planck function.
Parameters
----------
wave : int, float
Wavelength at which to compute Planck function, in Angstroms.
T : int, float
Temperature of blackbody, in Kelvin.
Example
-------
Radiance at 5000 Angstrom, 5800 K blackbody:
>>> radiance = I(wave=5000., T=5800.)
Returns
-------
Radiance of blackbody in W / sr / m^3.
"""
wavelength = wave / 1e10
inten = 2 * h * c**2 / (wavelength**5 * (exp(h * c / (wavelength * k * temp)) - 1))
return inten
"""
Explanation: Q. So, what should doc strings contain in general?
End of explanation
"""
def diff2(f, x, h=1e-6):
"""
Calculates a second derivative.
f: the function (of one variable) to be differentiated
x: value at which the function is evaluated
h: small difference in x
"""
r = (f(x-h) + f(x + h) - 2.0 * f(x)) / float(h**2)
return r
"""
Explanation: Functions as arguments to other functions
$$\frac{d^2f(x)}{dx^2} \approx \frac{f(x-h) + f(x+h) - 2f(x)}{h^2}$$
End of explanation
"""
def g(t):
return t**2
t = 3.0
gPrimePrime = diff2(g, t)
"""
Explanation: We'll work with the following function:
$g(t) = t^2$
End of explanation
"""
print("Second derivative of g=t^4 evaluated at t=%g" % t)
"g(%f)=%.8g" % (t, gPrimePrime)
"""
Explanation: Q. What should this yield?
End of explanation
"""
g = lambda t: t**2
print(g)
g(2.0)
def g(t):
return t**2
# This simply calculates the second derivative of t^2 evaluated at t=56.
test = diff2(lambda t: t**2, 56)
# Recall the first argument to diff2 was the function
test
def cubed(x):
return x**3
cubed(2)
y = lambda x: x**3
y(2)
s = 'hans_gruber'
s.split('_')
# pseudocode -- lambdas also appear inline in method chains, e.g.:
# df.sort().groupby().do_stuff(lambda x: x.split('-')[0])
"""
Explanation: Lambda functions
End of explanation
"""
|
jswoboda/SimISR | ExampleNotebooks/SpecEstimator.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import scipy as sp
import scipy.fftpack as scfft
from SimISR.utilFunctions import makesumrule,MakePulseDataRepLPC,spect2acf,acf2spect,CenteredLagProduct
from SimISR.IonoContainer import IonoContainer,MakeTestIonoclass
from ISRSpectrum.ISRSpectrum import ISRSpectrum
import seaborn as sns
"""
Explanation: Spectrum Estimation
by John Swoboda
This notebook will use portions of the SimISR and ISRSpectrum software modules to create IS spectra and show different ways of estimating them. There will also be examples using complex white Gaussian noise (CWGN) from the random number generator in scipy.
Import Everything
End of explanation
"""
# Processing parameters
spfreq=50e3 # Bandwidth
nspec=256 # length of spectrum
rep1=10000 # number of pulses
L=24. # Length of pulse in standard processings
pulse=sp.ones(int(L)) # Pulse for standard processing
pulse_pergram=sp.ones(nspec) # For periodogram
Nrg=128 # Number of Range gates for data
# Parameters for spectrum
species=['O+','e-']
databloc = sp.array([[1.66e10,1e3],[1.66e10,2.5e3]])
f_c = 440e6
# set up seaborn
sns.set_style("whitegrid")
sns.set_context("notebook")
"""
Explanation: Set up
Set the number of points desired for the averaging and length of spectra. Also set up seaborn formats.
End of explanation
"""
# Make spectrum
ISpec_ion = ISRSpectrum(centerFrequency=f_c, nspec=nspec, sampfreq=spfreq, dFlag=False)
f,cur_spec,rcs = ISpec_ion.getspecsep(databloc,species,rcsflag=True)
specsum = sp.absolute(cur_spec).sum()
cur_spec = len(cur_spec)*cur_spec*rcs/specsum
tau,acf=spect2acf(f,cur_spec)
fig,ax = plt.subplots(1,2,sharey=False, figsize=(8,4),facecolor='w')
rp,imp=ax[0].plot(tau*1e3,acf.real,tau*1e3,acf.imag)
ax[0].legend([rp,imp],['Real','Imag'])
ax[0].set_ylabel('Amplitude')
ax[0].set_title('ACF')
ax[0].set_xlabel(r'$\tau$ in ms')
ax[1].plot(f*1e-3,cur_spec.real)
ax[1].set_ylabel('Amplitude')
ax[1].set_title('Spectrum')
ax[1].set_xlabel(r'f in kHz')
fig.tight_layout()
"""
Explanation: IS Spectra
This will create an example IS spectrum that will be used.
End of explanation
"""
xin =sp.power(2,-.5)*(sp.random.randn(rep1,nspec)+1j*sp.random.randn(rep1,nspec))
Xfft=sp.power(nspec,-.5)*scfft.fftshift(scfft.fft(xin,axis=-1),axes=-1)
Xperiod=sp.power(Xfft.real,2).mean(0) +sp.power(Xfft.imag,2).mean(0)
tau2,acfperiod=spect2acf(f,Xperiod*nspec)
fig2,ax2 = plt.subplots(1,2,sharey=False, figsize=(8,4),facecolor='w')
rp,imp=ax2[0].plot(tau2*1e6,acfperiod.real,tau2*1e6,acfperiod.imag)
ax2[0].legend([rp,imp],['Real','Imag'])
ax2[0].set_ylabel('Amplitude')
ax2[0].set_title('ACF')
ax2[0].set_xlabel(r'$\tau$ in $\mu$s')
ax2[1].plot(f*1e-3,Xperiod.real)
ax2[1].set_ylabel('Amplitude')
ax2[1].set_title('Spectrum')
ax2[1].set_xlabel(r'f in kHz')
ax2[1].set_ylim([0.,1.5])
fig2.tight_layout()
"""
Explanation: White Noise
A periodogram is applied to complex white Gaussian noise. This is here to show that the scipy random number generator outputs uncorrelated samples.
End of explanation
"""
Xdata = MakePulseDataRepLPC(pulse_pergram,cur_spec,30,rep1,numtype = sp.complex128)
Xfftd=sp.power(nspec,-.5)*scfft.fftshift(scfft.fft(Xdata,axis=-1),axes=-1)
Xperiodd=sp.power(Xfftd.real,2).mean(0) +sp.power(Xfftd.imag,2).mean(0)
tau3,acfperiodd=spect2acf(f,Xperiodd*nspec)
fig3,ax3 = plt.subplots(1,2,sharey=False, figsize=(8,4),facecolor='w')
rp,imp=ax3[0].plot(tau3*1e6,acfperiodd.real,tau3*1e6,acfperiodd.imag)
ax3[0].legend([rp,imp],['Real','Imag'])
ax3[0].set_ylabel('Amplitude')
ax3[0].set_title('ACF')
ax3[0].set_xlabel(r'$\tau$ in $\mu$s')
ax3[1].plot(f*1e-3,Xperiodd.real)
ax3[1].set_ylabel('Amplitude')
ax3[1].set_title('Spectrum')
ax3[1].set_xlabel(r'f in kHz')
fig2.tight_layout()
"""
Explanation: Shaped Noise
A set of shaped noise is created from the IS spectrum formed earlier, using linear predictive coding (via the MakePulseDataRepLPC function) to apply the spectrum to the noise. This is similar to the method used by vocoders to encode human speech. To show the effect of the LPC coloring, a periodogram estimator is applied to the noise.
End of explanation
"""
v=1
l=sp.arange(L)
W=-l**2/(L*v) + (L-v)*l/L/v+1
Wp=sp.pad(W,(int(sp.ceil(float(nspec-L)/2)),int(sp.floor(float(nspec-L)/2))),'constant',constant_values=0)
wfft=scfft.fftshift(scfft.fft(W,n=nspec))
fig4,ax4 = plt.subplots(1,2,sharey=False, figsize=(8,4),facecolor='w')
ax4[0].plot(l,W)
ax4[0].set_ylabel('Weighting')
ax4[0].set_title('Weighting')
ax4[0].set_xlabel(r'$l$')
rp,imp,abp=ax4[1].plot(f*1e-3,wfft.real,f*1e-3,wfft.imag,f*1e-3,sp.absolute(wfft))
ax4[1].legend([rp,imp,abp],['Real','Imag','Abs'])
ax4[1].set_ylabel('Amplitude')
ax4[1].set_title('Spectrum')
ax4[1].set_xlabel(r'f in kHz')
fig4.tight_layout()
"""
Explanation: Window Function
When a long pulse is used in ISR, the ACF is estimated rather than estimating the spectrum directly with the periodogram. The estimation is a two-step process: first estimate the lags, then apply a summation rule. This leads to a windowing of the ACF, shown in this plot. The window is also shown in the frequency domain, where it acts as a convolution applied to the original spectrum.
End of explanation
"""
Xdata=sp.zeros((rep1,Nrg),dtype=sp.complex128)
Lint = int(L)
for i in range(int(Nrg-Lint)):
Xdata[:,i:i+Lint] = MakePulseDataRepLPC(pulse,cur_spec,40,rep1,numtype = sp.complex128)+Xdata[:,i:i+Lint]
lagsData=CenteredLagProduct(Xdata,numtype=sp.complex128,pulse =pulse,lagtype='centered')/rep1
ptype='long'
ts = 1.
sumrule=makesumrule(ptype,L,ts,lagtype='centered')
minrg = -sumrule[0].min()
maxrg = Nrg-sumrule[1].max()
Nrng2 = maxrg-minrg
lagsDatasum = sp.zeros((Nrng2,Lint),dtype=sp.complex128)
for irngnew,irng in enumerate(sp.arange(minrg,maxrg)):
for ilag in range(Lint):
lagsDatasum[irngnew,ilag] = lagsData[irng+sumrule[0,ilag]:irng+sumrule[1,ilag]+1,ilag].sum(axis=0)
# divide off the gain from the pulse stacking
lagsDatasum = lagsDatasum/L
"""
Explanation: Full ISR Data Creation and Estimator
The basic data creation and processing behind SimISR, shown here for a single beam. The data are created along a set of samples by adding together uncorrelated data sets. The added pulses are uncorrelated because any spatial correlation of the electron density fluctuations is much smaller than a range gate. After the ACFs are estimated, they are plotted along with the input ACF and the spectrum with the window applied.
End of explanation
"""
dt=tau[1]-tau[0]
f1,spec_all=acf2spect(l*dt,lagsDatasum,n_s=nspec)
acf_single = lagsDatasum[50]
spec_single = spec_all[50]
# Apply weighting and integrations from gain from pulse stacking
acf_act=scfft.ifftshift(acf)[:Lint]*W
feh,spec_act=acf2spect(l*dt,acf_act,n_s=nspec)
fig5,ax5 = plt.subplots(1,2,sharey=False, figsize=(8,4),facecolor='w')
rp,imp,act_acf=ax5[0].plot(l*dt*1e6,acf_single.real,l*dt*1e6,acf_single.imag,l*dt*1e6,acf_act.real)
ax5[0].legend([rp,imp,act_acf],['Real','Imag','Actual'])
ax5[0].set_ylabel('Amplitude')
ax5[0].set_title('ACF')
ax5[0].set_xlabel(r'$\tau$ in $\mu$s')
est1,act_spec=ax5[1].plot(f*1e-3,spec_single.real,f*1e-3,spec_act.real)
ax5[1].legend([est1,act_spec],['Estimated','Actual'])
ax5[1].set_ylabel('Amplitude')
ax5[1].set_title('Spectrum')
ax5[1].set_xlabel(r'f in kHz')
fig5.tight_layout()
"""
Explanation: Plotting and Normalization of Input Spectra
We need to apply the window function to the spectrum.
End of explanation
"""
|
mromanello/SunoikisisDC_NER | participants_notebooks/Sunoikisis - Named Entity Extraction 1b-G3.ipynb | gpl-3.0 | ########
# NLTK #
########
import nltk
from nltk.tag import StanfordNERTagger
########
# CLTK #
########
import cltk
from cltk.tag.ner import tag_ner
##############
# MyCapytain #
##############
import MyCapytain
from MyCapytain.resolvers.cts.api import HttpCTSResolver
from MyCapytain.retrievers.cts5 import CTS
from MyCapytain.common.constants import Mimetypes
#################
# other imports #
#################
import sys
sys.path.append("/opt/nlp/pymodules/")
from idai_journals.nlp import sub_leaves
"""
Explanation: Welcome
This notebook accompanies the Sunokisis Digital Classics common session on Named Entity Extraction, see https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I.
In this notebook we are going to experiment with three different methods for extracting named entities from a Latin text.
Library imports
External modules and libraries can be imported using import statements.
Let's import the Natural Language ToolKit (NLTK), the Classical Language ToolKit (CLTK), MyCapytain and some local libraries that are used in this notebook.
End of explanation
"""
print(nltk.__version__)
print(cltk.__version__)
print(MyCapytain.__version__)
"""
Explanation: And more precisely, we are using the following versions:
End of explanation
"""
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2"
"""
Explanation: Let's grab some text
To start with, we need some text from which we'll try to extract named entities using various methods and libraries.
There are several ways of doing this e.g.:
1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable
2. load a text from one of the Latin corpora available via cltk (cfr. this blog post)
3. or load it from Perseus by leveraging its Canonical Text Services API
Let's go for #3 :)
What's CTS?
CTS URNs stand for Canonical Text Service Uniform Resource Names.
You can think of a CTS URN like a social security number for texts (or parts of texts).
Here are some examples of CTS URNs with different levels of granularity:
- urn:cts:latinLit:phi0448 (Caesar)
- urn:cts:latinLit:phi0448.phi001 (Caesar's De Bello Gallico)
- urn:cts:latinLit:phi0448.phi001.perseus-lat2 DBG Latin edition
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1 DBG Latin edition, book 1
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1 DBG Latin edition, book 1, chapter 1, section 1
How do I find out the CTS URN of a given author or text? The Perseus Catalog is your friend! (cf. e.g. http://catalog.perseus.org/catalog/urn:cts:latinLit:phi0448)
Querying a CTS API
The URN of the Latin edition of Caesar's De Bello Gallico is urn:cts:latinLit:phi0448.phi001.perseus-lat2.
End of explanation
"""
# We set up a resolver which communicates with an API available in Leipzig
resolver = HttpCTSResolver(CTS("http://cts.dh.uni-leipzig.de/api/cts/"))
# We require some metadata information
textMetadata = resolver.getMetadata("urn:cts:latinLit:phi0448.phi001.perseus-lat2")
# Texts in CTS Metadata have one interesting property : its citation scheme.
# Citation are embedded objects that carries information about how a text can be quoted, what depth it has
print([citation.name for citation in textMetadata.citation])
"""
Explanation: With this information, we can query a CTS API and get some information about this text.
For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
End of explanation
"""
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2:1"
"""
Explanation: But we can also query the same API and get back the text of a specific text section, for example the entire book 1.
To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
End of explanation
"""
passage = resolver.getTextualNode(my_passage)
"""
Explanation: So we retrieve the first book of the De Bello Gallico by passing its CTS URN (that we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytains:
End of explanation
"""
de_bello_gallico_book1 = passage.export(Mimetypes.PLAINTEXT)
"""
Explanation: At this point the passage is available in various formats: text, but also TEI XML, etc.
Thus, we need to specify that we are interested in getting the text only:
End of explanation
"""
print(de_bello_gallico_book1)
"""
Explanation: Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it:
End of explanation
"""
from IPython.display import IFrame
IFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-lat2/1', width=1000, height=350)
"""
Explanation: The text that we have just fetched by using a programming interface (API) can also be viewed in the browser.
Or even imported as an iframe into this notebook!
End of explanation
"""
len(de_bello_gallico_book1.split(" "))
"""
Explanation: Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I:
End of explanation
"""
"T".istitle()
"t".istitle()
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(de_bello_gallico_book1.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
else:
tagged_tokens.append((token, "O"))
"""
Explanation: Very simple baseline
Now let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods.
Baseline method:
- cycle through each token of the text
- if the token starts with a capital letter it's a named entity (only one type, i.e. Entity)
End of explanation
"""
tagged_tokens[:50]
"""
Explanation: Let's have a look at the first 50 tokens that we just tagged:
End of explanation
"""
def extract_baseline(input_text):
"""
:param input_text: the text to tag (string)
:return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
"""
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(input_text.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
else:
tagged_tokens.append((token, "O"))
return tagged_tokens
"""
Explanation: For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it:
End of explanation
"""
tagged_tokens_baseline = extract_baseline(de_bello_gallico_book1)
tagged_tokens_baseline[-50:]
"""
Explanation: And now we can call it like this:
End of explanation
"""
def extract_baseline(input_text):
"""
:param input_text: the text to tag (string)
:return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
"""
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(input_text.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
context = input_text.split(" ")[n-5:n+5]
print("Found entity \"%s\" in context \"%s\""%(token, " ".join(context)))
else:
tagged_tokens.append((token, "O"))
return tagged_tokens
tagged_text_baseline = extract_baseline(de_bello_gallico_book1)
tagged_text_baseline[:50]
"""
Explanation: We can modify slightly our function so that it prints the snippet of text where an entity is found:
End of explanation
"""
%%time
# `tag_ner` is provided by CLTK (import path as of CLTK 0.1.x):
from cltk.tag.ner import tag_ner
tagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)
"""
Explanation: NER with CLTK
The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation).
The current implementation (as of version 0.1.47) uses a lookup-based method.
For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities:
- list of Latin proper nouns: https://github.com/cltk/latin_proper_names_cltk
- list of Greek proper nouns: https://github.com/cltk/greek_proper_names_cltk
Let's run CLTK's tagger (it takes a moment):
End of explanation
"""
tagged_text_cltk[:10]
"""
Explanation: Let's have a look at the output, showing only the first 10 tokens (by using the list slicing notation):
End of explanation
"""
def reshape_cltk_output(tagged_tokens):
reshaped_output = []
for tagged_token in tagged_tokens:
if(len(tagged_token)==1):
reshaped_output.append((tagged_token[0], "O"))
else:
reshaped_output.append((tagged_token[0], tagged_token[1]))
return reshaped_output
"""
Explanation: The output looks slightly different from that of our baseline function (the size of the tuples in the list varies).
But we can write a function, which we call reshape_cltk_output, to fix this:
End of explanation
"""
tagged_text_cltk = reshape_cltk_output(tagged_text_cltk)
"""
Explanation: We apply this function to CLTK's output:
End of explanation
"""
tagged_text_cltk[:20]
"""
Explanation: And the resulting output looks now ok:
End of explanation
"""
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
"""
Explanation: Now let's compare the two lists of tagged tokens by using a python function called zip, which allows us to read multiple lists simultaneously:
End of explanation
"""
tagged_text_cltk = reshape_cltk_output(tag_ner('latin', input_text=de_bello_gallico_book1.split(" ")))
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
"""
Explanation: But, as you can see, the two lists are not aligned.
This is due to how the CLTK function tokenises the text. The comma after "tres" becomes a token on its own, whereas when we tokenise by white space the comma is attached to "tres" (i.e. "tres,").
A solution to this is to pass the text to the tag_ner function already tokenised by whitespace ourselves.
End of explanation
"""
# `StanfordNERTagger` is provided by NLTK (it also requires the Stanford NER jar):
from nltk.tag import StanfordNERTagger

stanford_model_italian = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/ner-ita-nogpe-noiob_gaz_wikipedia_sloppy.ser.gz"
ner_tagger = StanfordNERTagger(stanford_model_italian)
tagged_text_nltk = ner_tagger.tag(de_bello_gallico_book1.split(" "))
"""
Explanation: NER with NLTK
End of explanation
"""
tagged_text_nltk[:20]
"""
Explanation: Let's have a look at the output
End of explanation
"""
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]))
for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:20], tagged_text_cltk[:20], tagged_text_nltk[:20]):
print("Baseline: %s\nCLTK: %s\nNLTK: %s\n"%(baseline_out, cltk_out, nltk_out))
"""
Explanation: Wrap up
At this point we can "compare" the output of the three different methods we used, again by using the zip function.
End of explanation
"""
stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
"""
Explanation: Exercise
Extract the named entities from the English translation of the De Bello Gallico book 1.
The CTS URN for this translation is urn:cts:latinLit:phi0448.phi001.perseus-eng2:1.
Modify the code above to use the English model of the Stanford tagger instead of the Italian one.
Hint:
End of explanation
"""
|
autism-research-centre/Autism-Gradients | 6b_networks-inside-gradients.ipynb | gpl-3.0 | % matplotlib inline
from __future__ import print_function
import nibabel as nib
from nilearn.image import resample_img
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import os.path
# The following are a progress bar, these are not strictly necessary:
from ipywidgets import FloatProgress
from IPython.display import display
"""
Explanation: 6b Calculate binned gradient-network overlap
This file works out the average z-score inside a gradient percentile area
written by Jan Freyberg for the Brainhack 2017 project
This should reproduce this analysis
End of explanation
"""
percentiles = range(10)
# unthresholded z-maps from neurosynth:
zmaps = [os.path.join(os.getcwd(), 'ROIs_Mask', fname) for fname in os.listdir(os.path.join(os.getcwd(), 'ROIs_Mask'))
if 'z.nii' in fname]
# individual, binned gradient maps, in a list of lists:
gradmaps = [[os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile), fname)
for fname in os.listdir(os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile)))]
for percentile in percentiles]
# a brain mask file:
brainmaskfile = os.path.join(os.getcwd(), 'ROIs_Mask', 'rbgmask.nii')
"""
Explanation: Define the variables for this analysis.
1. how many percentiles the data is divided into
2. where the Z-Maps (from neurosynth) lie
3. where the binned gradient maps lie
4. where a mask of the brain lies (used to restrict the averaging to brain voxels).
End of explanation
"""
def zinsidemask(zmap, mask):
    # Average z-score over voxels that fall inside both the gradient-bin
    # mask and the brain mask.
    zaverage = zmap.dataobj[
        np.logical_and(np.not_equal(mask.dataobj, 0), brainmask.dataobj > 0)
    ].mean()
    return zaverage
"""
Explanation: Next define a function to take the average of an image inside a mask and return it:
End of explanation
"""
zaverages = np.zeros([len(zmaps), len(gradmaps), len(gradmaps[0])])
# load first gradmap just for resampling
gradmap = nib.load(gradmaps[0][0])
# Load a brainmask
brainmask = nib.load(brainmaskfile)
brainmask = resample_img(brainmask, target_affine=gradmap.affine, target_shape=gradmap.shape)
# Initialise a progress bar:
progbar = FloatProgress(min=0, max=zaverages.size)
display(progbar)
# loop through the network files:
for i1, zmapfile in enumerate(zmaps):
# load the neurosynth activation file:
zmap = nib.load(zmapfile)
# make sure the images are in the same space:
zmap = resample_img(zmap,
target_affine=gradmap.affine,
target_shape=gradmap.shape)
# loop through the bins:
for i2, percentile in enumerate(percentiles):
# loop through the subjects:
for i3, gradmapfile in enumerate(gradmaps[percentile]):
gradmap = nib.load(gradmapfile) # load image
zaverages[i1, i2, i3] = zinsidemask(zmap, gradmap) # calculate av. z-score
progbar.value += 1 # update progressbar (only works in jupyter notebooks)
"""
Explanation: This next cell will step through each combination of gradient, subject and network file to calculate the average z-score inside the mask defined by the gradient percentile. This will take a long time to run!
End of explanation
"""
# np.save(os.path.join(os.getcwd(), 'data', 'average-z-scores'), zaverages)
zaverages = np.load(os.path.join(os.getcwd(), 'data', 'average-z-scores.npy'))
"""
Explanation: To save time next time, we'll save the result of this to file:
End of explanation
"""
df_phen = pd.read_csv('data' + os.sep + 'SelectedSubjects.csv')
diagnosis = df_phen.loc[:, 'DX_GROUP']
fileids = df_phen.loc[:, 'FILE_ID']
groupvec = np.zeros(len(gradmaps[0]))
for filenum, filename in enumerate(gradmaps[0]):
fileid = os.path.split(filename)[-1][5:-22]
    groupvec[filenum] = diagnosis[fileids.str.contains(fileid)].values[0]
print(groupvec.shape)
"""
Explanation: Extract a list of which group contains which participants.
End of explanation
"""
fig = plt.figure(figsize=(15, 8))
grouplabels = ['Control group', 'Autism group']
for group in np.unique(groupvec):
ylabels = [os.path.split(fname)[-1][0:-23].replace('_', ' ') for fname in zmaps]
# remove duplicates!
includenetworks = []
seen = set()
for string in ylabels:
includenetworks.append(string not in seen)
seen.add(string)
ylabels = [string for index, string in enumerate(ylabels) if includenetworks[index]]
tmp_zaverages = zaverages[includenetworks, :, :]
tmp_zaverages = tmp_zaverages[:, :, groupvec==group]
tmp_zaverages = tmp_zaverages[np.argsort(np.argmax(tmp_zaverages.mean(axis=2), axis=1)), :, :]
# make the figure
    plt.subplot(1, 2, int(group))
cax = plt.imshow(tmp_zaverages.mean(axis=2),
cmap='bwr', interpolation='nearest',
vmin=zaverages.mean(axis=2).min(),
vmax=zaverages.mean(axis=2).max())
ax = plt.gca()
plt.title(grouplabels[int(group-1)])
plt.xlabel('Percentile of principle gradient')
ax.set_xticks(np.arange(0, len(percentiles), 3))
ax.set_xticklabels(['100-90', '70-60', '40-30', '10-0'])
ax.set_yticks(np.arange(0, len(seen), 1))
ax.set_yticklabels(ylabels)
ax.set_yticks(np.arange(-0.5, len(seen), 1), minor=True)
ax.set_xticks(np.arange(-0.5, 10, 1), minor=True)
ax.grid(which='minor', color='w', linewidth=2)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.01, 0.7])
fig.colorbar(cax, cax=cbar_ax, label='Average Z-Score')
#fig.colorbar(cax, cmap='bwr', orientation='horizontal')
plt.savefig('./figures/z-scores-inside-gradient-bins.png')
"""
Explanation: Make a plot of the z-scores inside each parcel for each gradient, split by group!
End of explanation
"""
|
ssanderson/notebooks | quanto/Quantopian_Meetup_Talk_IPython_Notebook.ipynb | apache-2.0 | # This is a python execution cell.
# Anything you could do in a python shell or script, you can do here.
# To execute a cell, type CTRL-Enter.
# You can also type SHIFT-Enter to execute and move to the next cell,
# and you can type OPTION-Enter to execute and insert a new cell below.
def foo():
print "IPython Notebook is Awesome!"
foo()
# The last expression in a cell is always displayed as the cell's output when
# it's executed.
[1,2,3,4]
"""
Explanation: IPython Notebook
End of explanation
"""
import pandas
class Point(object):
"""
A class-level docstring.
"""
def __init__(self, x, y=3):
"""
Constructor docstring. SHIFT+TAB will show you this first line.
SHIFT + two TABs will show you the entire docstring.
"""
self.x, self.y = x, y
long_variable_name = Point(3,4)
"""
Explanation: This is a level 1 header cell.
This is a level 2 header cell.
This is a level 3 header cell.
This is a level 4 header cell.
This is a level 5 header cell.
This is a level 6 header cell.
This is a Markdown cell.
Markdown is a text-to-HTML conversion tool for web writers. Markdown allows
you to write using an easy-to-read, easy-to-write plain text format, then
convert it to structurally valid XHTML (or HTML).
Markdown supports bulleted lists.
You can nest lists as deep as you want.
List elements don't have to be text either.
Reasons to like Markdown:
It also supports numbered lists.
It has support for code() formatting.
You can embed arbitrary HTML
<table>
<tr><td>Such as,</td><td>for example,</td></tr>
<tr><td>a table.</td><td></td></tr>
</table>
It even has support for code blocks:
class Monad m where
(>>=) :: m a -> (a -> m b) -> m b
(>>) :: m a -> m b -> m b
return :: a -> m a
fail :: String -> m a
m >> k = m >>= \_ -> k
Notebook Magics:
Most of the magics that work in the IPython shell also work in the notebook. Additionally, there are some magics that only work in the notebook.
Autocomplete and Inline Documentation
Pressing the TAB key part of the way through a variable name autocompletes that name.
Autocomplete also works on attributes (e.g. pandas.DataF<TAB> -> pandas.DataFrame).
You can bring up documentation for a function or class inline with SHIFT+TAB
Execute this cell before trying the autocomplete examples in the next cell.
End of explanation
"""
pan # Pressing TAB here will autocomplete pandas.
pandas. # Pressing TAB here will show you the top-level attributes of pandas.
pandas.D # Pressing TAB here will show you DataFrame, DatetimeIndex, and DateOffset.
pandas.DataFrame # Pressing TAB here will show you methods and attributes of DataFrame.
long_variable_name # Pressing TAB here will autocomplete long_variable_name
# Hold SHIFT and press TAB with your cursor in
x = Point(1,2) # the parentheses to see info on how to make a Point object.
"""
Explanation: Try it out!
End of explanation
"""
pandas.DataFrame?
pandas.DataFrame.plot??
pandas.DataFrame.plot  # place the cursor here and press SHIFT+TAB for inline docs
"""
Explanation: Documentation:
Typing <expression>? shows the function signature and documentation for that expression.
Typing <expression>?? takes you to the source code for the expression.
You can also use the pinfo and pinfo2 magics to get the same info.
Notebook Only: Press SHIFT+TAB while hovering over an object to open in-line documentation for that callable.
End of explanation
"""
%%javascript
alert('foo')
"""
Explanation: Cell Magics
In addition to all the magics we saw above, there are additional magics that operate at the cell level. Many of these are focused around interoperation with other languages.
Javascript
End of explanation
"""
%load_ext rpy2.ipython
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10,5), columns=['A','B','C','D','E'])
df
# Push our DataFrame into R.
%Rpush df
%%R
# Despite also being valid python, this is actually R code!
col_A = df['A']
plot(col_A)
# We can also pull values back out of R!
%Rpull col_A
col_A
"""
Explanation: R
Some cell magics are provided by extensions. Here we load rpy2's cell magic for interacting with R.
End of explanation
"""
import IPython.display
dir(IPython.display)[:18]
"""
Explanation: Builtin Rich Display Formats
IPython supports a wide array of rich display formats, including:
* LaTeX
* Markdown
* HTML
* SVG
* PNG
...and more
End of explanation
"""
from IPython.display import Math
Math(r'w_A = \frac{\sigma_B - Cov(r_A, r_B)}{\sigma_B^2 + \sigma_A^2 - 2 Cov(r_A, r_B)}')
"""
Explanation: LaTeX
End of explanation
"""
from IPython.display import HTML
HTML('''\
To learn more about IPython's rich display capabilities, click
<a href="http://ipython.org/ipython-doc/dev/config/integrating.html">here</a>.
''')
"""
Explanation: HTML
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo("https://www.youtube.com/watch?v=B_XiSozs-SE")
"""
Explanation: YouTube Video
End of explanation
"""
class Table(object):
"""
A simple table represented as a list of lists.
"""
def __init__(self, lists):
self.lists = lists
def make_row(self, l):
columns = ''.join('<td>{value}</td>'.format(value=value) for value in l)
return '<tr>{columns}</tr>'.format(columns=columns)
def _repr_html_(self):
rows = ''.join(self.make_row(l) for l in self.lists)
return "<table>{rows}</table>".format(rows=rows)
Table(
[
[1,2,3],
[4,5,6]
]
)
"""
Explanation: Customizing Object Display
If a class implements one of many _repr_ methods, IPython will use that method to display the object.
End of explanation
"""
|
scotthuang1989/Python-3-Module-of-the-Week | algorithm/contextlib.ipynb | apache-2.0 | with open('tmp/pymotw.txt', 'wt') as f:
f.write('contents go here')
"""
Explanation: The contextlib module contains utilities for working with context managers and the with statement.
Context Manager API
A context manager is responsible for a resource within a code block, possibly creating it when the block is entered and then cleaning it up after the block is exited. For example, files support the context manager API to make it easy to ensure they are closed after all reading or writing is done.
End of explanation
"""
class Context:
def __init__(self):
print('__init__()')
def __enter__(self):
print('__enter__()')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print('__exit__()')
with Context():
print('Doing work in the context')
"""
Explanation: A context manager is enabled by the with statement, and the API involves two methods. The __enter__() method is run when execution flow enters the code block inside the with. It returns an object to be used within the context. When execution flow leaves the with block, the __exit__() method of the context manager is called to clean up any resources being used.
End of explanation
"""
class WithinContext:
def __init__(self, context):
print('WithinContext.__init__({})'.format(context))
def do_something(self):
print('WithinContext.do_something()')
def __del__(self):
print('WithinContext.__del__')
class Context:
def __init__(self):
print('Context.__init__()')
def __enter__(self):
print('Context.__enter__()')
return WithinContext(self)
def __exit__(self, exc_type, exc_val, exc_tb):
print('Context.__exit__()')
with Context() as c:
c.do_something()
"""
Explanation: The __enter__() method can return any object to be associated with a name specified in the as clause of the with statement. In this example, the Context returns an object that uses the open context.
End of explanation
"""
class Context:
def __init__(self, handle_error):
print('__init__({})'.format(handle_error))
self.handle_error = handle_error
def __enter__(self):
print('__enter__()')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print('__exit__()')
print(' exc_type =', exc_type)
print(' exc_val =', exc_val)
print(' exc_tb =', exc_tb)
return self.handle_error
with Context(True):
raise RuntimeError('error message handled')
print()
with Context(False):
raise RuntimeError('error message propagated')
"""
Explanation: The value associated with the variable c is the object returned by __enter__(), which is not necessarily the Context instance created in the with statement.
The __exit__() method receives arguments containing details of any exception raised in the with block.
End of explanation
"""
import contextlib
class Context(contextlib.ContextDecorator):
def __init__(self, how_used):
self.how_used = how_used
print('__init__({})'.format(how_used))
def __enter__(self):
print('__enter__({})'.format(self.how_used))
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print('__exit__({})'.format(self.how_used))
@Context('as decorator')
def func(message):
print(message)
print()
with Context('as context manager'):
print('Doing work in the context')
print()
func('Doing work in the wrapped function')
"""
Explanation: If the context manager can handle the exception, __exit__() should return a true value to indicate that the exception does not need to be propagated. Returning false causes the exception to be re-raised after __exit__() returns.
Context Managers as Function Decorators
The class ContextDecorator adds support to regular context manager classes to let them be used as function decorators as well as context managers.
End of explanation
"""
import contextlib
@contextlib.contextmanager
def make_context():
print(' entering')
try:
yield {}
except RuntimeError as err:
print(' ERROR:', err)
finally:
print(' exiting')
print('Normal:')
with make_context() as value:
print(' inside with statement:', value)
print('\nHandled error:')
with make_context() as value:
raise RuntimeError('showing example of handling an error')
print('\nUnhandled error:')
with make_context() as value:
raise ValueError('this exception is not handled')
"""
Explanation: One difference with using the context manager as a decorator is that the value returned by __enter__() is not available inside the function being decorated, unlike when using with and as. Arguments passed to the decorated function are available in the usual way.
From Generator to Context Manager
Creating context managers the traditional way, by writing a class with __enter__() and __exit__() methods, is not difficult. But sometimes writing everything out fully is extra overhead for a trivial bit of context. In those sorts of situations, use the contextmanager() decorator to convert a generator function into a context manager.
End of explanation
"""
import contextlib
@contextlib.contextmanager
def make_context():
print(' entering')
try:
# Yield control, but not a value, because any value
# yielded is not available when the context manager
# is used as a decorator.
yield
except RuntimeError as err:
print(' ERROR:', err)
finally:
print(' exiting')
@make_context()
def normal():
print(' inside with statement')
@make_context()
def throw_error(err):
raise err
print('Normal:')
normal()
print('\nHandled error:')
throw_error(RuntimeError('showing example of handling an error'))
print('\nUnhandled error:')
throw_error(ValueError('this exception is not handled'))
"""
Explanation: The generator should initialize the context, yield exactly one time, then clean up the context. The value yielded, if any, is bound to the variable in the as clause of the with statement. Exceptions from within the with block are re-raised inside the generator, so they can be handled there.
The context manager returned by contextmanager() is derived from ContextDecorator, so it also works as a function decorator.
End of explanation
"""
import contextlib
class Door:
def __init__(self):
print(' __init__()')
self.status = 'open'
def close(self):
print(' close()')
self.status = 'closed'
print('Normal Example:')
with contextlib.closing(Door()) as door:
print(' inside with statement: {}'.format(door.status))
print(' outside with statement: {}'.format(door.status))
print('\nError handling example:')
try:
with contextlib.closing(Door()) as door:
print(' raising from inside with statement')
raise RuntimeError('error message')
except Exception as err:
print(' Had an error:', err)
"""
Explanation: Closing Open Handles
The file class supports the context manager API directly, but some other objects that represent open handles do not. The example given in the standard library documentation for contextlib is the object returned from urllib.urlopen(). There are other legacy classes that use a close() method but do not support the context manager API. To ensure that a handle is closed, use closing() to create a context manager for it.
End of explanation
"""
import contextlib
class NonFatalError(Exception):
pass
def non_idempotent_operation():
raise NonFatalError(
'The operation failed because of existing state'
)
try:
print('trying non-idempotent operation')
non_idempotent_operation()
print('succeeded!')
except NonFatalError:
pass
print('done')
"""
Explanation: Ignoring Exceptions
It is frequently useful to ignore exceptions raised by libraries, because the error indicates that the desired state has already been achieved, or it can otherwise be ignored. The most common way to ignore exceptions is with a try:except statement with only a pass statement in the except block.
End of explanation
"""
import contextlib
class NonFatalError(Exception):
pass
def non_idempotent_operation():
raise NonFatalError(
'The operation failed because of existing state'
)
with contextlib.suppress(NonFatalError):
print('trying non-idempotent operation')
non_idempotent_operation()
print('succeeded!')
print('done')
"""
Explanation: The try:except form can be replaced with contextlib.suppress() to more explicitly suppress a class of exceptions happening anywhere in the with block.
End of explanation
"""
from contextlib import redirect_stdout, redirect_stderr
import io
import sys
def misbehaving_function(a):
sys.stdout.write('(stdout) A: {!r}\n'.format(a))
sys.stderr.write('(stderr) A: {!r}\n'.format(a))
capture = io.StringIO()
with redirect_stdout(capture), redirect_stderr(capture):
misbehaving_function(5)
print(capture.getvalue())
"""
Explanation: Redirecting Output Streams
Poorly designed library code may write directly to sys.stdout or sys.stderr, without providing arguments to configure different output destinations. The redirect_stdout() and redirect_stderr() context managers can be used to capture output from functions like this, for which the source cannot be changed to accept a new output argument.
End of explanation
"""
import contextlib
@contextlib.contextmanager
def make_context(i):
print('{} entering'.format(i))
yield {}
print('{} exiting'.format(i))
def variable_stack(context_managers, msg):
    with contextlib.ExitStack() as stack:
        for cm in context_managers:
            stack.enter_context(cm)
        print(msg)

variable_stack([make_context(i) for i in range(2)], 'inside context')
"""
Explanation: Dynamic Context Manager Stacks
Most context managers operate on one object at a time, such as a single file or database handle. In these cases, the object is known in advance and the code using the context manager can be built around that one object. In other cases, a program may need to create an unknown number of objects in a context, while wanting all of them to be cleaned up when control flow exits the context. ExitStack was created to handle these more dynamic cases.
An ExitStack instance maintains a stack data structure of cleanup callbacks. The callbacks are populated explicitly within the context, and any registered callbacks are called in the reverse order when control flow exits the context. The result is like having multiple nested with statements, except they are established dynamically.
Stacking Context Managers
There are several ways to populate the ExitStack. This example uses enter_context() to add a new context manager to the stack.
End of explanation
"""
import contextlib
class Tracker:
"Base class for noisy context managers."
def __init__(self, i):
self.i = i
def msg(self, s):
print(' {}({}): {}'.format(
self.__class__.__name__, self.i, s))
def __enter__(self):
self.msg('entering')
class HandleError(Tracker):
"If an exception is received, treat it as handled."
def __exit__(self, *exc_details):
received_exc = exc_details[1] is not None
if received_exc:
self.msg('handling exception {!r}'.format(
exc_details[1]))
self.msg('exiting {}'.format(received_exc))
# Return Boolean value indicating whether the exception
# was handled.
return received_exc
class PassError(Tracker):
"If an exception is received, propagate it."
def __exit__(self, *exc_details):
received_exc = exc_details[1] is not None
if received_exc:
self.msg('passing exception {!r}'.format(
exc_details[1]))
self.msg('exiting')
# Return False, indicating any exception was not handled.
return False
class ErrorOnExit(Tracker):
"Cause an exception."
def __exit__(self, *exc_details):
self.msg('throwing error')
raise RuntimeError('from {}'.format(self.i))
class ErrorOnEnter(Tracker):
"Cause an exception."
def __enter__(self):
self.msg('throwing error on enter')
raise RuntimeError('from {}'.format(self.i))
def __exit__(self, *exc_info):
self.msg('exiting')
"""
Explanation: The context managers given to ExitStack are treated as though they are in a series of nested with statements. Errors that happen anywhere within the context propagate through the normal error handling of the context managers. These context manager classes illustrate the way errors propagate.
End of explanation
"""
print('No errors:')
variable_stack([
HandleError(1),
PassError(2),
],"test error")
"""
Explanation: The examples using these classes are based around variable_stack(), which uses the context managers passed to construct an ExitStack, building up the overall context one by one. The examples below pass different context managers to explore the error handling behavior. First, the normal case of no exceptions.
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/tfx/tutorials/tfx/template_local.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import sys
!{sys.executable} -m pip install --upgrade "tfx<2"
"""
Explanation: Create a TFX pipeline using templates with Local orchestrator
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/template_local">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/template_local.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/template_local.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/template_local.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table></div>
Introduction
This document provides instructions for creating a TensorFlow Extended (TFX) pipeline
using the templates provided with the TFX Python package.
Most of the instructions are Linux shell commands; corresponding
Jupyter Notebook code cells that invoke those commands using ! are also provided.
You will build a pipeline using the Taxi Trips dataset
released by the City of Chicago. We strongly encourage you to build
your own pipeline using your own dataset, with this pipeline as a baseline.
We will build a pipeline which runs in a local environment. If you are interested in using the Kubeflow orchestrator on Google Cloud, please see the TFX on Cloud AI Platform Pipelines tutorial.
Prerequisites
- Linux / MacOS
- Python >= 3.5.3
You can get all prerequisites easily by running this notebook on Google Colab.
Step 1. Set up your environment.
Throughout this document, we present each command twice: once as a copy-and-paste-ready shell command, and once as a Jupyter Notebook cell. If you are using Colab, skip the shell script blocks and execute the notebook cells.
You should prepare a development environment to build a pipeline.
Install the tfx Python package. We recommend using virtualenv in a local environment. You can use the following shell script snippet to set up your environment.
```sh
Create a virtualenv for tfx.
virtualenv -p python3 venv
source venv/bin/activate
Install python packages.
python -m pip install --upgrade "tfx<2"
```
If you are using colab:
End of explanation
"""
# Set `PATH` to include user python binary directory.
HOME=%env HOME
PATH=%env PATH
%env PATH={PATH}:{HOME}/.local/bin
"""
Explanation: NOTE: There might be some errors during package installation. For example,
ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible.
Please ignore these errors at this moment.
End of explanation
"""
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
"""
Explanation: Let's check the version of TFX.
bash
python -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
End of explanation
"""
PIPELINE_NAME="my_pipeline"
import os
# Create a project directory under Colab content directory.
PROJECT_DIR=os.path.join(os.sep,"content",PIPELINE_NAME)
"""
Explanation: And, it's done. We are ready to create a pipeline.
Step 2. Copy predefined template to your project directory.
In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below. This will also become the name of the project directory where your files will be put.
bash
export PIPELINE_NAME="my_pipeline"
export PROJECT_DIR=~/tfx/${PIPELINE_NAME}
End of explanation
"""
!tfx template copy \
--pipeline_name={PIPELINE_NAME} \
--destination_path={PROJECT_DIR} \
--model=taxi
"""
Explanation: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
sh
tfx template copy \
--pipeline_name="${PIPELINE_NAME}" \
--destination_path="${PROJECT_DIR}" \
--model=taxi
End of explanation
"""
%cd {PROJECT_DIR}
"""
Explanation: Change the working directory context in this notebook to the project directory.
bash
cd ${PROJECT_DIR}
End of explanation
"""
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
"""
Explanation: Step 3. Browse your copied source files.
The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the same Chicago Taxi dataset and ML model as the Airflow Tutorial.
In Google Colab, you can browse files by clicking a folder icon on the left. Files should be copied under the project directory, whose name is my_pipeline in this case. You can click directory names to see the content of the directory, and double-click file names to open them.
Here is a brief introduction to each of the Python files.
- pipeline - This directory contains the definition of the pipeline
- configs.py — defines common constants for pipeline runners
- pipeline.py — defines TFX components and a pipeline
- models - This directory contains ML model definitions.
- features.py, features_test.py — defines features for the model
  - preprocessing.py, preprocessing_test.py — defines preprocessing
jobs using tf.Transform
- estimator - This directory contains an Estimator based model.
- constants.py — defines constants of the model
- model.py, model_test.py — defines DNN model using TF estimator
- keras - This directory contains a Keras based model.
- constants.py — defines constants of the model
- model.py, model_test.py — defines DNN model using Keras
- local_runner.py, kubeflow_runner.py — define runners for each orchestration engine
You might notice that there are some files with _test.py in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with -m flag. You can usually get a module name by deleting .py extension and replacing / with .. For example:
bash
python -m models.features_test
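That path-to-module conversion can be sketched as a small helper (a hypothetical utility for illustration, not part of the template):

```python
def path_to_module(path):
    """Convert a test file path such as 'models/features_test.py'
    into the module name expected by `python -m`."""
    if path.endswith('.py'):
        path = path[:-3]
    return path.replace('/', '.')

print(path_to_module('models/features_test.py'))  # models.features_test
```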
End of explanation
"""
!tfx pipeline create --engine=local --pipeline_path=local_runner.py
"""
Explanation: Step 4. Run your first TFX pipeline
You can create a pipeline using pipeline create command.
bash
tfx pipeline create --engine=local --pipeline_path=local_runner.py
End of explanation
"""
!tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
"""
Explanation: Then, you can run the created pipeline using run create command.
sh
tfx run create --engine=local --pipeline_name="${PIPELINE_NAME}"
End of explanation
"""
# Update the pipeline
!tfx pipeline update --engine=local --pipeline_path=local_runner.py
# You can run the pipeline the same way.
!tfx run create --engine local --pipeline_name {PIPELINE_NAME}
"""
Explanation: If successful, you'll see Component CsvExampleGen is finished. When you copy the template, only one component, CsvExampleGen, is included in the pipeline.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with Tensorflow Data Validation.
We will modify the copied pipeline definition in pipeline/pipeline.py. If you are working in your local environment, use your favorite editor to edit the file. If you are working on Google Colab,
Click folder icon on the left to open Files view.
Click my_pipeline to open the directory and click pipeline directory to open and double-click pipeline.py to open the file.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: find comments containing TODO(step 5):).
Your change will be saved automatically in a few seconds. Make sure that the * mark in front of pipeline.py disappears from the tab title. There is no save button or shortcut for the file editor in Colab. Python files in the file editor can be saved to the runtime environment even in playground mode.
You now need to update the existing pipeline with modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.
```sh
Update the pipeline
tfx pipeline update --engine=local --pipeline_path=local_runner.py
You can run the pipeline the same way.
tfx run create --engine local --pipeline_name "${PIPELINE_NAME}"
```
End of explanation
"""
!tfx pipeline update --engine=local --pipeline_path=local_runner.py
!tfx run create --engine local --pipeline_name {PIPELINE_NAME}
"""
Explanation: You should be able to see the output log from the added components. Our pipeline creates output artifacts in tfx_pipeline_output/my_pipeline directory.
Step 6. Add components for training.
In this step, you will add components for training and model validation including Transform, Trainer, Resolver, Evaluator, and Pusher.
Open pipeline/pipeline.py. Find and uncomment 5 lines which add Transform, Trainer, Resolver, Evaluator and Pusher to the pipeline. (Tip: find TODO(step 6):)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.
sh
tfx pipeline update --engine=local --pipeline_path=local_runner.py
tfx run create --engine local --pipeline_name "${PIPELINE_NAME}"
End of explanation
"""
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
"""
Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline using Local orchestrator!
NOTE: You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed.
It is a waste of time and resources; you can skip those executions with pipeline caching. Enable caching by specifying enable_cache=True for the Pipeline object in pipeline.py.
Step 7. (Optional) Try BigQueryExampleGen.
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
You need a Google Cloud Platform account to use BigQuery. Please prepare a GCP project.
Login to your project using colab auth library or gcloud utility.
```sh
You need gcloud tool to login in local shell environment.
gcloud auth login
```
End of explanation
"""
# Set your project name below.
# WARNING! ENTER your project name before running this cell.
%env GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_NAME_HERE
"""
Explanation: You should specify your GCP project name to access BigQuery resources using TFX. Set GOOGLE_CLOUD_PROJECT environment variable to your project name.
sh
export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_NAME_HERE
End of explanation
"""
!tfx pipeline update --engine=local --pipeline_path=local_runner.py
!tfx run create --engine local --pipeline_name {PIPELINE_NAME}
"""
Explanation: Open pipeline/pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery again, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Open pipeline/configs.py. Uncomment the definition of BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the project id and the region value in this file with the correct values for your GCP project.
Open local_runner.py. Uncomment two arguments, query and beam_pipeline_args, for create_pipeline() method.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline and create a run as we did in steps 5 and 6.
End of explanation
"""
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
"""
Explanation: Chapter 6
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
"""
import os
filename = 'World_population_estimates.html'
if not os.path.exists(filename):
!wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/World_population_estimates.html
from pandas import read_html
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
"""
Explanation: In the previous chapter we simulated a model of world population with
constant growth. In this chapter we see if we can make a better model
with growth proportional to the population.
But first, we'll improve the code from the previous chapter by
encapsulating it in a function and using System objects.
Here's the code that reads the data.
End of explanation
"""
un = table2.un / 1e9
census = table2.census / 1e9
t_0 = census.index[0]
t_end = census.index[-1]
elapsed_time = t_end - t_0
p_0 = census[t_0]
p_end = census[t_end]
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
"""
Explanation: System objects
Like a State object, a System object contains variables and their
values. The difference is:
- State objects contain state variables that get updated in the course of a simulation.
- System objects contain system parameters, which usually don't get updated over the course of a simulation.
For example, in the bike share model, state variables include the number of bikes at each location, which get updated whenever a customer moves a bike. System parameters include the number of locations, total number of bikes, and arrival rates at each location.
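As a rough sketch of the distinction (using Python's built-in SimpleNamespace rather than the modsim classes, with illustrative numbers: roughly the 1950 census value of 2.557 billion and a made-up growth increment):

```python
from types import SimpleNamespace

# System parameters: fixed for the whole simulation
system = SimpleNamespace(t_0=1950, t_end=1952, annual_growth=0.064, p_0=2.557)

# State variable: updated at every time step
pop = system.p_0
for t in range(system.t_0, system.t_end):
    pop = pop + system.annual_growth

print(pop)  # two years of constant growth: 2.557 + 2 * 0.064, about 2.685
```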
In the population model, the only state variable is the population.
System parameters include the annual growth rate, the initial time and
population, and the end time.
Suppose we have the following variables, as computed in the previous
chapter (assuming table2 is the DataFrame we read from the file):
End of explanation
"""
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
annual_growth=annual_growth)
"""
Explanation: Some of these are parameters we need to simulate the system; others are temporary values we can discard.
To distinguish between them, we'll put the parameters we need into a System object like this:
End of explanation
"""
system
"""
Explanation: t_0 and t_end are the first and last years; p_0 is the initial
population, and annual_growth is the estimated annual growth.
Here's what system looks like.
End of explanation
"""
def run_simulation1(system):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
results[t+1] = results[t] + system.annual_growth
return results
"""
Explanation: Next we'll wrap the code from the previous chapter in a function:
End of explanation
"""
results1 = run_simulation1(system)
"""
Explanation: run_simulation1 takes a System object and uses the parameters in it to determine t_0, t_end, and annual_growth.
Inside the loop, it stores the results in a TimeSeries which it returns at the end.
Here's how we call it; afterwards we'll plot the results along with the
census and un estimates.
End of explanation
"""
def plot_estimates():
census.plot(style=':', label='US Census')
un.plot(style='--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
"""
Explanation: Here's the function we used in the previous chapter to plot the estimates.
End of explanation
"""
results1.plot(label='model', color='gray')
plot_estimates()
decorate(title='Constant Growth Model')
"""
Explanation: And here are the results.
End of explanation
"""
def run_simulation2(system):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
births = system.birth_rate * results[t]
deaths = system.death_rate * results[t]
results[t+1] = results[t] + births - deaths
return results
"""
Explanation: It might not be obvious that using functions and System objects is a
big improvement, and for a simple model that we run only once, maybe
it's not. But as we work with more complex models, and when we run many simulations with different parameters, we'll see that the organization of the code makes a big difference.
Now let's see if we can improve the model.
Proportional growth model
The biggest problem with the constant growth model is that it doesn't
make any sense. It is hard to imagine how people all over the world
could conspire to keep population growth constant from year to year.
On the other hand, if some fraction of the population dies each year,
and some fraction gives birth, we can compute the net change in the
population like this:
End of explanation
"""
system.death_rate = 7.7 / 1000
system.birth_rate = 25 / 1000
"""
Explanation: Now we can choose the values of birth_rate and death_rate that best fit the data.
For the death rate, I'll use 7.7 deaths per 1000 people, which was roughly the global death rate in 2020 (see https://www.indexmundi.com/world/death_rate.html).
I chose the birth rate by hand to fit the data.
End of explanation
"""
results2 = run_simulation2(system)
results2.plot(label='model', color='gray')
plot_estimates()
decorate(title='Proportional Growth Model')
"""
Explanation: Then I ran the simulation and plotted the results:
End of explanation
"""
def growth_func1(pop, t, system):
births = system.birth_rate * pop
deaths = system.death_rate * pop
return births - deaths
"""
Explanation: The proportional model fits
the data well from 1950 to 1965, but not so well after that. Overall,
the quality of fit is not as good as the constant growth model,
which is surprising, because it seems like the proportional model is
more realistic.
In the next chapter we'll try one more time to find a model that makes
sense and fits the data. But first, I want to make a few more
improvements to the code.
Factoring out the update function
run_simulation1 and run_simulation2 are nearly identical except for the body of the for loop, where we compute the population for the next year.
Rather than repeat identical code, we can separate the things that
change from the things that don't. First, I'll pull out the update code from run_simulation2 and make it a function:
End of explanation
"""
def run_simulation(system, growth_func):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
growth = growth_func(results[t], t, system)
results[t+1] = results[t] + growth
return results
"""
Explanation: This function takes as arguments the current population, current year,
and a System object; it returns the net population growth during the current year.
This update function does not use t, so we could leave it out. But we will see other functions that need it, and it is convenient if they all take the same parameters, used or not.
Now we can write a function that runs any model:
End of explanation
"""
results = run_simulation(system, growth_func1)
"""
Explanation: This function demonstrates a feature we have not seen before: it takes a
function as a parameter! When we call run_simulation, the second
parameter is a function, like growth_func1, that computes the
population for the next year.
Here's how we call it:
End of explanation
"""
system.alpha = system.birth_rate - system.death_rate
"""
Explanation: Passing a function as an argument is the same as passing any other
value. The argument, which is growth_func1 in this example, gets
assigned to the parameter, which is called growth_func. Inside
run_simulation, we can run growth_func just like any other function.
Each time through the loop, run_simulation calls growth_func1 to compute net growth, and uses it to compute the population during the next year.
Combining birth and death
We can simplify the code slightly by combining births and deaths to compute the net growth rate.
Instead of two parameters, birth_rate and death_rate, we can write the update function in terms of a single parameter that represents the difference:
End of explanation
"""
def growth_func2(pop, t, system):
return system.alpha * pop
"""
Explanation: The name of this parameter, alpha, is the conventional name for a
proportional growth rate.
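With the rates chosen above, the net growth rate works out to alpha = 0.025 - 0.0077 = 0.0173 per year, and the proportional model has the closed form p(t) = p_0 * (1 + alpha)**t, which we can check directly (taking the 1950 population as roughly 2.557 billion from the census series):

```python
birth_rate = 25 / 1000
death_rate = 7.7 / 1000
alpha = birth_rate - death_rate          # net growth rate per year, 0.0173

# Closed form for proportional growth: p(t) = p_0 * (1 + alpha) ** t
p_0 = 2.557                              # billions, census estimate for 1950
p_2016 = p_0 * (1 + alpha) ** (2016 - 1950)
print(alpha, p_2016)                     # about 0.0173 and roughly 7.9 billion
```

The closed form overshoots the actual 2016 population, consistent with the poor fit of the proportional model noted above.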
Here's the modified version of growth_func1:
End of explanation
"""
results = run_simulation(system, growth_func2)
"""
Explanation: And here's how we run it:
End of explanation
"""
# Solution
def growth_func3(pop, t, system):
"""Compute the population next year.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
if t < 1980:
return system.alpha1 * pop
else:
return system.alpha2 * pop
# Solution
system.alpha1 = 19 / 1000
system.alpha2 = 15 / 1000
results3 = run_simulation(system, growth_func3)
results3.plot(label='model', color='gray')
plot_estimates()
decorate(title='Proportional growth, parameter changes over time')
# Solution
# Using two parameters, we can make the model fit the data better.
# But it still seems like the shape of the function is not right.
"""
Explanation: The results are the same as the previous versions, but now the code is organized in a way that makes it easy to explore other models.
Summary
In this chapter, we wrapped the code from the previous chapter in functions and used a System object to store the parameters of the system.
We explored a new model of population growth, where the number of births and deaths is proportional to the current population. This model seems more realistic, but it turns out not to fit the data particularly well.
In the next chapter, we'll try one more model, which is based on the assumption that the population can't keep growing forever.
But first, you might want to work on some exercises.
Exercises
Exercise: Maybe the reason the proportional model doesn't work very well is that the growth rate, alpha, is changing over time. So let's try a model with different growth rates before and after 1980 (as an arbitrary choice).
Write an update function that takes pop, t, and system as parameters. The system object, system, should contain two parameters: the growth rate before 1980, alpha1, and the growth rate after 1980, alpha2. It should use t to determine which growth rate to use.
Test your function by calling it directly, then pass it to run_simulation. Plot the results. Adjust the parameters alpha1 and alpha2 to fit the data as well as you can.
End of explanation
"""
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished by using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator; otherwise the rest of the implementation is the same.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Test Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
"""
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
"""
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
"""
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
"""
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
"""
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
            yield self.scaler(x), y  # labels are class ids, so only the images are scaled
"""
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
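A quick sanity check (not part of the pipeline) of what the scale function above does to raw 8-bit pixel values:

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    # same logic as the Dataset scaler: map [x.min(), 255] to feature_range
    x = (x - x.min()) / (255 - x.min())
    lo, hi = feature_range
    return x * (hi - lo) + lo

pixels = np.array([0., 128., 255.])
print(scale(pixels))  # endpoints map exactly to -1 and 1
```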
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
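The tf.maximum(alpha * x, x) pattern used in the code below is a leaky ReLU; here is a quick NumPy sketch of its behavior:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # equivalent to tf.maximum(alpha * x, x) when 0 < alpha < 1:
    # positive inputs pass through, negative inputs are scaled by alpha
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 1.0])
print(leaky_relu(x))  # negatives are scaled by 0.2
```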
You keep stacking layers like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
        # 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
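The spatial sizes annotated in the comments follow from the stride: with 'same' padding, a stride-2 convolution produces ceil(in / 2) outputs, so 32 -> 16 -> 8 -> 4. A quick check of that arithmetic:

```python
import math

def conv_out_size(size, stride):
    # output spatial size of a strided convolution with 'same' padding
    return math.ceil(size / stride)

size = 32
sizes = []
for _ in range(3):
    size = conv_out_size(size, 2)
    sizes.append(size)
print(sizes)  # [16, 8, 4]
```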
End of explanation
"""
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
Explanation: Model Loss
Calculating the loss like before, nothing new here.
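For reference, the quantity tf.nn.sigmoid_cross_entropy_with_logits computes can be sketched in NumPy using the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # stable form of -labels*log(sigmoid(x)) - (1 - labels)*log(1 - sigmoid(x))
    return (np.maximum(logits, 0) - logits * labels
            + np.log1p(np.exp(-np.abs(logits))))

logits = np.array([2.0, -1.0])
labels = np.array([1.0, 0.0])
print(sigmoid_cross_entropy_with_logits(logits, labels))
```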
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
"""
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
"""
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
"""
Explanation: Here is a function for displaying generated images.
End of explanation
"""
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
"""
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
"""
|
AllenDowney/ProbablyOverthinkingIt | ess.ipynb | mit | from __future__ import print_function, division
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
%matplotlib inline
"""
Explanation: Internet use and religion in Europe
This notebook presents a quick-and-dirty analysis of the association between Internet use and religion in Europe, using data from the European Social Survey (http://www.europeansocialsurvey.org).
Copyright 2015 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
"""
def select_cols(df):
cols = ['cntry', 'tvtot', 'tvpol', 'rdtot', 'rdpol', 'nwsptot', 'nwsppol', 'netuse',
'rlgblg', 'rlgdgr', 'eduyrs', 'hinctnta', 'yrbrn', 'eisced', 'pspwght', 'pweight']
df = df[cols]
return df
"""
Explanation: The following function selects the columns I need.
End of explanation
"""
df1 = pd.read_stata('ESS1e06_4.dta', convert_categoricals=False)
df1['hinctnta'] = df1.hinctnt
df1 = select_cols(df1)
df1.head()
"""
Explanation: Read data from Cycle 1.
TODO: investigate the difference between hinctnt and hinctnta; is there a recode that reconciles them?
End of explanation
"""
df2 = pd.read_stata('ESS2e03_4.dta', convert_categoricals=False)
df2['hinctnta'] = df2.hinctnt
df2 = select_cols(df2)
df2.head()
"""
Explanation: Read data from Cycle 2.
End of explanation
"""
df3 = pd.read_stata('ESS3e03_5.dta', convert_categoricals=False)
df3['hinctnta'] = df3.hinctnt
df3 = select_cols(df3)
df3.head()
"""
Explanation: Read data from Cycle 3.
End of explanation
"""
df4 = pd.read_stata('ESS4e04_3.dta', convert_categoricals=False)
df4 = select_cols(df4)
df4.head()
"""
Explanation: Read data from Cycle 4.
End of explanation
"""
df5 = pd.read_stata('ESS5e03_2.dta', convert_categoricals=False)
df5 = select_cols(df5)
df5.head()
"""
Explanation: Read data from Cycle 5.
End of explanation
"""
df = pd.concat([df1, df2, df3, df4, df5], ignore_index=True)
df.head()
"""
Explanation: Concatenate the cycles.
TODO: Have to resample each cycle before concatenating.
End of explanation
"""
df.tvtot.replace([77, 88, 99], np.nan, inplace=True)
df.tvtot.value_counts().sort_index()
"""
Explanation: TV watching time on average weekday
End of explanation
"""
df.rdtot.replace([77, 88, 99], np.nan, inplace=True)
df.rdtot.value_counts().sort_index()
"""
Explanation: Radio listening, total time on average weekday.
End of explanation
"""
df.nwsptot.replace([77, 88, 99], np.nan, inplace=True)
df.nwsptot.value_counts().sort_index()
"""
Explanation: Newspaper reading, total time on average weekday.
End of explanation
"""
df.tvpol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.tvpol.value_counts().sort_index()
"""
Explanation: TV watching: news, politics, current affairs
End of explanation
"""
df.rdpol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.rdpol.value_counts().sort_index()
"""
Explanation: Radio listening: news, politics, current affairs
End of explanation
"""
df.nwsppol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.nwsppol.value_counts().sort_index()
"""
Explanation: Newspaper reading: politics, current affairs
End of explanation
"""
df.netuse.replace([77, 88, 99], np.nan, inplace=True)
df.netuse.value_counts().sort_index()
"""
Explanation: Personal use of Internet, email, www
End of explanation
"""
df.rlgblg.replace([7, 8, 9], np.nan, inplace=True)
df.rlgblg.value_counts().sort_index()
"""
Explanation: Belong to a particular religion or denomination
End of explanation
"""
df.rlgdgr.replace([77, 88, 99], np.nan, inplace=True)
df.rlgdgr.value_counts().sort_index()
"""
Explanation: How religious
End of explanation
"""
df.hinctnta.replace([77, 88, 99], np.nan, inplace=True)
df.hinctnta.value_counts().sort_index()
"""
Explanation: Total household net income, all sources
TODO: It looks like one cycle measured HINCTNT on a 12 point scale. Might need to reconcile
End of explanation
"""
df['hinctnta5'] = df.hinctnta - 5
df.hinctnta5.describe()
"""
Explanation: Shift income to mean near 0.
End of explanation
"""
df.yrbrn.replace([7777, 8888, 9999], np.nan, inplace=True)
df.yrbrn.describe()
"""
Explanation: Year born
End of explanation
"""
df['yrbrn60'] = df.yrbrn - 1960
df.yrbrn60.describe()
"""
Explanation: Shifted to mean near 0
End of explanation
"""
df.eduyrs.replace([77, 88, 99], np.nan, inplace=True)
df.loc[df.eduyrs > 25, 'eduyrs'] = 25
df.eduyrs.value_counts().sort_index()
"""
Explanation: Number of years of education
End of explanation
"""
df.eduyrs.describe()
"""
Explanation: There are a bunch of really high values for eduyrs, need to investigate.
End of explanation
"""
df['eduyrs12'] = df.eduyrs - 12
df.eduyrs12.describe()
"""
Explanation: Shift to mean near 0
End of explanation
"""
df.cntry.value_counts().sort_index()
"""
Explanation: Country codes
End of explanation
"""
df['hasrelig'] = (df.rlgblg==1).astype(int)
"""
Explanation: Make a binary dependent variable
End of explanation
"""
def run_model(df, formula):
model = smf.logit(formula, data=df)
results = model.fit(disp=False)
return results
"""
Explanation: Run the model
End of explanation
"""
formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + tvpol + rdtot + rdpol + nwsptot + nwsppol + netuse')
res = run_model(df, formula)
res.summary()
"""
Explanation: Here's the model with all control variables and all media variables:
End of explanation
"""
formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + rdtot + nwsptot + netuse')
res = run_model(df, formula)
res.summary()
"""
Explanation: Most of the media variables are not statistically significant. If we drop the political media variables, we get a cleaner model:
End of explanation
"""
def fill_var(df, var):
fill = df[var].dropna().sample(len(df), replace=True)
fill.index = df.index
df[var].fillna(fill, inplace=True)
fill_var(df, var='hinctnta5')
formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + rdtot + nwsptot + netuse')
res = run_model(df, formula)
res.summary()
"""
Explanation: And if we fill missing values for income, cleaner still.
End of explanation
"""
def extract_res(res, var='netuse'):
param = res.params[var]
pvalue = res.pvalues[var]
stars = '**' if pvalue < 0.01 else '*' if pvalue < 0.05 else ''
return res.nobs, param, stars
extract_res(res)
"""
Explanation: Now all variables have small p-values. All parameters have the expected signs:
* Younger people are less affiliated.
* More educated people are less affiliated.
* Higher income people are less affiliated (although this could go either way).
* Consumers of all media are less affiliated.
* The strength of the Internet effect is stronger than for other media.
These results are consistent in each cycle of the data, and across a few changes I've made in the cleaning process.
However, these results should be considered preliminary:
* I have not dealt with the stratification weights.
* I have not dealt with missing data (particularly important for education).
Nevertheless, I'll run a breakdown by country.
Here's a function to extract the parameter associated with netuse:
End of explanation
"""
formula = ('rlgdgr ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + rdtot + nwsptot + netuse')
model = smf.ols(formula, data=df)
res = model.fit(disp=False)
res.summary()
"""
Explanation: Running a similar model with degree of religiosity.
End of explanation
"""
grouped = df.groupby('cntry')
for name, group in grouped:
print(name, len(group))
"""
Explanation: Group by country:
End of explanation
"""
gb = grouped.get_group('DK')
run_model(gb, formula).summary()
"""
Explanation: Run a sample country
End of explanation
"""
for name, group in grouped:
try:
fill_var(group, var='hinctnta5')
res = run_model(group, formula)
nobs, param, stars = extract_res(res)
arrow = '<--' if stars and param > 0 else ''
print(name, len(group), nobs, '%0.3g'%param, stars, arrow, sep='\t')
except Exception:
print(name, len(group), ' ', 'NA', sep='\t')
"""
Explanation: Run all countries
End of explanation
"""
group = grouped.get_group('FR')
len(group)
for col in group.columns:
print(col, sum(group[col].isnull()))
fill_var(group, 'hinctnta5')
formula
res = run_model(group, formula)
res.summary()
"""
Explanation: In more than half of the countries, the association between Internet use and religious affiliation is statistically significant. In all except two, the association is negative.
In many countries we've lost a substantial number of observations due to missing data. Really need to fill that in!
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/automl_for_text_classification.ipynb | apache-2.0 | import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
"""
Explanation: AutoML for Text Classification
Learning Objectives
Learn how to create a text classification dataset for AutoML using BigQuery
Learn how to train AutoML to build a text classification model
Learn how to evaluate a model trained with AutoML
Learn how to predict on new test data with AutoML
Introduction
In this notebook, we will use AutoML for Text Classification to train a text model to recognize the source of article titles: New York Times, TechCrunch or GitHub.
In a first step, we will query a public dataset on BigQuery taken from Hacker News (an aggregator that displays tech-related headlines from various sources) to create our training set.
In a second step, we will use the AutoML UI to upload our dataset, train a text model on it, and evaluate the model we have just trained.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
!gsutil mb gs://$BUCKET
"""
Explanation: Replace the variable values in the cell below:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
# TODO: Your code goes here.
FROM
# TODO: Your code goes here.
WHERE
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
LIMIT 10
"""
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Lab Task 1a:
Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
# TODO: Your code goes here.
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
# TODO: Your code goes here.
GROUP BY
# TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
Lab task 1b:
Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex command on the url of the article. To count the number of articles you'll use a GROUP BY in sql, and we'll also restrict our attention to only those articles whose title has more than 10 characters.
End of explanation
"""
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
"""
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
"""
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
"""
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
"""
title_dataset.source.value_counts()
"""
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
"""
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 500 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
"""
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
"""
Explanation: Now let's sample 500 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c:
Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories?
End of explanation
"""
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
%%bash
gsutil cp data/titles_sample.csv gs://$BUCKET
"""
Explanation: Let's write the sample dataset to disk.
End of explanation
"""
|
dereneaton/ipyrad | newdocs/API-analysis/cookbook-window_extracter.ipynb | gpl-3.0 | # conda install ipyrad -c bioconda
# conda install raxml -c bioconda
# conda install toytree -c eaton-lab
import ipyrad.analysis as ipa
import toytree
"""
Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> window_extracter
View as notebook
Extract all sequence data within a genomic window, concatenate, and write to a phylip file. Useful for inferring the phylogeny near a specific gene/region of interest. Follow up with downstream phylogenetic analysis of the region.
Key features:
Automatically concatenates ref-mapped RAD loci in sliding windows.
Filter to remove sites by missing data.
Optionally remove samples from alignments.
Optionally use consensus seqs to represent clades of multiple samples.
<img src="https://eaton-lab.org/slides/data-svg/window-extracter-min4.svg">
Required software
End of explanation
"""
# the path to your HDF5 formatted seqs file
seqfile = "/home/deren/Downloads/ref_pop2.seqs.hdf5"
"""
Explanation: Required input data files
Your input data should be a .seqs.hdf5 database file produced by ipyrad. This file contains the full sequence alignment for your samples as well as associated meta-data of the genomic positions of RAD loci relative to a reference genome.
End of explanation
"""
# first load the data file with no other arguments to see scaffold table
ext = ipa.window_extracter(seqfile)
# the scaffold table shows scaffold names and lens in length-order
ext.scaffold_table.head(15)
"""
Explanation: The scaffold table
The window_extracter() tool takes the .seqs.hdf5 database file from ipyrad as its input file. You select scaffolds by their index (integer) which can be found in the .scaffold_table. We can see from the table below that this genome has 12 large scaffolds (chromosome-scale linkage blocks) and many other smaller unplaced scaffolds. If you are working with a high quality reference genome then it will likely look similar to this, whereas many other reference genomes will be composed of many more scaffolds that are mostly smaller in size. Here I will focus just on the large chromosomes.
End of explanation
"""
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=0,
)
# show stats of the window
ext.stats
"""
Explanation: Selecting scaffolds
The scaffold_idxs designates the scaffold to extract sequence data from. This is the index (row) of the named scaffold from the scaffold table (e.g., above). The window_extracter tool will select all RAD data within this window and exclude any sites that have no data (e.g., the space between RAD markers, or the space between paired reads) to create a clean concise alignment.
The .stats attribute shows the information content of the selected window before and after filtering. The stats are returned as a dataframe, showing the size, information content, missingness, and number of samples in the alignment. You can see that the 55Mbp scaffold is reduced to a 450Kbp alignment that includes 13K snps and has 20% missing data across 30 samples (NB: this dataset already had some minimum sample filtering applied during assembly). The default filtering applied to sites only reduced the number of sites by a few thousand.
End of explanation
"""
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=0,
start=0,
end=10000,
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=0,
start=500000,
end=800000,
)
# show stats of the window
ext.stats
"""
Explanation: Subsetting scaffold windows
You can use the start and end arguments to select subsets of scaffolds as smaller window sizes to be extracted. As with the example above the selected window will be filtered to reduce missing data. If there is no data in the selected window the stats will show no sites, and a warning will be printed. An example with no data and with some data are both shown below.
End of explanation
"""
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=0,
start=500000,
end=800000,
mincov=0.8,
rmincov=0.5,
)
# show stats of the window
ext.stats
"""
Explanation: Filtering missing data with mincov
You can filter sites from the alignment by using mincov, which applies a filter to all sites in the alignment. For example, mincov=0.5 will require that 50% of samples contain a site that is not N or - for the site to be included in the alignment. This value can be a proportion like 0.5, or it can be a number, like 10.
<img src="https://eaton-lab.org/slides/data-svg/window-extracter-min8.svg">
End of explanation
"""
# assign samples to groups/taxa
imap = {
"reference": ["reference"],
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi": ["MXED8", "MXGT4", "TXGR3", "TXMD3"],
"sagr": ["CUVN10", "CUCA4", "CUSV6"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
}
# set a simple minmap requiring 1 sample from each group
minmap = {name: 0.75 for name in imap}
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=0,
start=500000,
end=800000,
mincov=0.8,
imap=imap,
minmap=minmap,
)
# show stats of the window
ext.stats
"""
Explanation: Filtering missing data with imap and minmap
An imap dictionary can be used to group samples into populations/species, as in the example below. It takes key,value pairs where the key is the name of the group, and the value is a list of sample names. One way to use an imap is to apply a minmap filter. This acts just like the global mincov filter, but applies to each group separately. Only if a site meets the minimum coverage argument for each group will it be retained in the data set. In this case the imap sampling selected 28/30 samples and required 75% of data in each group which reduced the number of SNPs from 92 to 86.
End of explanation
"""
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=2,
mincov=0.8,
imap={
"include": [
"TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140",
"FLSF47", "FLMO62", "FLSA185", "FLCK216",
"FLCK18", "FLSF54", "FLWO6", "FLAB109",
]
},
)
# show stats of the window
ext.stats
"""
Explanation: Subsample taxa with imap
You can also use an imap dictionary to select which samples to include/exclude from an analysis. This is an easy way to remove rogue taxa, hybrids, or technical replicates from phylogenetic analyses. Here I select a subset of taxa to include in the analyses and keep only sites that have 80% coverage from scaffold 2 (Qrob_Chr03).
End of explanation
"""
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=[0, 1, 2, 3, 4, 5],
mincov=0.5,
)
# show stats of the window
ext.stats
"""
Explanation: Concatenate multiple scaffolds together
You can also concatenate multiple scaffolds together using window_extracter. This can be useful for creating genome-wide alignments, or smaller subsets of the genome. For example, you may want to combine multiple scaffolds from the same chromosome together, or, if you are working with denovo data, you could even combine a random sample of anonymous loci together as a sort of pseudo bootstrapping procedure. To select multiple scaffolds you simply provide a list or range of scaffold idxs.
End of explanation
"""
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=seqfile,
scaffold_idxs=0,
start=200000,
end=5000000,
mincov=0.8,
imap=imap,
minmap=minmap,
consensus_reduce=True,
)
# show stats of the window
ext.stats
"""
Explanation: Consensus reduction with imap
You can further reduce missing data by condensing data from multiple samples into a single "consensus" representative using the consensus_reduce=True option. This uses the imap dictionary to group samples into groups and sample the most frequent allele. This can be particularly useful for analyses in which you want dense species-level coverage with little missing data, but it is not particularly important which individual represents the sampled allele for a species at a given locus. For example, if you want to construct many gene trees with one representative per species to use as input to a two-step species tree inference program like ASTRAL.
<img src="https://eaton-lab.org/slides/data-svg/consensus.svg">
End of explanation
"""
ext.run(force=True)
"""
Explanation: Write selected window to a file
Once you've chosen the final set of arguments to select the window of interest you can write the alignment to .phy format by calling the .run() command. If you want to write to nexus format you can simply add the argument nexus=True.
End of explanation
"""
# path to the phylip file output
ext.outfile
"""
Explanation: Accessing the output files
The output files created by the .run() command will be written to the working directory (defaults to "./analysis-window_extracter"). You can either find the full path to that file or access it easily from the extracter object itself as an attribute like below.
End of explanation
"""
# run raxml on the phylip file
rax = ipa.raxml(data=ext.outfile, name="test", N=50, T=4)
# show the raxml command
print(rax.command)
# run job and wait to finish
rax.run(force=True)
# plot the tree for this genome window
print(rax.trees.bipartitions)
tre = toytree.tree(rax.trees.bipartitions)
rtre = tre.root("reference").collapse_nodes(min_support=50)
rtre.draw(node_labels="support");
"""
Explanation: <h3><span style="color:red">Advanced:</span> Infer tree from phy output</h3>
You can pass in the file path that was created above to a number of inference programs. The ipyrad tools for raxml and mrbayes both accept phylip format (ipyrad converts it to nexus under the hood for mrbayes).
End of explanation
"""
|
elfi-dev/notebooks | quickstart.ipynb | bsd-3-clause | import elfi
"""
Explanation: Quickstart
First ensure you have installed Python 3.5 (or greater) and ELFI. After installation you can start using ELFI:
End of explanation
"""
mu = elfi.Prior('uniform', -2, 4)
sigma = elfi.Prior('uniform', 1, 4)
"""
Explanation: ELFI includes an easy to use generative modeling syntax, where the generative model is specified as a directed acyclic graph (DAG). Let’s create two prior nodes:
End of explanation
"""
import scipy.stats as ss
import numpy as np
def simulator(mu, sigma, batch_size=1, random_state=None):
mu, sigma = np.atleast_1d(mu, sigma)
return ss.norm.rvs(mu[:, None], sigma[:, None], size=(batch_size, 30), random_state=random_state)
def mean(y):
return np.mean(y, axis=1)
def var(y):
return np.var(y, axis=1)
"""
Explanation: The above would create two prior nodes, a uniform distribution from -2 to 2 for the mean mu and another uniform distribution from 1 to 5 for the standard deviation sigma. All distributions from scipy.stats are available.
For likelihood-free models we typically need to define a simulator and summary statistics for the data. As an example, lets define the simulator as 30 draws from a Gaussian distribution with a given mean and standard deviation. Let's use mean and variance as our summaries:
End of explanation
"""
# Set the generating parameters that we will try to infer
mean0 = 1
std0 = 3
# Generate some data (using a fixed seed here)
np.random.seed(20170525)
y0 = simulator(mean0, std0)
print(y0)
"""
Explanation: Let’s now assume we have some observed data y0 (here we just create some with the simulator):
End of explanation
"""
# Add the simulator node and observed data to the model
sim = elfi.Simulator(simulator, mu, sigma, observed=y0)
# Add summary statistics to the model
S1 = elfi.Summary(mean, sim)
S2 = elfi.Summary(var, sim)
# Specify distance as euclidean between summary vectors (S1, S2) from simulated and
# observed data
d = elfi.Distance('euclidean', S1, S2)
"""
Explanation: Now we have all the components needed. Let’s complete our model by adding the simulator, the observed data, summaries and a distance to our model:
End of explanation
"""
# Plot the complete model (requires graphviz)
elfi.draw(d)
"""
Explanation: If you have graphviz installed on your system, you can also visualize the model:
End of explanation
"""
rej = elfi.Rejection(d, batch_size=10000, seed=30052017)
res = rej.sample(1000, threshold=.5)
print(res)
"""
Explanation: We can try to infer the true generating parameters mean0 and std0 above with any of ELFI’s inference methods. Let’s use ABC Rejection sampling and sample 1000 samples from the approximate posterior using threshold value 0.5:
End of explanation
"""
import matplotlib.pyplot as plt
res.plot_marginals()
plt.show()
"""
Explanation: Let's also plot the marginal distributions of the parameters:
End of explanation
"""
|
berlemontkevin/Jupyter_Notebook | Inference_Big_data/Hopfield/Hopfield.ipynb | apache-2.0 |
%%html
<script src="https://cdn.rawgit.com/parente/4c3e6936d0d7a46fd071/raw/65b816fb9bdd3c28b4ddf3af602bfd6015486383/code_toggle.js"></script>
"""
Explanation: TD 3 : Hopfield model : Berlemont Kevin
Hopfield network : An introduction
The Hopfield model consists of a network of $N$ neurons, labeled by a lower index $i$, with $1\leq i\leq N$. Neurons in the Hopfield model have only two states. A neuron $i$ is ‘ON’ if its state variable takes the value $S_{i}=+1$ and ‘OFF’ (silent) if $S_{i}=-1$. The dynamics evolves in discrete time with time steps $\Delta t$. There is no refractoriness and the duration of a time step is typically not specified. If we take $\Delta t=1\,\mathrm{ms}$, we can interpret $S_{i}(t)=+1$ as an action potential of neuron $i$ at time $t$. If we take $\Delta t=500\,\mathrm{ms}$, $S_{i}(t)=+1$ should rather be interpreted as an episode of high firing rate.
Neurons interact with each other with weights $w_{ij}$. The input potential of neuron $i$, influenced by the activity of other neurons is
$$
h_{i}(t)=\sum_{j}w_{ij}\,S_{j}(t)\,.
$$
The input potential at time $t$ influences the probabilistic update of the state variable $S_{i}$ in the next time step:
$$
S_{i}(t+\Delta t)=\operatorname{sgn}[h_{i}(t)]
$$
If we now want to include a temperature dependence in the dynamics, we can use Glauber dynamics, defined as follows.
If the temperature is not very low, there is a complication in the magnetic problem. Thermal fluctuations tend to flip the spins, from down to up or from up to down, and thus upset the tendency of each spin to align with its field. At high temperature the thermal fluctuations dominate and a spin is nearly as often opposite to its field as aligned with it.
The conventional way to describe mathematically the effect of thermal fluctuations in an Ising model is with Glauber dynamics. We replace the previous deterministic dynamics by a stochastic rule.
$$
S_i \equiv \left\lbrace
\begin{array}{cc}
+1 & \mbox{with probability $g(h_i)$} \
-1 & \mbox{with probability $1-g(h_i)$}
\end{array}\right.
$$
This is taken to be applied whenever spin $S_i$ is updated. The function $g(h)$ depends on temperature and will be :
$$ g(h) = \frac{1}{1+ \exp (- 2 \beta h)}$$
Patterns in the Hopfield model
The Hopfield model consists of a network of $N$ binary neurons. A neuron $i$ is characterized by its state $S_{i}=\pm 1$. The state variable is updated according to the dynamics defined previously.
The task of the network is to store and recall $M$ different patterns. Patterns are labeled by the index $\mu$ with $1\leq\mu\leq M$. Each pattern $\mu$ is defined as a desired configuration $\left\{p_{i}^{\mu}=\pm 1;1\leq i\leq N\right\}$. The network of $N$ neurons is said to correctly represent pattern $\mu$, if the state of all neurons $1\leq i\leq N$ is $S_{i}(t)=S_{i}(t+\Delta t)=p_{i}^{\mu}$. In other words, patterns must be fixed points of the dynamics.
During the set-up phase of the Hopfield network, a random number generator generates, for each pattern $\mu$, a string of $N$ independent binary numbers $\{p_{i}^{\mu}=\pm 1;1\leq i\leq N\}$ with expectation value $\langle p_{i}^{\mu}\rangle=0$. Strings of different patterns are independent. The weights are chosen as
$$
w_{ij}=c\sum_{\mu=1}^{M}p_{i}^{\mu}\,p_{j}^{\mu}\,
$$
with a positive constant $c>0$. The network has full connectivity. Note that for a single pattern and $c=1$, the set-up is identical to the connections of the anti-ferromagnet. For reasons of normalization, the standard choice of the constant $c$ is $c=1/N$.
End of explanation
"""
#%matplotlib inline
#%matplotlib qt
import matplotlib.pyplot as plt
from neurodynex.hopfield_network import network, pattern_tools, plot_tools
pattern_size = 5
# create an instance of the class HopfieldNetwork
hopfield_net = network.HopfieldNetwork(nr_neurons= pattern_size**2)
# instantiate a pattern factory
factory = pattern_tools.PatternFactory(pattern_size, pattern_size)
# create a checkerboard pattern and add it to the pattern list
checkerboard = factory.create_checkerboard()
pattern_list = [checkerboard]
# add random patterns to the list
pattern_list.extend(factory.create_random_pattern_list(nr_patterns=3, on_probability=0.5))
plot_tools.plot_pattern_list(pattern_list)
# how similar are the random patterns and the checkerboard? Check the overlaps
overlap_matrix = pattern_tools.compute_overlap_matrix(pattern_list)
#plot_tools.plot_overlap_matrix(overlap_matrix)
# let the hopfield network "learn" the patterns. Note: they are not stored
# explicitly but only network weights are updated !
hopfield_net.store_patterns(pattern_list)
# create a noisy version of a pattern and use that to initialize the network
noisy_init_state = pattern_tools.flip_n(checkerboard, nr_of_flips=3)
hopfield_net.set_state_from_pattern(noisy_init_state)
# from this initial state, let the network dynamics evolve.
states = hopfield_net.run_with_monitoring(nr_steps=4)
# each network state is a vector. reshape it to the same shape used to create the patterns.
states_as_patterns = factory.reshape_patterns(states)
# plot the states of the network
plt.rcParams["figure.figsize"] = [12,9]
plot_tools.plot_state_sequence_and_overlap(states_as_patterns, pattern_list, reference_idx=0, suptitle="Network dynamics")
"""
Explanation: We will now study how a network stores and retrieves patterns. Using a small network allows us to have a close look at the network weights and dynamics.
End of explanation
"""
pattern_size = 4
# create an instance of the class HopfieldNetwork
hopfield_net = network.HopfieldNetwork(nr_neurons= pattern_size**2)
# instantiate a pattern factory
factory = pattern_tools.PatternFactory(pattern_size, pattern_size)
# create a checkerboard pattern and add it to the pattern list
checkerboard = factory.create_checkerboard()
pattern_list = [checkerboard]
pattern_list.extend(factory.create_random_pattern_list(nr_patterns=6, on_probability=0.5))
plot_tools.plot_pattern_list(pattern_list)
hopfield_net.store_patterns(pattern_list)
# create a noisy version of a pattern and use that to initialize the network
noisy_init_state = pattern_tools.flip_n(checkerboard, nr_of_flips=3)
hopfield_net.set_state_from_pattern(noisy_init_state)
# from this initial state, let the network dynamics evolve.
states = hopfield_net.run_with_monitoring(nr_steps=6)
# each network state is a vector. reshape it to the same shape used to create the patterns.
states_as_patterns = factory.reshape_patterns(states)
# plot the states of the network
plt.rcParams["figure.figsize"] = [12,9]
plot_tools.plot_state_sequence_and_overlap(states_as_patterns, pattern_list, reference_idx=0, suptitle="Network dynamics with 7 patterns stored")
pattern_size = 4
# create an instance of the class HopfieldNetwork
hopfield_net = network.HopfieldNetwork(nr_neurons= pattern_size**2)
# instantiate a pattern factory
factory = pattern_tools.PatternFactory(pattern_size, pattern_size)
# create a checkerboard pattern and add it to the pattern list
checkerboard = factory.create_checkerboard()
pattern_list = [checkerboard]
pattern_list.extend(factory.create_random_pattern_list(nr_patterns=3, on_probability=0.5))
plot_tools.plot_pattern_list(pattern_list)
hopfield_net.store_patterns(pattern_list)
# create a noisy version of a pattern and use that to initialize the network
noisy_init_state = pattern_tools.flip_n(checkerboard, nr_of_flips=10)
hopfield_net.set_state_from_pattern(noisy_init_state)
# from this initial state, let the network dynamics evolve.
states = hopfield_net.run_with_monitoring(nr_steps=6)
# each network state is a vector. reshape it to the same shape used to create the patterns.
states_as_patterns = factory.reshape_patterns(states)
# plot the states of the network
plt.rcParams["figure.figsize"] = [12,9]
plot_tools.plot_state_sequence_and_overlap(states_as_patterns, pattern_list, reference_idx=0, suptitle="Network dynamics with more than 50% of noise")
"""
Explanation: This figure shows that, starting from a noisy pattern, the network is capable of finding the corresponding stored pattern. We can observe that the correlation $m$ between the correct pattern and the network output increases with time.
But a natural question is whether the network can retrieve an arbitrarily large number of patterns under strong noise.
The following simulations will give a first answer to this question.
End of explanation
"""
pattern_size = 4
# create an instance of the class HopfieldNetwork
hopfield_net1 = network.HopfieldNetwork(nr_neurons= pattern_size**2)
%matplotlib inline
factory = pattern_tools.PatternFactory(pattern_size, pattern_size)
# create a checkerboard pattern and add it to the pattern list
checkerboard = factory.create_checkerboard()
pattern_list = [checkerboard]
hopfield_net1.store_patterns(pattern_list)
#plot_tools.plot_pattern_list(pattern_list)
plot_tools.plot_nework_weights(hopfield_net1)
plt.figure()
plt.hist(hopfield_net1.weights.flatten())
plt.title('Histogram of the weights distribution checkerboard pattern')
plt.show()
hopfield_net2 = network.HopfieldNetwork(nr_neurons= pattern_size**2)
%matplotlib inline
L_pattern = factory.create_L_pattern()
pattern_list= [L_pattern]
hopfield_net2.store_patterns(pattern_list)
#plot_tools.plot_pattern_list(pattern_list)
plot_tools.plot_nework_weights(hopfield_net2)
plt.figure()
plt.hist(hopfield_net2.weights.flatten())
plt.title('Histogram of the weights distribution for L pattern')
plt.show()
hopfield_net3 = network.HopfieldNetwork(nr_neurons= pattern_size**2)
L_pattern = factory.create_L_pattern()
checkboard = factory.create_checkerboard()
pattern_list= [L_pattern,checkboard]
plot_tools.plot_pattern_list(pattern_list)
hopfield_net3.store_patterns(pattern_list)
plot_tools.plot_nework_weights(hopfield_net3)
plt.figure()
plt.hist(hopfield_net3.weights.flatten())
plt.title('Histogram of the weights distribution for both patterns')
plt.show()
"""
Explanation: These two pictures show the limits of our model. When more patterns are stored, the network is no longer able to correctly recover the ones it stored. Moreover, even in a situation where the network was able to recover the correct pattern, increasing the noise makes it fail to recover the pattern (as one would expect).
Weights distribution
Let's have a quick look at the weight distribution of the Hopfield network before looking at the phase transition. We are going to try to understand how the weights are instantiated in the Hopfield network. For that we will create a checkerboard pattern and an L-pattern and observe the behavior of the system. In each case we will plot the weight matrix and the weight distribution.
End of explanation
"""
import numpy as np
%matplotlib inline
pattern_size = 20
# create an instance of the class HopfieldNetwork
hopfield_net = network.HopfieldNetwork(nr_neurons= pattern_size**2)
# instantiate a pattern factory
factory = pattern_tools.PatternFactory(pattern_size, pattern_size)
# create a checkerboard pattern and add it to the pattern list
#checkerboard = factory.create_checkerboard()
#pattern_list = [checkerboard]
pattern_list=[]
# add random patterns to the list
pattern_list.extend(factory.create_random_pattern_list(nr_patterns=10, on_probability=0.5))
plot_tools.plot_pattern_list(pattern_list)
# let the hopfield network "learn" the patterns. Note: they are not stored
# explicitly but only network weights are updated !
hopfield_net.store_patterns(pattern_list)
# create a noisy version of a pattern and use that to initialize the network
noisy_init_state=pattern_tools.flip_n(pattern_list[0],nr_of_flips=60)
hopfield_net.set_state_from_pattern(noisy_init_state)
# from this initial state, let the network dynamics evolve.
#hopfield_net.run(nr_steps=50)
states = hopfield_net.run_with_monitoring(nr_steps=6)
energy_list = []
for p in range(len(states)) :
energy_list.append(- np.sum(np.dot(np.dot(hopfield_net.weights,states[p]),states[p])))
# each network state is a vector. reshape it to the same shape used to create the patterns.
states_as_patterns = factory.reshape_patterns(states)
# plot the states of the network
plt.rcParams["figure.figsize"] = [12,9]
plot_tools.plot_state_sequence_and_overlap(states_as_patterns, pattern_list, reference_idx=0, suptitle="Network dynamics")
plt.figure()
plt.plot(energy_list,'-o')
plt.title('Energy of the network')
plt.xlabel('Epoch of the dynamics')
plt.ylabel('Energy')
plt.show()
"""
Explanation: For each of these figures we show which patterns are stored in the network and the resulting weights. We can observe that the weight matrix has the same symmetries as the patterns that are stored. Moreover, the weight distribution seems to depend on how many patterns are stored.
When we try to store more patterns, the resulting weight matrix is simply the sum of the weight matrices of the individual patterns. This is clearly in accordance with the algorithm we defined in the introduction. But this influence on the weight distribution highlights the fact that the system can only store a finite number of patterns if it is to retrieve them correctly.
Phase transition
Let us first briefly explain why we have a phase transition in the dynamics of the Hopfield network. We will illustrate this fact with computations at the end of this discussion.
How many random patterns can be stored in a network of $ N$ neurons? Memory retrieval implies pattern completion, starting from a partial pattern. An absolutely minimal condition for pattern completion is that at least the dynamics should not move away from the pattern, if the initial cue is identical to the complete pattern. In other words, we require that a network with initial state $S_{i}(t_{0})=p_{i}^{\nu}$ for $1\leq i\leq N$ stays in pattern $ \nu$. Therefore pattern $ \nu$ must be a fixed point under the dynamics.
We study a Hopfield network at zero temperature ($\beta=\infty$). We start the calculation by inserting $S_{j}(t_{0})=p_{j}^{\nu}$. This yields:
$$ \displaystyle S_{i}(t_{0}+\Delta t) = \displaystyle\operatorname{sgn}\left[{1\over N}\sum_{j=1}^{N}\sum_{\mu=1}^{M}p_{i}^{\mu}\,p_{j}^{\mu}\,p_{j}^{\nu}\right]
= \displaystyle\operatorname{sgn}\left[p_{i}^{\nu}\,\left({1\over N}\sum_{j=1}^{N}p_{j}^{\nu}\,p_{j}^{\nu}\right)+{1\over N}\sum_{\mu\neq\nu}\sum_{j}p_{i}^{\mu}\,p_{j}^{\mu}\,p_{j}^{\nu}\right]\, $$
where we have separated the pattern $ \nu$ from the other patterns. The factor in parenthesis on the right-hand side adds up to one and can therefore be dropped. We now multiply the second term on the right-hand side by a factor $ 1=p_{i}^{\nu}\,p_{i}^{\nu}$. Finally, because$ p_{i}^{\nu}=\pm 1$, a factor $p_{i}^{\nu}$ can be pulled out of the argument of the sign-function:
$$S_{i}(t_{0}+\Delta t)=p_{i}^{\nu}\,\operatorname{sgn}[1+{1\over N}\sum_{j}\sum_{\mu\neq\nu}p_{i}^{\mu}\,p_{i}^{\nu}\,p_{j}^{\mu}\,p_{j}^{\nu}]=p_{i}^{\nu}\,\operatorname{sgn}[1-a_{i\nu}]\,.
$$
The desired fixed point exists only if $1>a_{i\nu}=-{1\over N}\sum_{j}\sum_{\mu\neq\nu}p_{i}^{\mu}\,p_{i}^{\nu}\,p_{j}^{\mu}\,p_{j}^{\nu}$ for all neurons $i$. In other words, even if the network is initialized in perfect agreement with one of the patterns, it can happen that one or a few neurons flip their sign. The probability to move away from the pattern is equal to the probability of finding a value $a_{i\nu}>1$ for one of the neurons $i$.
Because patterns are generated from independent random numbers $ p_{i}^{\mu}=\pm 1$ with zero mean, the product $ p_{i}^{\mu}\,p_{i}^{\nu}\,p_{j}^{\mu}\,p_{j}^{\nu}=\pm 1$ is also a binary random number with zero mean. Since the values $ p_{i}^{\mu}$ are chosen independently for each neuron $ i$ and each pattern $ \mu$, the term $a_{i\nu}$ can be visualized as a random walk of $N\,(M-1)$ steps and step size $1/N$. For a large number of steps, the positive or negative walking distance can be approximated by a Gaussian distribution with zero mean and standard deviation $ \sigma=\sqrt{(M-1)/N}\approx\sqrt{M/N}$ for $ M\gg 1$. The probability that the activity state of neuron $ i$ erroneously flips is therefore proportional to
$$
P_{\rm error}={1\over\sqrt{2\pi}\sigma}\int_{1}^{\infty}e^{-x^{2}\over 2\sigma^{2}}{\text{d}}x\approx{1\over 2}\left[1-{\rm erf}\left(\sqrt{{N\over 2M}}\right)\right]
$$
where we have introduced the error function
$$
{\rm erf}(x)={1\over\sqrt{\pi}}\int_{0}^{x}e^{-y^{2}}\,{\text{d}}y
$$
The most important insight is that the probability of an erroneous state-flip increases with the ratio $ M/N$. Formally, we can define the storage capacity $ C_{\rm store}$ of a network as the maximal number $ M^{\rm max}$ of patterns that a network of $ N$ neurons can retrieve
$$
C_{\rm store}={M^{\rm max}\over N}={M^{\rm max}\,N\over N^{2}}\,.
$$
For the second equality sign we have multiplied both numerator and denominator by a common factor $N$, which gives rise to the following interpretation. Since each pattern consists of $N$ neurons (i.e., $N$ binary numbers), the total number of bits that need to be stored at maximum capacity is $M^{\rm max}\,N$. In the Hopfield model, patterns are stored by an appropriate choice of the synaptic connections. The number of available synapses in a fully connected network is $N^{2}$. Therefore, the storage capacity measures the number of bits stored per synapse.
We can evaluate this solution for various choices of $ P_{\rm error}$. For example, if we accept an error probability of $ P_{\rm error}=0.001$, we find a storage capacity of $ C_{\rm store}=0.105$.
Hence, a network of 10’000 neurons is capable of storing about 1’000 patterns with $P_{\rm error}=0.001$. Thus in each of the patterns, we expect that about 10 neurons exhibit erroneous activity. We emphasize that the above calculation focuses on the first iteration step only. If we start in the pattern, then about 10 neurons will flip their state in the first iteration. But these flips could in principle cause further neurons to flip in the second iteration and eventually initiate an avalanche of many other changes.
A more precise calculation shows that such an avalanche does not occur if the number of stored patterns stays below a limit such that $C_{\rm store}=0.138$.
End of explanation
"""
%matplotlib inline
import numpy as np
import time
network_size=20
start_time = time.time()
#alpha_list = [0.01,0.05,0.08,0.10,0.12,0.15,0.20,0.30]
alpha_list = np.arange(0.07, 0.22, 0.01)
m_list = []
hopfield_net = network.HopfieldNetwork(nr_neurons= network_size**2)
factory = pattern_tools.PatternFactory(network_size,network_size)
# create a checkerboard pattern and add it to the pattern list
#checkerboard = factory.create_checkerboard()
#pattern_list = [checkerboard]
pattern_list=[]
pattern_list.extend(factory.create_random_pattern_list(nr_patterns=int(alpha_list[-1]*network_size*network_size), on_probability=0.5))
for alpha in alpha_list:
# create an instance of the class HopfieldNetwork
# let the hopfield network "learn" the patterns. Note: they are not stored
# explicitly but only network weights are updated !
hopfield_net.store_patterns(pattern_list[0:int(alpha*network_size*network_size)])
# create a noisy version of a pattern and use that to initialize the network
noisy_init_state=pattern_tools.flip_n(pattern_list[0],nr_of_flips=int(network_size*network_size*0.15)) # 10% of noisy pixels
hopfield_net.set_state_from_pattern(noisy_init_state)
states_as_patterns_old = [noisy_init_state]
states_as_patterns_new = [pattern_list[0]]
# from this initial state, let the network dynamics evolve.
#hopfield_net.run(nr_steps=50)
#while pattern_tools.compute_overlap(states_as_patterns_old[0],states_as_patterns_new[0]) < 0.95:
#print pattern_tools.compute_overlap(states_as_patterns_old[0],states_as_patterns_new[0])
#states_as_patterns_old = states_as_patterns_new
#states_new = hopfield_net.run_with_monitoring(nr_steps=1)
#states_as_patterns_new = factory.reshape_patterns(states_new)
# each network state is a vector. reshape it to the same shape used to create the patterns.
hopfield_net.run(nr_steps=60)
states = hopfield_net.run_with_monitoring(nr_steps=1)
states_as_patterns = factory.reshape_patterns(states)
m_list.append(pattern_tools.compute_overlap(states_as_patterns[0],pattern_list[0]))
# plot the states of the network
#print("--- %s seconds ---" % (time.time() - start_time))
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
# Create a trace
trace = go.Scatter(
x = alpha_list,
y = m_list,
mode = 'lines+markers'
)
data = [trace]
layout= go.Layout(
title= 'Phase transition at T=0',
hovermode= 'closest',
xaxis= dict(
title= 'Alpha',
ticklen= 5,
zeroline= False,
gridwidth= 2,
),
yaxis=dict(
title= 'Correlation',
ticklen= 5,
gridwidth= 2,
),
showlegend= False
)
fig= go.Figure(data=data, layout=layout)
py.iplot(fig)
"""
Explanation: As we perform the dynamics of the Hopfield model, the total energy of the network decreases strongly. This is in accordance with the physical interpretation we had. Even though the energy always decreases (even for wrong retrievals), we can observe the typical behavior of the magnetization $m$ (or correlation) as a function of $p/N$.
End of explanation
"""
from wand.image import Image as WImage
img = WImage(filename='Phase_transition_with_size25.pdf')
img
"""
Explanation: As the theory predicted, at temperature $0$ we should observe a phase transition around $\alpha = 0.14$, and that is exactly what the sharp drop in $m$ is telling us. Beyond a precise number of stored patterns, the network no longer works well.
To conclude this observation, we are going to compute the phase diagram of the Hopfield network with Glauber dynamics. Each point will be the phase transition in the $(\alpha,T)$ space.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from neurodynex.hopfield_network import network, pattern_tools, plot_tools
import numpy
# the letters we want to store in the hopfield network
letter_list = ['A', 'B', 'C', 'S', 'X', 'Y', 'Z']
# set a seed to reproduce the same noise in the next run
# numpy.random.seed(123)
abc_dictionary =pattern_tools.load_alphabet()
print("the alphabet is stored in an object of type: {}".format(type(abc_dictionary)))
# access the first element and get it's size (they are all of same size)
pattern_shape = abc_dictionary['A'].shape
print("letters are patterns of size: {}. Create a network of corresponding size".format(pattern_shape))
# create an instance of the class HopfieldNetwork
hopfield_net = network.HopfieldNetwork(nr_neurons= pattern_shape[0]*pattern_shape[1])
# create a list using Pythons List Comprehension syntax:
pattern_list = [abc_dictionary[key] for key in letter_list ]
plot_tools.plot_pattern_list(pattern_list)
# store the patterns
hopfield_net.store_patterns(pattern_list)
# # create a noisy version of a pattern and use that to initialize the network
noisy_init_state = pattern_tools.get_noisy_copy(abc_dictionary['A'], noise_level=0.2)
hopfield_net.set_state_from_pattern(noisy_init_state)
# from this initial state, let the network dynamics evolve.
states = hopfield_net.run_with_monitoring(nr_steps=4)
# each network state is a vector. reshape it to the same shape used to create the patterns.
states_as_patterns = pattern_tools.reshape_patterns(states, pattern_list[0].shape)
# plot the states of the network
plot_tools.plot_state_sequence_and_overlap(
states_as_patterns, pattern_list, reference_idx=0, suptitle="Network dynamics for letter A")
"""
Explanation: This phase diagram shows that there is roughly a triangular region where the network is a good memory device. Outside this region (on the right) the device is not useful as a memory device; $m$ is 0. At the boundary, $m$ jumps discontinuously down to $0$, except on the $T$ axis. There is a critical temperature $T=1$ which sets the limit on the $T$ axis. In the terminology of phase transitions, this transition is of first order.
As one could have anticipated, an increase in temperature leads to a decrease in the storage capacity. At high temperature it is harder to fix the state of the network.
Correlated patterns :
So far we have only talked about uncorrelated patterns. But in reality the patterns we see every day are strongly correlated. Hence we can ask whether this algorithm works as well for correlated patterns.
Let's take the alphabet. We are going to observe the influence of the letters on the dynamics of the network and on the storage capacity.
End of explanation
"""
letter_list = ['A', 'B', 'C', 'R','S', 'X', 'Y', 'Z']
hopfield_net = network.HopfieldNetwork(nr_neurons= pattern_shape[0]*pattern_shape[1])
pattern_list = [abc_dictionary[key] for key in letter_list ]
plot_tools.plot_pattern_list(pattern_list)
# store the patterns
hopfield_net.store_patterns(pattern_list)
# # create a noisy version of a pattern and use that to initialize the network
noisy_init_state = pattern_tools.get_noisy_copy(abc_dictionary['A'], noise_level=0.2)
hopfield_net.set_state_from_pattern(noisy_init_state)
overlap_matrix = pattern_tools.compute_overlap_matrix(pattern_list)
plot_tools.plot_overlap_matrix(overlap_matrix)
# from this initial state, let the network dynamics evolve.
states = hopfield_net.run_with_monitoring(nr_steps=5)
energy_list = []
for p in range(len(states)) :
energy_list.append(- np.sum(np.dot(np.dot(hopfield_net.weights,states[p]),states[p])))
# each network state is a vector. reshape it to the same shape used to create the patterns.
states_as_patterns = pattern_tools.reshape_patterns(states, pattern_list[0].shape)
# plot the states of the network
plot_tools.plot_state_sequence_and_overlap(
states_as_patterns, pattern_list, reference_idx=0, suptitle="Network dynamics")
plt.figure()
plt.plot(energy_list,'o')
plt.title('Energy of the network')
"""
Explanation: Here we have used a list of structured patterns: the letters. Each letter is represented on a 10 by 10 grid. In this example we try to recover the letter $A$ from a noisy pattern. We can observe that $A$ is a stable attractor of the dynamics. As in the case of uncorrelated patterns, the system can store the information. However, is the network able to store as much information as in the case of uncorrelated patterns?
End of explanation
"""
|
mspcvsp/cincinnati311Data | Cincinnati311DataEDA.ipynb | gpl-3.0 | from Cincinnati311CSVDataParser import Cincinnati311CSVDataParser
from csv import DictReader
import os
import re
import urllib2
"""
Explanation: Setup Software Environment
End of explanation
"""
data_dir = "./Data"
csv_file_path = os.path.join(data_dir, "cincinnati311.csv")
if not os.path.exists(csv_file_path):
if not os.path.exists(data_dir):
os.mkdir(data_dir)
url = 'https://data.cincinnati-oh.gov/api/views' +\
'/4cjh-bm8b/rows.csv?accessType=DOWNLOAD'
response = urllib2.urlopen(url)
html = response.read()
with open(csv_file_path, 'wb') as h_file:
h_file.write(html)
"""
Explanation: Download the Cincinnati 311 (Non-Emergency) Service Requests data
Dataset Description
Example of downloading a *.csv file programmatically using urllib2
End of explanation
"""
h_file = open("./Data/cincinnati311.csv", "r")
fieldnames = [re.sub("_", "", elem.lower())\
for elem in h_file.readline().rstrip().split(',')]
readerobj = DictReader(h_file, fieldnames)
print readerobj.next()
h_file.close()
"""
Explanation: Parse the 1st record
End of explanation
"""
# head -n 3 cincinnati311.csv > sample.csv
h_file = open("./Data/sample.csv", "r")
parserobj = Cincinnati311CSVDataParser(h_file)
for record in parserobj:
print record
h_file.close()
"""
Explanation: Implement a class that parses and cleans a Cincinnati 311 data record
This class forms the basis for mapper functions
This software applies the dateutil package parser function to parse date/time strings
End of explanation
"""
|
georgetown-analytics/yelp-classification | data_analysis/Basic_Review_Analysis-ed-Copy.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import json
import csv
import os
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.pipeline import Pipeline
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from pymongo import MongoClient
from datetime import datetime
"""
Explanation: Import important modules and declare important directories
End of explanation
"""
def plot_coefficients(classifier, feature_names, top_features=20):
coef = classifier.coef_.ravel()[0:200]
top_positive_coefficients = np.argsort(coef)[-top_features:]
top_negative_coefficients = np.argsort(coef)[:top_features]
top_coefficients = np.hstack([top_negative_coefficients, top_positive_coefficients])
#create plot
plt.figure(figsize=(15, 5))
colors = ['red' if c < 0 else 'blue' for c in coef[top_coefficients]]
plt.bar(np.arange(2 * top_features), coef[top_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 1 + 2 * top_features), feature_names[top_coefficients], rotation=60, ha='right')
plt.show()
#def bayesian_average()
#This is the main folder where all the modules and JSON files are stored on my computer.
#You need to change this to the folder path specific to your computer
file_directory = "/Users/ed/yelp-classification/"
reviews_file = "cleaned_reviews_states_2010.json"
biz_file = "cleaned_business_data.json"
"""
Explanation: This is a function that we'll use later to plot the results of a linear SVM classifier
End of explanation
"""
#This is a smaller subset of our overall Yelp data
#I randomly chose 5000 reviews from each state and filed them into the JSON file
#Note that for the overall dataset, we have about 2 million reviews.
#That's why we need to use a data management system like MongoDB in order to hold all our data
#and to more efficiently manipulate it
reviews_json = json.load(open(file_directory+reviews_file))
biz_json = json.load(open(file_directory+biz_file))
for key in reviews_json.keys():
reviews_json[key] = reviews_json[key][0:5000]
#Let's see how reviews_json is set up
#changed this for python 3
print(reviews_json.keys())
reviews_json['OH'][0]
#We can see that on the highest level, the dictionary keys are the different states
#Let's look at the first entry under Ohio
print(reviews_json['OH'][0]['useful'])
#So for each review filed under Ohio, we have many different attributes to choose from
#Let's look at what the review and rating was for the first review filed under Ohio
print(reviews_json['OH'][0]['text'])
print(reviews_json['OH'][0]['stars'])
"""
Explanation: Load in the sample JSON file and view its contents
End of explanation
"""
#We want to split up reviews between text and labels for each state
reviews = []
stars = []
cool = []
useful = []
funny = []
compliment = []
cunumber = []
for key in reviews_json.keys():
for review in reviews_json[key]:
reviews.append(review['text'])
stars.append(review['stars'])
cool.append(review['cool'])
useful.append(review['useful'])
funny.append(review['funny'])
compliment.append(review['funny']+review['useful']+review['cool'])
cunumber.append(review['useful']+review['cool'])
#Just for demonstration, let's pick out the same review example as above but from our respective lists
print(reviews[0])
print(stars[0])
print(cool[0])
print(useful[0])
print(funny[0])
reviews_json['OH'][1]['cool']+1
"""
Explanation: Now, let's create parallel lists across all of the states:
One that holds all the review text
One that holds all the ratings (plus lists for the cool, useful, and funny vote counts)
End of explanation
"""
#added 'low_memory=False' after I got a warning about mixed data types
harvard_dict = pd.read_csv('HIV-4.csv',low_memory=False)
negative_words = list(harvard_dict.loc[harvard_dict['Negativ'] == 'Negativ']['Entry'])
positive_words = list(harvard_dict.loc[harvard_dict['Positiv'] == 'Positiv']['Entry'])
#Use word dictionary from Hu and Liu (2004)
#had to use encoding = "ISO-8859-1" to avoid error
negative_words = open('negative-words.txt', 'r',encoding = "ISO-8859-1").read()
negative_words = negative_words.split('\n')
positive_words = open('positive-words.txt', 'r',encoding = "ISO-8859-1").read()
positive_words = positive_words.split('\n')
total_words = negative_words + positive_words
total_words = list(set(total_words))
review_length = []
negative_percent = []
positive_percent = []
for review in reviews:
length_words = len(review.split())
    neg_words = [x.lower() for x in review.split() if x.lower() in negative_words]
    pos_words = [x.lower() for x in review.split() if x.lower() in positive_words]
negative_percent.append(float(len(neg_words))/float(length_words))
positive_percent.append(float(len(pos_words))/float(length_words))
review_length.append(length_words)
regression_df = pd.DataFrame({'stars':stars, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
use_df = pd.DataFrame({'useful':cunumber, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
use_df2 = pd.DataFrame({'useful':cunumber, 'review_length':review_length})
#Standardize the independent variables (the regressors)
std_vars = ['neg_percent', 'positive_percent', 'review_length']
for var in std_vars:
len_std = regression_df[var].std()
len_mu = regression_df[var].mean()
regression_df[var] = [(x - len_mu)/len_std for x in regression_df[var]]
"""
Explanation: Let's take a look at the following regression (information is correlated with review length):
$Rating = \beta_{neg}\,neg + \beta_{pos}\,pos + \beta_{num}\,\text{Std\_NumWords} + \epsilon$
Where:
$neg = \frac{\text{Number of Negative Words}}{\text{Total Number of Words}}$
$pos = \frac{\text{Number of Positive Words}}{\text{Total Number of Words}}$
End of explanation
"""
#The R-Squared from using the Harvard Dictionary is 0.1 but with the Hu & Liu word dictionary
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = regression_df.stars
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
#The R-Squared from using the Harvard Dictionary is 0.1 but with the Hu & Liu word dictionary
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = use_df2.useful
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
"""
Explanation: Let's try using the dictionary sentiment categories as explanatory (independent) variables
End of explanation
"""
multi_logit = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
multi_logit.set_params(clf__alpha=1, clf__fit_prior = True, clf__class_prior = None).fit(train_reviews, train_ratings)
output['multi_logit'] = multi_logit.predict(test_reviews)
x = np.array(regression_df.stars)
#beta = [3.3648, -0.3227 , 0.5033]
y = [int(round(i)) for i in list(est2.fittedvalues)]
y = np.array(y)
errors = np.subtract(x,y)
np.sum(errors)
# fig, ax = plt.subplots(figsize=(5,5))
# ax.plot(x, x, 'b', label="data")
# ax.plot(x, y, 'o', label="ols")
# #ax.plot(x, est2.fittedvalues, 'r--.', label="OLS")
# #ax.plot(x, iv_u, 'r--')
# #ax.plot(x, iv_l, 'r--')
# ax.legend(loc='best');
#Do a QQ plot of the data
fig = sm.qqplot(errors)
plt.show()
"""
Explanation: NOTE: the BLUE property does not require normality of errors
The Gauss-Markov theorem states that the ordinary least squares estimate is the best linear unbiased estimator (BLUE) of the regression coefficients ('best' meaning minimum variance among all linear unbiased estimators) as long as the errors:
(1) have mean zero
(2) are uncorrelated
(3) have constant variance
Now let's try it using multinomial logit regression
End of explanation
"""
star_hist = pd.DataFrame({'Ratings':stars})
star_hist.plot.hist()
cooluse_hist = pd.DataFrame({'Ratings':cunumber})
cooluse_hist.plot.hist(range=[0, 6])
"""
Explanation: Let's plot the overall distribution of ratings aggregated across all of the states
End of explanation
"""
df_list = []
states = list(reviews_json.keys())
for state in states:
stars_state = []
for review in reviews_json[state]:
stars_state.append(review['stars'])
star_hist = pd.DataFrame({'Ratings':stars_state})
df_list.append(star_hist)
for i in range(0, len(df_list)):
print(states[i] + " Rating Distribution")
df_list[i].plot.hist()
plt.show()
"""
Explanation: Let's plot the rating distribution of reviews within each of the states.
End of explanation
"""
#First let's separate out our dataset into a training sample and a test sample
#We specify a training sample percentage of 80% of our total dataset. This is just a rule of thumb
training_percent = 0.8
train_reviews = reviews[0:int(len(reviews)*training_percent)]
test_reviews = reviews[int(len(reviews)*training_percent):len(reviews)]
train_ratings = stars[0:int(len(stars)*training_percent)]
test_ratings = stars[int(len(stars)*training_percent):len(stars)]
"""
Explanation: Now let's try to build a simple linear support vector machine
Note that all support vector machine algorithms rely on drawing a separating hyperplane among the different classes; such a hyperplane is not guaranteed to exist. For a complete set of conditions that must be satisfied for this to be an appropriate algorithm to use, please see below:
http://www.unc.edu/~normanp/890part4.pdf
The following is also a good, and more general, introduction to Support Vector Machines:
http://web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf
End of explanation
"""
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
vocabulary = total_words, \
max_features = 200)
train_data_features = vectorizer.fit_transform(train_reviews)
test_data_features = vectorizer.fit_transform(test_reviews)
"""
Explanation: In order to use the machine learning algorithms in Sci-Kit learn, we first have to initialize a CountVectorizer object. This object creates a matrix representation of our text, with one column per word in the vocabulary. There are many options that we can specify when we initialize our CountVectorizer object (see the documentation for the full list), but they essentially all relate to how the words are represented in the final matrix.
End of explanation
"""
output = pd.DataFrame( data={"Reviews": test_reviews, "Rating": test_ratings} )
"""
Explanation: Create dataframe to hold our results from the classification algorithms
End of explanation
"""
#Let's do the same exercise as above but use TF-IDF, you can learn more about TF-IDF here:
#https://nlp.stanford.edu/IR-book/html/htmledition/tf-idf-weighting-1.html
tf_transformer = TfidfTransformer(use_idf=True)
train_data_features = tf_transformer.fit_transform(train_data_features)
test_data_features = tf_transformer.fit_transform(test_data_features)
from sklearn.svm import LinearSVC
lin_svm = LinearSVC()
lin_svm = lin_svm.fit(train_data_features, train_ratings)
lin_svm_result = lin_svm.predict(test_data_features)
output['lin_svm'] = lin_svm_result
output['Accurate'] = np.where(output['Rating'] == output['lin_svm'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
#Here we plot the features with the highest absolute value coefficient weight
plot_coefficients(lin_svm, vectorizer.get_feature_names())
"""
Explanation: Let's create a linear SVM instance from SK Learn and train it on our subset of reviews. We'll output the results to an output dataframe and then calculate a total accuracy percentage.
End of explanation
"""
# random_forest = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', RandomForestClassifier())])
# random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
# output['random_forest'] = random_forest.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print(accurate_percentage)
bagged_dt = Pipeline([('vect', vectorizer),
                      ('tfidf', TfidfTransformer()),
                      ('clf', BaggingClassifier())])
bagged_dt.set_params(clf__n_estimators=100, clf__n_jobs=1).fit(train_reviews, train_ratings)
output['bagged_dt'] = bagged_dt.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['bagged_dt'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
multi_logit = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
multi_logit.set_params(clf__alpha=1, clf__fit_prior = True, clf__class_prior = None).fit(train_reviews, train_ratings)
output['multi_logit'] = multi_logit.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['multi_logit'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
random_forest = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier())])
random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
output['random_forest'] = random_forest.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
"""
Explanation: SKLearn uses what's known as a pipeline. Instead of having to declare each of these objects on its own and pass them into each other, we can create one object with all the necessary options specified and then use that to run the algorithm. For each pipeline below, we specify the vectorizer to be the CountVectorizer object we defined above, set it to apply TF-IDF weighting, and then specify the classifier that we want to use.
Below, we create a separate pipeline for Random Forest, a Bagged Decision Tree, and Multinomial Logistic Regression. We then append the results to the dataframe that we've already created.
End of explanation
"""
print(metrics.confusion_matrix(test_ratings, bagged_dt.predict(test_reviews), labels = [1, 2, 3, 4, 5]))
"""
Explanation: Test results using all of the states
0.5383 from Naive TF-IDF Linear SVM
0.4567 from Naive TF-IDF Linear SVM using Harvard-IV dictionary
0.5241 from Naive TF-IDF Bagged DT using 100 estimators
0.496 from Naive TF-IDF Bagged DT using 100 estimators and Harvard-IV dictionary
0.5156 from Naive TF-IDF RandomForest and Harvard-IV dictionary
0.53 from Naive TF-IDF RF
0.458 from Naive TF-IDF SVM
As you can see, none of the above classifiers performs significantly better than a fair coin toss. This is most likely due to the heavily skewed distribution of review ratings. There are many reviews that receive 4 or 5 stars, therefore it is likely that the language associated with each review is being confused with each other. We can confirm this by looking at the "confusion matrix" of our predictions.
End of explanation
"""
for review in reviews_json[list(reviews_json.keys())[0]]:
print(type(review['date']))
break
reviews_json.keys()
latitude_list = []
longitude_list = []
stars_list = []
count_list = []
state_list = []
for biz in biz_json:
stars_list.append(biz['stars'])
latitude_list.append(biz['latitude'])
longitude_list.append(biz['longitude'])
count_list.append(biz['review_count'])
state_list.append(biz['state'])
biz_df = pd.DataFrame({'ratings':stars_list, 'latitude':latitude_list, 'longitude': longitude_list, 'review_count': count_list, 'state':state_list})
"""
Explanation: Each row and column corresponds to a rating number. For example, element (1,1) is the number of 1 star reviews that were correctly classified. Element (1,2) is the number of 1 star reviews that were incorrectly classified as 2 stars. Therefore, the sum of the diagonal represents the total number of correctly classified reviews. As you can see, the bagged decision tree classifier is classifying many four starred reviews as five starred reviews and vice versa.
This indicates that we can improve our results by using more aggregated categories. For example, we can label all four and five star reviews "good" and all other review ratings "bad".
End of explanation
"""
states = [u'OH', u'NC', u'WI', u'IL', u'AZ', u'NV']
cmap, norm = mpl.colors.from_levels_and_colors([1, 2, 3, 4, 5], ['red', 'orange', 'yellow', 'green', 'blue'], extend = 'max')
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
plt.ylim(min(state_df_filt.latitude), max(state_df_filt.latitude))
plt.xlim(min(state_df_filt.longitude), max(state_df_filt.longitude))
plt.scatter(state_df_filt.longitude, state_df_filt.latitude, c=state_df_filt.ratings, cmap=cmap, norm=norm)
plt.show()
    print(state)
"""
Explanation: We draw a heat map for each state below. Longitude is on the X axis and Latitude is on the Y axis. The color coding is as follows:
Red = Rating of 1
Orange = Rating of 2
Yellow = Rating of 3
Green = Rating of 4
Blue = Rating of 5
End of explanation
"""
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
state_df_filt['longitude'] = (state_df_filt.longitude - state_df.longitude.mean())/state_df.longitude.std()
state_df_filt['latitude'] = (state_df_filt.latitude - state_df.latitude.mean())/state_df.latitude.std()
state_df_filt['review_count'] = (state_df_filt.review_count - state_df.review_count.mean())/state_df.review_count.std()
X = np.column_stack((state_df_filt.longitude, state_df_filt.latitude, state_df_filt.review_count))
y = state_df_filt.ratings
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
    print(state)
"""
Explanation: We run the following linear regression model for each of the states:
$Rating = \beta_{1}\,\text{Longitude} + \beta_{2}\,\text{Latitude} + \beta_{3}\,\text{NumReviews} + \epsilon$
End of explanation
"""
|
YannickJadoul/Parselmouth | docs/examples/psychopy_experiments.ipynb | gpl-3.0 | # ** Begin Experiment **
import parselmouth
import numpy as np
import random
conditions = ['a', 'e']
stimulus_files = {'a': "audio/bat.wav", 'e': "audio/bet.wav"}
STANDARD_INTENSITY = 70.
stimuli = {}
for condition in conditions:
stimulus = parselmouth.Sound(stimulus_files[condition])
stimulus.scale_intensity(STANDARD_INTENSITY)
stimuli[condition] = stimulus
"""
Explanation: PsychoPy experiments
Parselmouth also allows Praat functionality to be included in an interactive PsychoPy experiment (refer to the subsection on installing Parselmouth for PsychoPy for detailed installation instructions for the PsychoPy graphical interface, the PsychoPy Builder). The following example shows how easily Python code that uses Parselmouth can be injected in such an experiment; following an adaptive staircase experimental design, at each trial of the experiment a new stimulus is generated based on the responses of the participant. See e.g. Kaernbach, C. (2001). Adaptive threshold estimation with unforced-choice tasks. Attention, Perception, & Psychophysics, 63, 1377--1388., or the PsychoPy tutorial at https://www.psychopy.org/coder/tutorial2.html.
In this example, we use an adaptive staircase experiment to determine the minimal amount of noise that makes the participant unable to distinguish between two audio fragments, "bat" and "bet" (bat.wav, bet.wav). At every iteration of the experiment, we want to generate a version of these audio files with a specific signal-to-noise ratio, of course using Parselmouth to do so. Depending on whether the participant correctly identifies whether the noisy stimulus was "bat" or "bet", the noise level is then either increased or decreased.
As Parselmouth is just another Python library, using it from the PsychoPy Coder interface or from a standard Python script that imports the psychopy module is quite straightforward. However, PsychoPy also features a so-called Builder interface, which is a graphical interface to set up experiments with minimal or no coding. In this Builder, a user can create multiple experimental 'routines' out of different 'components' and combine them through 'loops', that can all be configured graphically:
For our simple example, we create a single routine trial, with a Sound, a Keyboard, and a Text component. We also insert a loop around this routine of the type staircase, such that PsychoPy will take care of the actual implementation of the loop in adaptive staircase design. The full PsychoPy experiment which can be opened in the Builder can be downloaded here: adaptive_listening.psyexp
Finally, to customize the behavior of the trial routine and to be able to use Parselmouth inside the PsychoPy experiment, we still add a Code component to the routine. This component will allow us to write Python code that interacts with the rest of the components and with the adaptive staircase loop. The Code component has different tabs that allow us to insert custom code at different points during the execution of our trial.
First, there is the Begin Experiment tab. The code in this tab is executed only once, at the start of the experiment. We use this to set up the Python environment, importing modules and initializing variables, and defining constants:
End of explanation
"""
level = 10
"""
Explanation: The code in the Begin Routine tab is executed before the routine, so in our example, for every iteration of the surrounding staircase loop. This allows us to actually use Parselmouth to generate the stimulus that should be played to the participant during this iteration of the routine. To do this, we need to access the current value of the adaptive staircase algorithm: PsychoPy stores this in the Python variable level. For example, at some point during the experiment, this could be 10 (representing a signal-to-noise ratio of 10 dB):
End of explanation
"""
# 'filename' variable is also set by PsychoPy and contains base file name of saved log/output files
filename = "data/participant_staircase_23032017"
# PsychoPy also creates a Trials object, containing e.g. information about the current iteration of the loop
# So let's quickly fake this, in this example, such that the code can be executed without errors
# In PsychoPy this would be a `psychopy.data.TrialHandler` (https://www.psychopy.org/api/data.html#psychopy.data.TrialHandler)
class MockTrials:
def addResponse(self, response):
print("Registering that this trial was {}successful".format("" if response else "un"))
trials = MockTrials()
trials.thisTrialN = 5 # We only need the 'thisTrialN' attribute of the 'trials' variable
# The Sound component can also be accessed by it's name, so let's quickly mock that as well
# In PsychoPy this would be a `psychopy.sound.Sound` (https://www.psychopy.org/api/sound.html#psychopy.sound.Sound)
class MockSound:
def setSound(self, file_name):
print("Setting audio file of Sound component to '{}'".format(file_name))
sound_1 = MockSound()
# And the same for our Keyboard component, `key_resp_2`:
class MockKeyboard:
pass
key_resp_2 = MockKeyboard()
# Finally, let's also seed the random module to have a consistent output across different runs
random.seed(42)
# Let's also create the directory where we will store our example output
!mkdir data
"""
Explanation: To execute the code we want to put in the Begin Routine tab, we need to add a few variables that would normally be made available by the PsychoPy Builder:
End of explanation
"""
# ** Begin Routine **
random_condition = random.choice(conditions)
random_stimulus = stimuli[random_condition]
noise_samples = np.random.normal(size=random_stimulus.n_samples)
noisy_stimulus = parselmouth.Sound(noise_samples,
sampling_frequency=random_stimulus.sampling_frequency)
noisy_stimulus.scale_intensity(STANDARD_INTENSITY - level)
noisy_stimulus.values += random_stimulus.values
noisy_stimulus.scale_intensity(STANDARD_INTENSITY)
# use 'filename' to save our custom stimuli
stimulus_file_name = filename + "_stimulus_" + str(trials.thisTrialN) + ".wav"
noisy_stimulus.resample(44100).save(stimulus_file_name, 'WAV')
sound_1.setSound(stimulus_file_name)
"""
Explanation: Now, we can execute the code that would be in the Begin Routine tab:
End of explanation
"""
from IPython.display import Audio
Audio(filename="data/participant_staircase_23032017_stimulus_5.wav")
"""
Explanation: Let's listen to the file we have just generated and that we would play to the participant:
End of explanation
"""
key_resp_2.keys = 'a'
"""
Explanation: In this example, we do not really need to have code executed during the trial (i.e., in the Each Frame tab). However, at the end of the trial, we need to inform the PsychoPy staircase loop whether the participant was correct or not, because this will affect the further execution of the adaptive staircase, and thus the value of the level variable set by PsychoPy. For this we add a final line in the End Routine tab. Let's say the participant guessed "bat" and pressed the a key:
End of explanation
"""
# ** End Routine **
trials.addResponse(key_resp_2.keys == random_condition)
# Clean up the output directory again
!rm -r data
"""
Explanation: The End Routine tab then contains the following code to check the participant's answer against the randomly chosen condition, and to inform the trials object of whether the participant was correct:
End of explanation
"""
|
ddandur/Twords | jupyter_example_notebooks/Trump Tweets Example.ipynb | mit | import sys
sys.path.append('..')
from twords.twords import Twords
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
# this pandas line makes the dataframe display all text in a line; useful for seeing entire tweets
pd.set_option('display.max_colwidth', -1)
twit = Twords()
# set path to folder that contains jar files for twitter search
twit.jar_folder_path = "../jar_files_and_background/"
twit.get_all_user_tweets("realdonaldtrump", tweets_per_run=500)
twit.data_path = "realdonaldtrump"
twit.get_java_tweets_from_csv_list()
twit.convert_tweet_dates_to_standard()
"""
Explanation: Collect all tweets from @realDonaldTrump
End of explanation
"""
twit.tweets_df["retweets"] = twit.tweets_df["retweets"].map(int)
twit.tweets_df["favorites"] = twit.tweets_df["favorites"].map(int)
twit.tweets_df.sort_values("favorites", ascending=False)[:5]
twit.tweets_df.sort_values("retweets", ascending=False)[:5]
"""
Explanation: To sort tweets by favorites or retweets, we need to convert the unicode strings to integers:
End of explanation
"""
twit.background_path = '../jar_files_and_background/freq_table_72319443_total_words_twitter_corpus.csv'
twit.create_Background_dict()
twit.create_Stop_words()
twit.keep_column_of_original_tweets()
twit.lower_tweets()
twit.keep_only_unicode_tweet_text()
twit.remove_urls_from_tweets()
twit.remove_punctuation_from_tweets()
twit.drop_non_ascii_characters_from_tweets()
twit.drop_duplicate_tweets()
twit.convert_tweet_dates_to_standard()
twit.sort_tweets_by_date()
"""
Explanation: For some reason the search did not include Trump's username -- random errors like this sometimes happen when querying the Twitter website.
Look at word frequencies
End of explanation
"""
twit.create_word_bag()
twit.make_nltk_object_from_word_bag()
twit.create_word_freq_df(10000)
twit.word_freq_df.sort_values("log relative frequency", ascending = False, inplace = True)
twit.word_freq_df.head(20)
"""
Explanation: Make word frequency dataframe:
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 100
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
"""
Explanation: Look at most and least Trump-like tweets at varying levels of background requirement
At least 100 background occurrences:
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 1000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
"""
Explanation: At least 1000 background occurrences:
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 10000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
"""
Explanation: At least 10,000 background occurrences:
End of explanation
"""
num_words_to_plot = 32
background_cutoff = 10000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=False).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
"""
Explanation: And now look at least Trump-like relative to Twitter background:
End of explanation
"""
twit.tweets_containing("fuck")
"""
Explanation: Trump does not post about things happening automatically.
End of explanation
"""
|
moonbury/pythonanywhere | github/MasteringMatplotlib/mmpl-custom-and-config.ipynb | gpl-3.0 | import matplotlib
matplotlib.use('nbagg')
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from IPython.display import Image
"""
Explanation: Advanced Customization and Configuration
Table of Contents
Introduction
Customization
matplotlib Styles
Subplots
Making a Plan
Revisiting Pandas
Individual Plots
Combined Plots
Configuration
Run Control
Warm-up proceedures:
End of explanation
"""
print(plt.style.available)
"""
Explanation: Note that we're not using Seaborn for styling like we did previously -- that's beccause the first thing we're going to tackle is creating a custom matplotlib style :-)
Customization
Creating a custom style
In the previous notebook, we saw that we could list the available styles with the following call:
End of explanation
"""
def make_plot ():
x = np.random.randn(5000, 6)
(figure, axes) = plt.subplots(figsize=(16,10))
(n, bins, patches) = axes.hist(x, 12, normed=1, histtype='bar',
label=['Color 1', 'Color 2', 'Color 3',
'Color 4', 'Color 5', 'Color 6'])
axes.set_title("Histogram\nfor a\nNormal Distribution", fontsize=24)
axes.set_xlabel("Data Points", fontsize=16)
axes.set_ylabel("Counts", fontsize=16)
axes.legend()
plt.show()
plt.style.use('ggplot')
make_plot()
"""
Explanation: You can create custom styles and use them by calling style.use with the path or URL to the style sheet. Alternatively, if you save your <style-name>.mplstyle file to the ~/.matplotlib/stylelib directory (you may need to create it), you can reuse your custom style sheet with a call to style.use(<style-name>). Note that a custom style sheet in ~/.matplotlib/stylelib will override a style sheet defined by matplotlib if the styles have the same name.
We've created a style sheet for you to use in this repository for this notebook, but before we go further, let's create a function that will generate a demo plot for us. Then we'll render it, using the default style -- thus having a baseline to compare our work to:
End of explanation
"""
#Image(filename="superhero.png")
"""
Explanation: Okay, we've got our sample plot. Now let's look at the style.
We've created a style called "Superheroine", based on Thomas Park's excellent Bootstrap theme, Superhero. Here's a screenshot of the Boostrap theme:
End of explanation
"""
ls -l ../styles/
"""
Explanation: We've saved captured some of the colors from this screenshot and saved them in a couple of plot style files to the "styles" directory in this notebook repo:
End of explanation
"""
cat ./superheroine-2.mplstyle
"""
Explanation: Basically, we couldn't make up our mind about whether we liked the light text (style 1) or the orange text (style 2). So we kept both :-)
Let's take a look at the second one's contents which show the hexadecimal colors we copied from the Boostrap theme:
End of explanation
"""
plt.style.use("./superheroine-2.mplstyle")
"""
Explanation: Now let's load it:
End of explanation
"""
make_plot()
"""
Explanation: And then re-render our plot:
End of explanation
"""
import sys
sys.path.append("../lib")
import demodata, demoplot, radar
raw_data = demodata.get_raw_data()
raw_data.head()
limited_data = demodata.get_limited_data()
limited_data.head()
demodata.get_all_auto_makes()
(makes, counts) = demodata.get_make_counts(limited_data)
counts
(makes, counts) = demodata.get_make_counts(limited_data, lower_bound=6)
counts
data = demodata.get_limited_data(lower_bound=6)
data.head()
len(data.index)
sum([x[1] for x in counts])
normed_data = data.copy()
normed_data.rename(columns={"horsepower": "power"}, inplace=True)
"""
Explanation: A full list of styles available for customization is in given in the matplotlib run control file. We'll be discussing this more in the next section.
Subplots
Making a Plan
In this next section, we'll be creating a sophisticated subplot to give you a sense of what's possible with matplotlib's layouts. We'll be ingesting data from the UCI Machine Learning Repository, in particular the 1985 Automobile Data Set, an example of data which can be used to assess the insurance risks for different vehicles.
We will use it in an effort to compare 21 automobile manufacturers (using 1985 data) along the following dimensions:
* mean price
* mean city MPG
* mean highway MPG
* mean horsepower
* mean curb-weight
* mean relative average loss payment
* mean insurance riskiness
We will limit ourselves to automobile manufacturers that have data for losses as well as 6 or more data rows.
Our subplot will be comprised of the following sections:
* An overall title
* Line plots for max, mean, and min prices
* Stacked bar chart for combined riskiness/losses
* Stacked bar chart for riskiness
* Stacked bar chart for losses
* Radar charts for each automobile manufacturer
* Combined scatter plot for city and highway MPG
These will be composed as subplots in the following manner:
```
| overall title |
| price ranges |
| combined loss/risk | |
| | radar |
---------------------- plots |
| risk | loss | |
| mpg |
```
Revisiting Pandas
End of explanation
"""
demodata.norm_columns(["city mpg", "highway mpg", "power"], normed_data)
normed_data.head()
"""
Explanation: Higher values are better for these:
End of explanation
"""
demodata.invert_norm_columns(["price", "weight", "riskiness", "losses"], normed_data)
normed_data.head()
"""
Explanation: Lower values are better for these:
End of explanation
"""
figure = plt.figure(figsize=(15, 5))
prices_gs = mpl.gridspec.GridSpec(1, 1)
prices_axes = demoplot.make_autos_price_plot(figure, prices_gs, data)
plt.show()
figure = plt.figure(figsize=(15, 5))
mpg_gs = mpl.gridspec.GridSpec(1, 1)
mpg_axes = demoplot.make_autos_mpg_plot(figure, mpg_gs, data)
plt.show()
figure = plt.figure(figsize=(15, 5))
risk_gs = mpl.gridspec.GridSpec(1, 1)
risk_axes = demoplot.make_autos_riskiness_plot(figure, risk_gs, normed_data)
plt.show()
figure = plt.figure(figsize=(15, 5))
loss_gs = mpl.gridspec.GridSpec(1, 1)
loss_axes = demoplot.make_autos_losses_plot(figure, loss_gs, normed_data)
plt.show()
figure = plt.figure(figsize=(15, 5))
risk_loss_gs = mpl.gridspec.GridSpec(1, 1)
risk_loss_axes = demoplot.make_autos_loss_and_risk_plot(figure, risk_loss_gs, normed_data)
plt.show()
figure = plt.figure(figsize=(15, 5))
radar_gs = mpl.gridspec.GridSpec(3, 7, height_ratios=[1, 10, 10], wspace=0.50, hspace=0.60, top=0.95, bottom=0.25)
radar_axes = demoplot.make_autos_radar_plot(figure, radar_gs, normed_data)
plt.show()
"""
Explanation: Individual Plots
End of explanation
"""
figure = plt.figure(figsize=(10, 8))
gs_master = mpl.gridspec.GridSpec(4, 2, height_ratios=[1, 2, 8, 2])
# Layer 1 - Title
gs_1 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[0, :])
title_axes = figure.add_subplot(gs_1[0])
# Layer 2 - Price
gs_2 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[1, :])
price_axes = figure.add_subplot(gs_2[0])
# Layer 3 - Risks & Radar
gs_31 = mpl.gridspec.GridSpecFromSubplotSpec(2, 2, height_ratios=[2, 1], subplot_spec=gs_master[2, :1])
risk_and_loss_axes = figure.add_subplot(gs_31[0, :])
risk_axes = figure.add_subplot(gs_31[1, :1])
loss_axes = figure.add_subplot(gs_31[1:, 1])
gs_32 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[2, 1])
radar_axes = figure.add_subplot(gs_32[0])
# Layer 4 - MPG
gs_4 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[3, :])
mpg_axes = figure.add_subplot(gs_4[0])
# Tidy up
gs_master.tight_layout(figure)
plt.show()
figure = plt.figure(figsize=(15, 15))
gs_master = mpl.gridspec.GridSpec(4, 2, height_ratios=[1, 24, 128, 32], hspace=0, wspace=0)
# Layer 1 - Title
gs_1 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[0, :])
title_axes = figure.add_subplot(gs_1[0])
title_axes.set_title("Demo Plots for 1985 Auto Maker Data", fontsize=30, color="#cdced1")
demoplot.hide_axes(title_axes)
# Layer 2 - Price
gs_2 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[1, :])
price_axes = figure.add_subplot(gs_2[0])
demoplot.make_autos_price_plot(figure, pddata=data, axes=price_axes)
# Layer 3, Part I - Risks
gs_31 = mpl.gridspec.GridSpecFromSubplotSpec(2, 2, height_ratios=[2, 1], hspace=0.4, subplot_spec=gs_master[2, :1])
risk_and_loss_axes = figure.add_subplot(gs_31[0, :])
demoplot.make_autos_loss_and_risk_plot(
figure, pddata=normed_data, axes=risk_and_loss_axes, x_label=False, rotate_ticks=True)
risk_axes = figure.add_subplot(gs_31[1, :1])
demoplot.make_autos_riskiness_plot(figure, pddata=normed_data, axes=risk_axes, legend=False, labels=False)
loss_axes = figure.add_subplot(gs_31[1:, 1])
demoplot.make_autos_losses_plot(figure, pddata=normed_data, axes=loss_axes, legend=False, labels=False)
# Layer 3, Part II - Radar
gs_32 = mpl.gridspec.GridSpecFromSubplotSpec(
5, 3, height_ratios=[1, 20, 20, 20, 20], hspace=0.6, wspace=0, subplot_spec=gs_master[2, 1])
(rows, cols) = geometry = gs_32.get_geometry()
title_axes = figure.add_subplot(gs_32[0, :])
inner_axes = []
projection = radar.RadarAxes(spoke_count=len(normed_data.groupby("make").mean().columns))
[inner_axes.append(figure.add_subplot(m, projection=projection)) for m in [n for n in gs_32][cols:]]
demoplot.make_autos_radar_plot(
figure, pddata=normed_data, title_axes=title_axes, inner_axes=inner_axes, legend_axes=False,
geometry=geometry)
# Layer 4 - MPG
gs_4 = mpl.gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs_master[3, :])
mpg_axes = figure.add_subplot(gs_4[0])
demoplot.make_autos_mpg_plot(figure, pddata=data, axes=mpg_axes)
# Tidy up
gs_master.tight_layout(figure)
plt.show()
"""
Explanation: Combined Plots
Here's a refresher on the plot layout we're aiming for:
```
| overall title |
| price ranges |
| combined loss/risk | |
| | radar |
---------------------- plots |
| risk | loss | |
| mpg |
```
Let's try that now with just empty graphs, to get a sense of things:
End of explanation
"""
mpl.get_configdir()
mpl.matplotlib_fname()
"""
Explanation: Configuration
Get the directory for the matplotlib config files and cache:
End of explanation
"""
len(mpl.rcParams.keys())
"""
Explanation: matplotlib's rcParams configuration dictionary holds a great many options for tweaking your use of matplotlib the way you want to:
End of explanation
"""
dict(list(mpl.rcParams.items())[:10])
mpl.rcParams['savefig.jpeg_quality'] = 72
mpl.rcParams['axes.formatter.limits'] = [-5, 5]
mpl.rcParams['axes.formatter.limits']
mpl.rcdefaults()
mpl.rcParams['axes.formatter.limits']
"""
Explanation: The first 10 configuration options in rcParams are:
End of explanation
"""
|
dennisproppe/fp_python | fp_lesson_2_partials.ipynb | apache-2.0 | from functools import partial
"""
Explanation: Partials
Partials really help when using functional concepts in Python. Applying a partial just means calling a function with a partial argument list, which returns another function with those arguments already "filled" in.
This can make classes that are used purely as attribute containers obsolete. I find this very appealing.
End of explanation
"""
def greet(greeting, name):
return "{0}! {1}".format(greeting, name)
"""
Explanation: Example from: http://kachayev.github.io/talks/uapycon2012/#/41
End of explanation
"""
greet("Hello", "Klaus")
"""
Explanation: The following is the standard way of calling this function. I want to greet Klaus with a simple "Hello". Thus, I add two arguments to the function.
End of explanation
"""
good_evening_greet = partial(greet, "Good evening")
good_evening_greet("Klaus")
good_evening_greet("Engelbert")
"""
Explanation: Now I want to build a function that always greets with the phrase "Good evening". I could solve this with a class or just always use two arguments. Or I could define a partial function.
End of explanation
"""
good_evening_greet
"""
Explanation: good_evening_greet itself is a function:
End of explanation
"""
def closure_greet(greeting):
def named_greet(name):
return "{0}! {1}".format(greeting, name)
return named_greet
evening_closure_greet = closure_greet("Good evening my dear closure")
evening_closure_greet("Klaus")
"""
Explanation: This nice little tool allows me to create, from one function, different functions that have some values already embedded in them. This approach is very similar to the closure approach from lesson one, but with one important distinction:
I don't have to think about closures at the time I am writing the greet function.
The closure-based "greet" function would have to look like this:
End of explanation
"""
greet_queen_mother = partial(greet, name="Queen Elizabeth the Queen Mother")
greet_queen_mother("Nice to see you")
"""
Explanation: Note how it wouldn't be possible to embed a pre-fixed name into this construct, because the order of nesting does not allow this. Using a partial, this is very simple:
End of explanation
"""
good_evening_queen_mother = good_evening_greet("Queen Elizabeth the Queen Mother")
good_evening_queen_mother
"""
Explanation: I could even build on good_evening_greet to wish the Queen Mother a good evening:
End of explanation
"""
|
robertoalotufo/ia898 | dev/widgets_ImageProcessing.ipynb | mit | # Stdlib imports
from io import BytesIO
# Third-party libraries
from IPython.display import Image
from ipywidgets import interact, interactive, fixed
import matplotlib as mpl
from skimage import data, filters, io, img_as_float
"""
Explanation: Image Manipulation with skimage
This example was taken from the ipywidgets tutorial.
This example builds a simple UI for performing basic image manipulation with scikit-image.
End of explanation
"""
#i = img_as_float(data.coffee())
i = data.coffee()
i.shape,i.min(),i.max()
"""
Explanation: Let's load an image from scikit-image's collection, stored in the data module. These come back as regular numpy arrays:
End of explanation
"""
def arr2img(arr):
"""Display a 2- or 3-d numpy array as an image."""
if arr.ndim == 2:
format, cmap = 'png', mpl.cm.gray
elif arr.ndim == 3:
format, cmap = 'jpg', None
else:
raise ValueError("Only 2- or 3-d arrays can be displayed as images.")
# Don't let matplotlib autoscale the color range so we can control overall luminosity
vmax = 255 if arr.dtype == 'uint8' else 1.0
with BytesIO() as buffer:
mpl.image.imsave(buffer, arr, format=format, cmap=cmap, vmin=0, vmax=vmax)
out = buffer.getvalue()
return Image(out)
arr2img(i)
"""
Explanation: Let's make a little utility function for displaying Numpy arrays with the IPython display protocol:
End of explanation
"""
def edit_image(image, sigma=0.1, R=1.0, G=1.0, B=1.0):
new_image = filters.gaussian(image, sigma=sigma, multichannel=True)
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
return arr2img(new_image)
"""
Explanation: Now, let's create a simple "image editor" function, that allows us to blur the image or change its color balance:
End of explanation
"""
edit_image(i, sigma=5, R=0)
"""
Explanation: We can call this function manually and get a new image. For example, let's do a little blurring and remove all the red from the image:
End of explanation
"""
lims = (0.0,1.0,0.01)
interactive(edit_image, image=fixed(i), sigma=(0.0,10.0,0.1), R=lims, G=lims, B=lims)
"""
Explanation: But it's a lot easier to explore what this function does by controlling each parameter interactively and getting immediate visual feedback. IPython's ipywidgets package lets us do that with a minimal amount of code:
End of explanation
"""
def choose_img(name):
# Let's store the result in the global `img` that we can then use in our image editor below
global img
img = getattr(data, name)()
return arr2img(img)
# Skip 'load' and 'lena', two functions that don't actually return images
interact(choose_img, name=sorted(set(data.__all__)-{'lena', 'load'}));
"""
Explanation: Browsing the scikit-image gallery, and editing grayscale and jpg images
The coffee cup isn't the only image that ships with scikit-image; the data module has others. Let's make a quick interactive explorer for this:
End of explanation
"""
lims = (0.0, 1.0, 0.01)
def edit_image(image, sigma, R, G, B):
new_image = filters.gaussian(image, sigma=sigma, multichannel=True)
if new_image.ndim == 3:
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
else:
new_image = G*new_image
return arr2img(new_image)
interact(edit_image, image=fixed(img), sigma=(0.0, 10.0, 0.1),
R=lims, G=lims, B=lims);
"""
Explanation: And now, let's update our editor to cope correctly with grayscale and color images, since some images in the scikit-image collection are grayscale. For these, we ignore the red (R) and blue (B) channels, and treat 'G' as 'Grayscale':
End of explanation
"""
lims = (0.0, 1.0, 0.01)
@interact
def edit_image(image: fixed(img), σ:(0.0, 10.0, 0.1)=0,
R:lims=1.0, G:lims=1.0, B:lims=1.0):
new_image = filters.gaussian(image, sigma=σ, multichannel=True)
if new_image.ndim == 3:
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
else:
new_image = G*new_image
return arr2img(new_image)
"""
Explanation: Python 3 only: Function annotations and unicode identifiers
In Python 3, we can use the new function annotation syntax to describe widgets for interact, as well as unicode names for variables such as sigma. Note how this syntax also lets us define default values for each control in a convenient (if slightly awkward looking) form: var:descriptor=default.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/building_production_ml_systems/solutions/0_export_data_from_bq_to_gcs.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
#%load_ext google.cloud.bigquery
import os
from google.cloud import bigquery
"""
Explanation: Exporting data from BigQuery to Google Cloud Storage
In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.
Uncomment the following line if you are running the notebook locally:
End of explanation
"""
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
"""
Explanation: Change the following cell as necessary:
End of explanation
"""
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
"""
Explanation: Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
Explanation: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
Explanation: Make the validation dataset be 1/10 the size of the training dataset.
End of explanation
"""
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
"""
Explanation: Export the tables as CSV files
End of explanation
"""
|
mlperf/training_results_v0.5 | v0.5.0/google/cloud_v3.8/ssd-tpuv3-8/code/ssd/model/tpu/tools/colab/fashion_mnist.ipynb | apache-2.0 | import tensorflow as tf
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
"""
Explanation: Fashion MNIST with Keras and TPUs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Let's try out using tf.keras and Cloud TPUs to train a model on the fashion MNIST dataset.
First, let's grab our dataset using tf.keras.datasets.
End of explanation
"""
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(64, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(128, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(256, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256))
model.add(tf.keras.layers.Activation('elu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10))
model.add(tf.keras.layers.Activation('softmax'))
model.summary()
"""
Explanation: Defining our model
We will use a standard conv-net for this example. We have 3 layers with drop-out and batch normalization between each layer.
End of explanation
"""
import os
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
tpu_model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-3, ),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['sparse_categorical_accuracy']
)
def train_gen(batch_size):
while True:
offset = np.random.randint(0, x_train.shape[0] - batch_size)
yield x_train[offset:offset+batch_size], y_train[offset:offset + batch_size]
tpu_model.fit_generator(
train_gen(1024),
epochs=10,
steps_per_epoch=100,
validation_data=(x_test, y_test),
)
"""
Explanation: Training on the TPU
We're ready to train! We first construct our model on the TPU, and compile it.
Here we demonstrate that we can use a generator function and fit_generator to train the model. You can also pass in x_train and y_train to tpu_model.fit() instead.
End of explanation
"""
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
cpu_model = tpu_model.sync_to_cpu()
from matplotlib import pyplot
%matplotlib inline
def plot_predictions(images, predictions):
n = images.shape[0]
nc = int(np.ceil(n / 4))
f, axes = pyplot.subplots(nc, 4)
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
label = LABEL_NAMES[np.argmax(predictions[i])]
confidence = np.max(predictions[i])
if i > n:
continue
axes[x, y].imshow(images[i])
axes[x, y].text(0.5, 0.5, label + '\n%.3f' % confidence, fontsize=14)
pyplot.gcf().set_size_inches(8, 8)
plot_predictions(np.squeeze(x_test[:16]),
cpu_model.predict(x_test[:16]))
"""
Explanation: Checking our results (inference)
Now that we're done training, let's see how well we can predict fashion categories! Keras/TPU prediction isn't working due to a small bug (fixed in TF 1.12!), but we can predict on the CPU to see how our results look.
End of explanation
"""
|
chapman-phys227-2016s/hw-3-ChinmaiRaman | HW3Notebook.ipynb | mit | p1.loan(6, 10000, 12)
"""
Explanation: Chinmai Raman
Homework 3
A.4 Solving a system of difference equations
Computes the development of a loan over time.
The function below returns the amount paid per month (the first array) and the amount left to pay (the second array) for each month of the year, for a principal of $10,000 paid back over 1 year at an annual interest rate of 6%.
End of explanation
"""
p2.graph(p2.f1, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f1, p2.f1prime, -4)
p2.bisect(p2.f1, -4, -2)
p2.secant(p2.f1, -4.5, -3.5)
"""
Explanation: A.11 Testing different methods of root finding
$f(x) = sin(x)$
End of explanation
"""
p2.graph(p2.f2, 100, -np.pi, np.pi)
p2.Newton(p2.f2, p2.f2prime, 1)
p2.bisect(p2.f2, -1, 1)
p2.secant(p2.f2, -2, -1)
"""
Explanation: $f(x) = x - sin(x)$
End of explanation
"""
p2.graph(p2.f3, 100, -np.pi / 2, np.pi / 2)
p2.Newton(p2.f3, p2.f3prime, -1)
p2.bisect(p2.f3, -1, 1)
p2.secant(p2.f3, -1, -0.5)
"""
Explanation: $f(x) = x^5 - sin x$
End of explanation
"""
p2.graph(p2.f4, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f4, p2.f4prime, -4)
p2.bisect(p2.f4, -4, -2)
p2.secant(p2.f4, -5, -4)
"""
Explanation: $f(x) = x^4 sin x$
End of explanation
"""
p2.graph(p2.f5, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f5, p2.f5prime, -3)
p2.bisect(p2.f5, -3, -1)
p2.secant(p2.f5, -4, -3)
"""
Explanation: $f(x) = x^4 - 16$
End of explanation
"""
p2.graph(p2.f6, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f6, p2.f6prime, 2)
p2.bisect(p2.f6, 0, 2)
p2.secant(p2.f6, 3, 2)
"""
Explanation: $f(x) = x^{10} - 1$
End of explanation
"""
p2.graph(p2.f7, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f7, p2.f7prime, 1)
p2.bisect(p2.f7, 0.5, 2)
p2.secant(p2.f7, 3, 2)
"""
Explanation: $f(x) = tanh(x) - x^{10}$
End of explanation
"""
# The symbol must be defined before it is used in the expression:
x = sp.Symbol('x')
h1 = -4 * x**2
h2 = sp.exp(h1)
h3 = 1 / np.sqrt(2 * np.pi) * h2
length = p3.arclength(h3, -2, 2, 10)
print length
"""
Explanation: A.13 Computing the arc length of a curve
End of explanation
"""
fig = plt.figure(1)
x = np.linspace(-2, 2, 100)
y = 1 / np.sqrt(2 * np.pi) * np.exp(-4 * x**2)
x1 = length[0]
y1 = length[1]
plt.plot(x, y, 'r-', x1, y1, 'b-')
plt.xlabel('x')
plt.ylabel('y')
plt.title('1/sqrt(2pi) * e^(-4t^2)')
plt.show(fig)
"""
Explanation: The arclength of the function f(x) from -2 to 2 is 4.18
End of explanation
"""
x = [-3 * np.pi / 4.0, -np.pi / 4.0, np.pi / 4.0, 3 * np.pi / 4]
N = [5, 5, 5, 5]
n = 0
Sn = []
while n < 4:
Sn.append(p4.sin_Taylor(x[n], N[n])[0])
n += 1
print Sn
"""
Explanation: A.14 Finding difference equations for computing sin(x)
The accuracy of a Taylor polynomial improves as x decreases (moves closer to zero).
End of explanation
"""
x = [np.pi / 4, np.pi / 4, np.pi / 4, np.pi / 4]
N = [1, 3, 5, 10]
n = 0
Sn = []
while n < 4:
Sn.append(p4.sin_Taylor(x[n], N[n])[0])
n += 1
print Sn
"""
Explanation: The accuracy of a Taylor polynomial also improves as n increases.
End of explanation
"""
|
dafrie/lstm-load-forecasting | notebooks/1_entsoe_forecast_only.ipynb | mit | # Model category name used throughout the subsequent analysis
model_cat_id = "01"
# Which features from the dataset should be loaded:
# ['all', 'actual', 'entsoe', 'weather_t', 'weather_i', 'holiday', 'weekday', 'hour', 'month']
features = ['actual', 'entsoe']
# LSTM Layer configuration
# ========================
# Stateful True or false
layer_conf = [ True, True, True ]
# Number of neurons per layer
cells = [[ 5, 10, 20, 30, 50, 75, 100, 125, 150 ], [0, 10, 20, 50], [0, 10, 15, 20]]
# Regularization per layer
dropout = [0, 0.1, 0.2]
# Size of how many samples are used for one forward/backward pass
batch_size = [8]
# In a sense this is the output neuron dimension, or how many timesteps the neuron should output. Currently not implemented, defaults to 1.
timesteps = [1]
"""
Explanation: Model Category 1: Using the ENTSO-E forecast only
The first model category will just use the current available ENTSO-E forecast and try to create a better forecast in terms of mean absolute error.
Model category specific configuration
These parameters are model category specific
End of explanation
"""
import os
import sys
import math
import itertools
import datetime as dt
import pytz
import time as t
import numpy as np
import pandas as pd
from pandas import read_csv
from pandas import datetime
from numpy import newaxis
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.stats as stats
from statsmodels.tsa import stattools
from tabulate import tabulate
import math
import keras as keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout, LSTM
from keras.callbacks import TensorBoard
from keras.utils import np_utils
from keras.models import load_model
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
from IPython.display import HTML
from IPython.display import display
%matplotlib notebook
mpl.rcParams['figure.figsize'] = (9,5)
# Import custom module functions
module_path = os.path.abspath(os.path.join('../'))
if module_path not in sys.path:
sys.path.append(module_path)
from lstm_load_forecasting import data, lstm
"""
Explanation: Module imports
End of explanation
"""
# Directory with dataset
path = os.path.join(os.path.abspath(''), '../data/fulldataset.csv')
# Splitdate for train and test data. As the TBATS and ARIMA benchmark needs 2 full cycle of all seasonality, needs to be after jan 01.
loc_tz = pytz.timezone('Europe/Zurich')
split_date = loc_tz.localize(dt.datetime(2017,2,1,0,0,0,0))
# Validation split percentage
validation_split = 0.2
# How many epochs in total
epochs = 30
# Set verbosity level. 0 for only per model, 1 for progress bar...
verbose = 0
# Dataframe containing the relevant data from training of all models
results = pd.DataFrame(columns=['model_name', 'config', 'dropout',
'train_loss', 'train_rmse', 'train_mae', 'train_mape',
'valid_loss', 'valid_rmse', 'valid_mae', 'valid_mape',
'test_rmse', 'test_mae', 'test_mape',
'epochs', 'batch_train', 'input_shape',
'total_time', 'time_step', 'splits'
])
# Early stopping parameters
early_stopping = True
min_delta = 0.006
patience = 2
"""
Explanation: Overall configuration
These parameters are used later and shouldn't have to change between the different model categories (models 1-5)
End of explanation
"""
# Generate output folders and files
res_dir = '../results/notebook_' + model_cat_id + '/'
plot_dir = '../plots/notebook_' + model_cat_id + '/'
model_dir = '../models/notebook_' + model_cat_id + '/'
os.makedirs(res_dir, exist_ok=True)
os.makedirs(model_dir, exist_ok=True)
output_table = res_dir + model_cat_id + '_results_' + t.strftime("%Y%m%d") + '.csv'
test_output_table = res_dir + model_cat_id + '_test_results' + t.strftime("%Y%m%d") + '.csv'
# Generate model combinations
models = []
models = lstm.generate_combinations(
model_name=model_cat_id + '_', layer_conf=layer_conf, cells=cells, dropout=dropout,
batch_size=batch_size, timesteps=[1])
"""
Explanation: Preparation and model generation
The necessary preliminary steps, followed by the generation of all possible model configurations based on the settings at the top of this notebook.
End of explanation
"""
# Load data and prepare for standardization
df = data.load_dataset(path=path, modules=features)
df_scaled = df.copy()
df_scaled = df_scaled.dropna()
# Get all float type columns and standardize them
floats = [key for key in dict(df_scaled.dtypes) if dict(df_scaled.dtypes)[key] in ['float64']]
scaler = StandardScaler()
scaled_columns = scaler.fit_transform(df_scaled[floats])
df_scaled[floats] = scaled_columns
# Split in train and test dataset
df_train = df_scaled.loc[(df_scaled.index < split_date )].copy()
df_test = df_scaled.loc[df_scaled.index >= split_date].copy()
# Split in features and label data
y_train = df_train['actual'].copy()
X_train = df_train.drop('actual', 1).copy()
y_test = df_test['actual'].copy()
X_test = df_test.drop('actual', 1).copy()
"""
Explanation: Loading the data:
End of explanation
"""
start_time = t.time()
for idx, m in enumerate(models):
stopper = t.time()
print('========================= Model {}/{} ========================='.format(idx+1, len(models)))
print(tabulate([['Starting with model', m['name']], ['Starting time', datetime.fromtimestamp(stopper)]],
tablefmt="jira", numalign="right", floatfmt=".3f"))
try:
# Creating the Keras Model
model = lstm.create_model(layers=m['layers'], sample_size=X_train.shape[0], batch_size=m['batch_size'],
timesteps=m['timesteps'], features=X_train.shape[1])
# Training...
history = lstm.train_model(model=model, mode='fit', y=y_train, X=X_train,
batch_size=m['batch_size'], timesteps=m['timesteps'], epochs=epochs,
rearrange=False, validation_split=validation_split, verbose=verbose,
early_stopping=early_stopping, min_delta=min_delta, patience=patience)
# Write results
min_loss = np.min(history.history['val_loss'])
min_idx = np.argmin(history.history['val_loss'])
min_epoch = min_idx + 1
if verbose > 0:
print('______________________________________________________________________')
print(tabulate([['Minimum validation loss at epoch', min_epoch, 'Time: {}'.format(t.time()-stopper)],
['Training loss & MAE', history.history['loss'][min_idx], history.history['mean_absolute_error'][min_idx] ],
['Validation loss & mae', history.history['val_loss'][min_idx], history.history['val_mean_absolute_error'][min_idx] ],
], tablefmt="jira", numalign="right", floatfmt=".3f"))
print('______________________________________________________________________')
result = [{'model_name': m['name'], 'config': m, 'train_loss': history.history['loss'][min_idx], 'train_rmse': 0,
'train_mae': history.history['mean_absolute_error'][min_idx], 'train_mape': 0,
'valid_loss': history.history['val_loss'][min_idx], 'valid_rmse': 0,
'valid_mae': history.history['val_mean_absolute_error'][min_idx],'valid_mape': 0,
'test_rmse': 0, 'test_mae': 0, 'test_mape': 0, 'epochs': '{}/{}'.format(min_epoch, epochs), 'batch_train':m['batch_size'],
'input_shape':(X_train.shape[0], timesteps, X_train.shape[1]), 'total_time':t.time()-stopper,
'time_step':0, 'splits':str(split_date), 'dropout': m['layers'][0]['dropout']
}]
results = results.append(result, ignore_index=True)
# Saving the model and weights
model.save(model_dir + m['name'] + '.h5')
# Write results to csv
results.to_csv(output_table, sep=';')
#if not os.path.isfile(output_table):
#results.to_csv(output_table, sep=';')
#else: # else it exists so append without writing the header
# results.to_csv(output_table,mode = 'a',header=False, sep=';')
K.clear_session()
import tensorflow as tf
tf.reset_default_graph()
# Shouldn't catch all errors, but for now...
except BaseException as e:
print('=============== ERROR {}/{} ============='.format(idx+1, len(models)))
print(tabulate([['Model:', m['name']], ['Config:', m]], tablefmt="jira", numalign="right", floatfmt=".3f"))
print('Error: {}'.format(e))
result = [{'model_name': m['name'], 'config': m, 'train_loss': str(e)}]
results = results.append(result, ignore_index=True)
results.to_csv(output_table, sep=';')
continue
"""
Explanation: Running through all generated models
Note: Depending on the above settings, this can take very long!
End of explanation
"""
# Number of the selected top models
selection = 5
# Not necessary if run in the same session; if run on the same day, just use output_table
results_fn = res_dir + model_cat_id + '_results_' + '20170616' + '.csv'
results_csv = pd.read_csv(results_fn, delimiter=';')
top_models = results_csv.nsmallest(selection, 'valid_mae')
"""
Explanation: Model selection based on the validation MAE
Select the top 5 models based on the Mean Absolute Error in the validation data:
http://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error
End of explanation
"""
# Init test results table
test_results = pd.DataFrame(columns=['Model name', 'Mean absolute error', 'Mean squared error'])
# Init empty predictions
predictions = {}
# Loop through models
for index, row in top_models.iterrows():
filename = model_dir + row['model_name'] + '.h5'
model = load_model(filename)
batch_size = int(row['batch_train'])
# Calculate scores
loss, mae = lstm.evaluate_model(model=model, X=X_test, y=y_test, batch_size=batch_size, timesteps=1, verbose=verbose)
# Store results
result = [{'Model name': row['model_name'],
'Mean squared error': loss, 'Mean absolute error': mae
}]
test_results = test_results.append(result, ignore_index=True)
# Generate predictions
model.reset_states()
model_predictions = lstm.get_predictions(model=model, X=X_test, batch_size=batch_size, timesteps=timesteps[0], verbose=verbose)
# Save predictions
predictions[row['model_name']] = model_predictions
K.clear_session()
import tensorflow as tf
tf.reset_default_graph()
test_results = test_results.sort_values('Mean absolute error', ascending=True)
test_results = test_results.set_index(['Model name'])
if not os.path.isfile(test_output_table):
test_results.to_csv(test_output_table, sep=';')
else: # else it exists so append without writing the header
test_results.to_csv(test_output_table, mode='a', header=False, sep=';')
print('Test dataset performance of the best {} (out of {} tested models):'.format(min(selection, len(models)), len(models)))
print(tabulate(test_results, headers='keys', tablefmt="grid", numalign="right", floatfmt=".3f"))
"""
Explanation: Evaluate top 5 models
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/59a29cf7eb53c7ab95857dfb2e3b31ba/plot_40_sensor_locations.ipynb | bsd-3-clause | import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=True, verbose=False)
"""
Explanation: Working with sensor locations
This tutorial describes how to read and plot sensor locations, and how
the physical location of sensors is handled in MNE-Python.
As usual we'll start by importing the modules we need and loading some
example data <sample-dataset>:
End of explanation
"""
montage_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'montages')
print('\nBUILT-IN MONTAGE FILES')
print('======================')
print(sorted(os.listdir(montage_dir)))
"""
Explanation: About montages and layouts
:class:Montages <mne.channels.DigMontage> contain sensor
positions in 3D (x, y, z, in meters), and can be used to set
the physical positions of sensors. By specifying the location of sensors
relative to the brain, :class:Montages <mne.channels.DigMontage> play an
important role in computing the forward solution and computing inverse
estimates.
In contrast, :class:Layouts <mne.channels.Layout> are idealized 2-D
representations of sensor positions, and are primarily used for arranging
individual sensor subplots in a topoplot, or for showing the approximate
relative arrangement of sensors as seen from above.
Working with built-in montages
The 3D coordinates of MEG sensors are included in the raw recordings from MEG
systems, and are automatically stored in the info attribute of the
:class:~mne.io.Raw file upon loading. EEG electrode locations are much more
variable because of differences in head shape. Idealized montages for many
EEG systems are included during MNE-Python installation; these files are
stored in your mne-python directory, in the
:file:mne/channels/data/montages folder:
End of explanation
"""
ten_twenty_montage = mne.channels.make_standard_montage('standard_1020')
print(ten_twenty_montage)
"""
Explanation: .. sidebar:: Computing sensor locations
If you are interested in how standard ("idealized") EEG sensor positions
are computed on a spherical head model, the `eeg_positions`_ repository
provides code and documentation to this end.
These built-in EEG montages can be loaded via
:func:mne.channels.make_standard_montage. Note that when loading via
:func:~mne.channels.make_standard_montage, provide the filename without
its file extension:
End of explanation
"""
# these will be equivalent:
# raw_1020 = raw.copy().set_montage(ten_twenty_montage)
# raw_1020 = raw.copy().set_montage('standard_1020')
"""
Explanation: Once loaded, a montage can be applied to data via one of the instance methods
such as :meth:raw.set_montage <mne.io.Raw.set_montage>. It is also possible
to skip the loading step by passing the filename string directly to the
:meth:~mne.io.Raw.set_montage method. This won't work with our sample
data, because its channel names don't match the channel names in the
standard 10-20 montage, so these commands are not run here:
End of explanation
"""
fig = ten_twenty_montage.plot(kind='3d')
fig.gca().view_init(azim=70, elev=15)
ten_twenty_montage.plot(kind='topomap', show_names=False)
"""
Explanation: :class:Montage <mne.channels.DigMontage> objects have a
:meth:~mne.channels.DigMontage.plot method for visualization of the sensor
locations in 3D; 2D projections are also possible by passing
kind='topomap':
End of explanation
"""
biosemi_montage = mne.channels.make_standard_montage('biosemi64')
biosemi_montage.plot(show_names=False)
"""
Explanation: Controlling channel projection (MNE vs EEGLAB)
Channel positions in 2D space are obtained by projecting their actual 3D
positions onto a reference sphere. Because the 'standard_1020' montage
contains realistic (not spherical) channel positions, we will use a different
montage to demonstrate how to control the way channels are projected into 2D space.
End of explanation
"""
biosemi_montage.plot(show_names=False, sphere=0.07)
"""
Explanation: By default, a sphere with its origin at (0, 0, 0) in x, y, z coordinates and a
radius of 0.095 meters (9.5 cm) is used. You can use a different sphere
radius by passing a single value to the sphere argument of any function that
plots channels in 2D (such as :meth:~mne.channels.DigMontage.plot, which we use
here, but also, for example, :func:mne.viz.plot_topomap):
End of explanation
"""
biosemi_montage.plot(show_names=False, sphere=(0.03, 0.02, 0.01, 0.075))
"""
Explanation: To control not only the radius but also the sphere origin, pass an
(x, y, z, radius) tuple to the sphere argument:
End of explanation
"""
biosemi_montage.plot()
"""
Explanation: In MNE-Python the head center, and therefore the sphere center, is calculated
using fiducial points. Because of this, the head circle represents the head
circumference at the nasion and ear level, and not where it is commonly
measured in the 10-20 EEG system: above the nasion, at the T4/T8, T3/T7, Oz, Fz level.
Notice below that by default T7 and Oz channels are placed within the head
circle, not on the head outline:
End of explanation
"""
biosemi_montage.plot(sphere=(0, 0, 0.035, 0.094))
"""
Explanation: If you have prior EEGLAB experience, you may prefer its convention of
representing the 10-20 head circumference with the head circle. To get an
EEGLAB-like channel layout, move the sphere origin a few centimeters
up along the z dimension:
End of explanation
"""
fig = plt.figure()
ax2d = fig.add_subplot(121)
ax3d = fig.add_subplot(122, projection='3d')
raw.plot_sensors(ch_type='eeg', axes=ax2d)
raw.plot_sensors(ch_type='eeg', axes=ax3d, kind='3d')
ax3d.view_init(azim=70, elev=15)
"""
Explanation: Instead of approximating the EEGLAB-esque sphere location as above, you can
calculate the sphere origin from the positions of the Oz, Fpz, T3/T7 or T4/T8
channels. This is easier once the montage has been applied to the data and
channel positions are in the head space - see
this example <ex-topomap-eeglab-style>.
Reading sensor digitization files
In the sample data, setting the digitized EEG montage was done prior to
saving the :class:~mne.io.Raw object to disk, so the sensor positions are
already incorporated into the info attribute of the :class:~mne.io.Raw
object (see the documentation of the reading functions and
:meth:~mne.io.Raw.set_montage for details on how that works). Because of
that, we can plot sensor locations directly from the :class:~mne.io.Raw
object using the :meth:~mne.io.Raw.plot_sensors method, which provides
similar functionality to
:meth:montage.plot() <mne.channels.DigMontage.plot>.
:meth:~mne.io.Raw.plot_sensors also allows channel selection by type, can
color-code channels in various ways (by default, channels listed in
raw.info['bads'] will be plotted in red), and allows drawing into an
existing matplotlib axes object (so the channel positions can easily be
made as a subplot in a multi-panel figure):
End of explanation
"""
fig = mne.viz.plot_alignment(raw.info, trans=None, dig=False, eeg=False,
surfaces=[], meg=['helmet', 'sensors'],
coord_frame='meg')
mne.viz.set_3d_view(fig, azimuth=50, elevation=90, distance=0.5)
"""
Explanation: It's probably evident from the 2D topomap above that there is some
irregularity in the EEG sensor positions in the sample dataset
<sample-dataset> — this is because the sensor positions in that dataset are
digitizations of the sensor positions on an actual subject's head, rather
than idealized sensor positions based on a spherical head model. Depending on
what system was used to digitize the electrode positions (e.g., a Polhemus
Fastrak digitizer), you must use different montage reading functions (see
dig-formats). The resulting :class:montage <mne.channels.DigMontage>
can then be added to :class:~mne.io.Raw objects by passing it to the
:meth:~mne.io.Raw.set_montage method (just as we did above with the name of
the idealized montage 'standard_1020'). Once loaded, locations can be
plotted with :meth:~mne.channels.DigMontage.plot and saved with
:meth:~mne.channels.DigMontage.save, like when working with a standard
montage.
<div class="alert alert-info"><h4>Note</h4><p>When setting a montage with :meth:`~mne.io.Raw.set_montage`
the measurement info is updated in two places (the ``chs``
and ``dig`` entries are updated). See `tut-info-class`.
``dig`` may contain HPI, fiducial, or head shape points in
addition to electrode locations.</p></div>
Rendering sensor positions with mayavi
It is also possible to render an image of a MEG sensor helmet in 3D, using
mayavi instead of matplotlib, by calling the :func:mne.viz.plot_alignment
function:
End of explanation
"""
layout_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'layouts')
print('\nBUILT-IN LAYOUT FILES')
print('=====================')
print(sorted(os.listdir(layout_dir)))
"""
Explanation: :func:~mne.viz.plot_alignment requires an :class:~mne.Info object, and
can also render MRI surfaces of the scalp, skull, and brain (by passing
keywords like 'head', 'outer_skull', or 'brain' to the
surfaces parameter) making it useful for assessing coordinate frame
transformations <plot_source_alignment>. For examples of various uses of
:func:~mne.viz.plot_alignment, see plot_montage,
:doc:../../auto_examples/visualization/plot_eeg_on_scalp, and
:doc:../../auto_examples/visualization/plot_meg_sensors.
Working with layout files
As with montages, many layout files are included during MNE-Python
installation, and are stored in the :file:mne/channels/data/layouts folder:
End of explanation
"""
biosemi_layout = mne.channels.read_layout('biosemi')
biosemi_layout.plot() # same result as: mne.viz.plot_layout(biosemi_layout)
"""
Explanation: You may have noticed that the file formats and filename extensions of the
built-in layout and montage files vary considerably. This reflects different
manufacturers' conventions; to make loading easier the montage and layout
loading functions in MNE-Python take the filename without its extension so
you don't have to keep track of which file format is used by which
manufacturer.
To load a layout file, use the :func:mne.channels.read_layout function, and
provide the filename without its file extension. You can then visualize the
layout using its :meth:~mne.channels.Layout.plot method, or (equivalently)
by passing it to :func:mne.viz.plot_layout:
End of explanation
"""
midline = np.where([name.endswith('z') for name in biosemi_layout.names])[0]
biosemi_layout.plot(picks=midline)
"""
Explanation: Similar to the picks argument for selecting channels from
:class:~mne.io.Raw objects, the :meth:~mne.channels.Layout.plot method of
:class:~mne.channels.Layout objects also has a picks argument. However,
because layouts only contain information about sensor name and location (not
sensor type), the :meth:~mne.channels.Layout.plot method only allows
picking channels by index (not by name or by type). Here we find the indices
we want using :func:numpy.where; selection by name or type is possible via
:func:mne.pick_channels or :func:mne.pick_types.
End of explanation
"""
layout_from_raw = mne.channels.make_eeg_layout(raw.info)
# same result as: mne.channels.find_layout(raw.info, ch_type='eeg')
layout_from_raw.plot()
"""
Explanation: If you're working with a :class:~mne.io.Raw object that already has sensor
positions incorporated, you can create a :class:~mne.channels.Layout object
with either the :func:mne.channels.make_eeg_layout function or
(equivalently) the :func:mne.channels.find_layout function.
End of explanation
"""
|
wuafeing/Python3-Tutorial | 01 data structures and algorithms/01.07 keep dict in order.ipynb | gpl-3.0 | from collections import OrderedDict
d = OrderedDict()
d["foo"] = 1
d["bar"] = 2
d["spam"] = 3
d["grok"] = 4
# Outputs "foo 1", "bar 2", "spam 3", "grok 4"
for key in d:
print(key, d[key])
"""
Explanation: Previous
1.7 Keeping a Dictionary in Order
Problem
You want to create a dictionary, and you also want to control the order of its items when iterating over or serializing the dictionary.
Solution
To control the order of items in a dictionary, you can use the OrderedDict class from the collections module. It preserves the original insertion order of elements during iteration, for example:
"""
import json
json.dumps(d)
"""
Explanation: An OrderedDict is particularly useful when you want to build a mapping that will later be serialized or encoded into another format. For example, if you want precise control over the order of fields in a JSON encoding, you can first build the data in an OrderedDict:
End of explanation
"""
|
Hasil-Sharma/Neural-Networks-CS231n | assignment1/features.ipynb | gpl-3.0 | import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
End of explanation
"""
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
"""
Explanation: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
End of explanation
"""
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
"""
Explanation: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
End of explanation
"""
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-7, 2e-7, 3e-7, 5e-5, 8e-7]
regularization_strengths = [1e4, 2e4, 3e4, 4e4, 5e4, 6e4, 7e4, 8e4, 7e5]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate = lr, reg = rs, num_iters = 2000)
train_accuracy = np.mean(y_train == svm.predict(X_train_feats))
val_accuracy = np.mean(y_val == svm.predict(X_val_feats))
results[(lr, rs)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
    lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
"""
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
|Number of Bins|Validation Accuracy|Learning Rate|Regularization Strength|Test Accuracy|
|------|------|--|
|10|0.426000|8.000000e-07|5.000000e+04||
|50|0.440000|8.000000e-07|5.000000e+04|0.428|
|50|0.441000|3.000000e-07|1.000000e+05|0.428|
|100|0.440000|2.000000e-07|8.000000e+04|0.414|
|150|0.428000|8.000000e-07|2.000000e+04|0.388|
lr 3.000000e-07 reg 1.000000e+05 train accuracy: 0.426041 val accuracy: 0.441000
End of explanation
"""
print(X_train_feats.shape)
"""
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
End of explanation
"""
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
best_val_acc = 0.0
best_hidden_size = None
best_learning_rate = None
best_regularization_strength = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = np.logspace(-1, 2, 10)
regularization_strengths = np.logspace(-4, -1, 10)
print('| Learning Rate | Regularization Rate | Validation Accuracy | Test Accuracy |')
print('| --- | --- | --- | --- |')
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=5000, batch_size=500,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=regularization_strength, verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
test_acc = (net.predict(X_test_feats) == y_test).mean()
if best_val_acc < val_acc:
best_val_acc = val_acc
best_net = net
best_learning_rate = learning_rate
best_regularization_strength = regularization_strength
print('|', learning_rate, '|', regularization_strength, '|', val_acc, '|', test_acc, '|')
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
"""
Explanation: | Learning Rate| Regularization Rate | Validation Accuracy | Test Accuracy |
| --- | --- | --- | --- |
| 0.1 | 0.0001 | 0.544 | 0.534 |
| 0.1 | 0.000215443469003 | 0.544 | 0.538 |
| 0.1 | 0.000464158883361 | 0.542 | 0.534 |
| 0.1 | 0.001 | 0.537 | 0.535 |
| 0.1 | 0.00215443469003 | 0.536 | 0.533 |
| 0.1 | 0.00464158883361 | 0.529 | 0.533 |
| 0.1 | 0.01 | 0.524 | 0.522 |
| 0.1 | 0.0215443469003 | 0.508 | 0.508 |
| 0.1 | 0.0464158883361 | 0.51 | 0.489 |
| 0.1 | 0.1 | 0.434 | 0.446 |
| 0.215443469003 | 0.0001 | 0.594 | 0.58 |
| 0.215443469003 | 0.000215443469003 | 0.604 | 0.578 |
| 0.215443469003 | 0.000464158883361 | 0.601 | 0.58 |
| 0.215443469003 | 0.001 | 0.593 | 0.586 |
| 0.215443469003 | 0.00215443469003 | 0.597 | 0.569 |
| 0.215443469003 | 0.00464158883361 | 0.579 | 0.56 |
| 0.215443469003 | 0.01 | 0.554 | 0.539 |
| 0.215443469003 | 0.0215443469003 | 0.515 | 0.517 |
| 0.215443469003 | 0.0464158883361 | 0.508 | 0.491 |
| 0.215443469003 | 0.1 | 0.441 | 0.446 |
| 0.464158883361 | 0.0001 | 0.595 | 0.599 |
| 0.464158883361 | 0.000215443469003 | 0.601 | 0.597 |
| 0.464158883361 | 0.000464158883361 | 0.594 | 0.6 |
| 0.464158883361 | 0.001 | 0.616 | 0.596 |
| 0.464158883361 | 0.00215443469003 | 0.609 | 0.601 |
| 0.464158883361 | 0.00464158883361 | 0.603 | 0.575 |
| 0.464158883361 | 0.01 | 0.573 | 0.551 |
| 0.464158883361 | 0.0215443469003 | 0.525 | 0.517 |
| 0.464158883361 | 0.0464158883361 | 0.502 | 0.503 |
| 0.464158883361 | 0.1 | 0.44 | 0.447 |
| 1.0 | 0.0001 | 0.568 | 0.566 |
| 1.0 | 0.000215443469003 | 0.588 | 0.589 |
| 1.0 | 0.000464158883361 | 0.591 | 0.571 |
| 1.0 | 0.001 | 0.61 | 0.587 |
| 1.0 | 0.00215443469003 | 0.614 | 0.603 |
| 1.0 | 0.00464158883361 | 0.62 | 0.587 |
| 1.0 | 0.01 | 0.574 | 0.557 |
| 1.0 | 0.0215443469003 | 0.521 | 0.517 |
| 1.0 | 0.0464158883361 | 0.498 | 0.492 |
| 1.0 | 0.1 | 0.433 | 0.441 |
| 2.15443469003 | 0.0001 | 0.547 | 0.559 |
| 2.15443469003 | 0.000215443469003 | 0.571 | 0.564 |
| 2.15443469003 | 0.000464158883361 | 0.563 | 0.578 |
| 2.15443469003 | 0.001 | 0.6 | 0.592 |
| 2.15443469003 | 0.00215443469003 | 0.615 | 0.613 |
| 2.15443469003 | 0.00464158883361 | 0.611 | 0.6 |
| 2.15443469003 | 0.01 | 0.578 | 0.558 |
| 2.15443469003 | 0.0215443469003 | 0.525 | 0.511 |
| 2.15443469003 | 0.0464158883361 | 0.491 | 0.485 |
| 2.15443469003 | 0.1 | 0.449 | 0.454 |
| 4.64158883361 | 0.0001 | 0.087 | 0.103 |
| 4.64158883361 | 0.000215443469003 | 0.087 | 0.103 |
| 4.64158883361 | 0.000464158883361 | 0.087 | 0.103 |
| 4.64158883361 | 0.001 | 0.087 | 0.103 |
| 4.64158883361 | 0.00215443469003 | 0.087 | 0.103 |
| 4.64158883361 | 0.00464158883361 | 0.087 | 0.103 |
| 4.64158883361 | 0.01 | 0.087 | 0.103 |
| 4.64158883361 | 0.0215443469003 | 0.087 | 0.103 |
| 4.64158883361 | 0.0464158883361 | 0.087 | 0.103 |
| 4.64158883361 | 0.1 | 0.087 | 0.103 |
| 10.0 | 0.0001 | 0.087 | 0.103 |
| 10.0 | 0.000215443469003 | 0.087 | 0.103 |
| 10.0 | 0.000464158883361 | 0.087 | 0.103 |
| 10.0 | 0.001 | 0.087 | 0.103 |
| 10.0 | 0.00215443469003 | 0.087 | 0.103 |
| 10.0 | 0.00464158883361 | 0.087 | 0.103 |
| 10.0 | 0.01 | 0.087 | 0.103 |
| 10.0 | 0.0215443469003 | 0.087 | 0.103 |
| 10.0 | 0.0464158883361 | 0.087 | 0.103 |
| 10.0 | 0.1 | 0.087 | 0.103 |
| 21.5443469003 | 0.0001 | 0.087 | 0.103 |
| 21.5443469003 | 0.000215443469003 | 0.087 | 0.103 |
| 21.5443469003 | 0.000464158883361 | 0.087 | 0.103 |
| 21.5443469003 | 0.001 | 0.087 | 0.103 |
| 21.5443469003 | 0.00215443469003 | 0.087 | 0.103 |
| 21.5443469003 | 0.00464158883361 | 0.087 | 0.103 |
| 21.5443469003 | 0.01 | 0.087 | 0.103 |
| 21.5443469003 | 0.0215443469003 | 0.087 | 0.103 |
| 21.5443469003 | 0.0464158883361 | 0.087 | 0.103 |
| 21.5443469003 | 0.1 | 0.087 | 0.103 |
| 46.4158883361 | 0.0001 | 0.087 | 0.103 |
| 46.4158883361 | 0.000215443469003 | 0.087 | 0.103 |
| 46.4158883361 | 0.000464158883361 | 0.087 | 0.103 |
| 46.4158883361 | 0.001 | 0.087 | 0.103 |
| 46.4158883361 | 0.00215443469003 | 0.087 | 0.103 |
| 46.4158883361 | 0.00464158883361 | 0.087 | 0.103 |
| 46.4158883361 | 0.01 | 0.087 | 0.103 |
| 46.4158883361 | 0.0215443469003 | 0.087 | 0.103 |
| 46.4158883361 | 0.0464158883361 | 0.087 | 0.103 |
| 46.4158883361 | 0.1 | 0.087 | 0.103 |
| 100.0 | 0.0001 | 0.087 | 0.103 |
| 100.0 | 0.000215443469003 | 0.087 | 0.103 |
| 100.0 | 0.000464158883361 | 0.087 | 0.103 |
| 100.0 | 0.001 | 0.087 | 0.103 |
| 100.0 | 0.00215443469003 | 0.087 | 0.103 |
| 100.0 | 0.00464158883361 | 0.087 | 0.103 |
| 100.0 | 0.01 | 0.087 | 0.103 |
| 100.0 | 0.0215443469003 | 0.087 | 0.103 |
| 100.0 | 0.0464158883361 | 0.087 | 0.103 |
| 100.0 | 0.1 | 0.087 | 0.103 |
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/59a29cf7eb53c7ab95857dfb2e3b31ba/plot_40_sensor_locations.ipynb | bsd-3-clause | import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=True, verbose=False)
"""
Explanation: Working with sensor locations
This tutorial describes how to read and plot sensor locations, and how
the physical location of sensors is handled in MNE-Python.
As usual we'll start by importing the modules we need and loading some
example data <sample-dataset>:
End of explanation
"""
montage_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'montages')
print('\nBUILT-IN MONTAGE FILES')
print('======================')
print(sorted(os.listdir(montage_dir)))
"""
Explanation: About montages and layouts
:class:Montages <mne.channels.DigMontage> contain sensor
positions in 3D (x, y, z, in meters), and can be used to set
the physical positions of sensors. By specifying the location of sensors
relative to the brain, :class:Montages <mne.channels.DigMontage> play an
important role in computing the forward solution and computing inverse
estimates.
In contrast, :class:Layouts <mne.channels.Layout> are idealized 2-D
representations of sensor positions, and are primarily used for arranging
individual sensor subplots in a topoplot, or for showing the approximate
relative arrangement of sensors as seen from above.
Working with built-in montages
The 3D coordinates of MEG sensors are included in the raw recordings from MEG
systems, and are automatically stored in the info attribute of the
:class:~mne.io.Raw file upon loading. EEG electrode locations are much more
variable because of differences in head shape. Idealized montages for many
EEG systems are included during MNE-Python installation; these files are
stored in your mne-python directory, in the
:file:mne/channels/data/montages folder:
End of explanation
"""
ten_twenty_montage = mne.channels.make_standard_montage('standard_1020')
print(ten_twenty_montage)
"""
Explanation: .. sidebar:: Computing sensor locations
If you are interested in how standard ("idealized") EEG sensor positions
are computed on a spherical head model, the `eeg_positions`_ repository
provides code and documentation to this end.
These built-in EEG montages can be loaded via
:func:mne.channels.make_standard_montage. Note that when loading via
:func:~mne.channels.make_standard_montage, provide the filename without
its file extension:
End of explanation
"""
# these will be equivalent:
# raw_1020 = raw.copy().set_montage(ten_twenty_montage)
# raw_1020 = raw.copy().set_montage('standard_1020')
"""
Explanation: Once loaded, a montage can be applied to data via one of the instance methods
such as :meth:raw.set_montage <mne.io.Raw.set_montage>. It is also possible
to skip the loading step by passing the filename string directly to the
:meth:~mne.io.Raw.set_montage method. This won't work with our sample
data, because its channel names don't match the channel names in the
standard 10-20 montage, so these commands are not run here:
End of explanation
"""
fig = ten_twenty_montage.plot(kind='3d')
fig.gca().view_init(azim=70, elev=15)
ten_twenty_montage.plot(kind='topomap', show_names=False)
"""
Explanation: :class:Montage <mne.channels.DigMontage> objects have a
:meth:~mne.channels.DigMontage.plot method for visualization of the sensor
locations in 3D; 2D projections are also possible by passing
kind='topomap':
End of explanation
"""
fig = plt.figure()
ax2d = fig.add_subplot(121)
ax3d = fig.add_subplot(122, projection='3d')
raw.plot_sensors(ch_type='eeg', axes=ax2d)
raw.plot_sensors(ch_type='eeg', axes=ax3d, kind='3d')
ax3d.view_init(azim=70, elev=15)
"""
Explanation: Reading sensor digitization files
In the sample data, setting the digitized EEG montage was done prior to
saving the :class:~mne.io.Raw object to disk, so the sensor positions are
already incorporated into the info attribute of the :class:~mne.io.Raw
object (see the documentation of the reading functions and
:meth:~mne.io.Raw.set_montage for details on how that works). Because of
that, we can plot sensor locations directly from the :class:~mne.io.Raw
object using the :meth:~mne.io.Raw.plot_sensors method, which provides
similar functionality to
:meth:montage.plot() <mne.channels.DigMontage.plot>.
:meth:~mne.io.Raw.plot_sensors also allows channel selection by type, can
color-code channels in various ways (by default, channels listed in
raw.info['bads'] will be plotted in red), and allows drawing into an
existing matplotlib axes object (so the channel positions can easily be
made as a subplot in a multi-panel figure):
End of explanation
"""
fig = mne.viz.plot_alignment(raw.info, trans=None, dig=False, eeg=False,
surfaces=[], meg=['helmet', 'sensors'],
coord_frame='meg')
mne.viz.set_3d_view(fig, azimuth=50, elevation=90, distance=0.5)
"""
Explanation: It's probably evident from the 2D topomap above that there is some
irregularity in the EEG sensor positions in the sample dataset
<sample-dataset> — this is because the sensor positions in that dataset are
digitizations of the sensor positions on an actual subject's head, rather
than idealized sensor positions based on a spherical head model. Depending on
what system was used to digitize the electrode positions (e.g., a Polhemus
Fastrak digitizer), you must use different montage reading functions (see
dig-formats). The resulting :class:montage <mne.channels.DigMontage>
can then be added to :class:~mne.io.Raw objects by passing it to the
:meth:~mne.io.Raw.set_montage method (just as we did above with the name of
the idealized montage 'standard_1020'). Once loaded, locations can be
plotted with :meth:~mne.channels.DigMontage.plot and saved with
:meth:~mne.channels.DigMontage.save, like when working with a standard
montage.
<div class="alert alert-info"><h4>Note</h4><p>When setting a montage with :meth:`~mne.io.Raw.set_montage`
the measurement info is updated in two places (the ``chs``
and ``dig`` entries are updated). See `tut-info-class`.
``dig`` may contain HPI, fiducial, or head shape points in
addition to electrode locations.</p></div>
Rendering sensor positions with mayavi
It is also possible to render an image of a MEG sensor helmet in 3D, using
mayavi instead of matplotlib, by calling the :func:mne.viz.plot_alignment
function:
End of explanation
"""
layout_dir = os.path.join(os.path.dirname(mne.__file__),
'channels', 'data', 'layouts')
print('\nBUILT-IN LAYOUT FILES')
print('=====================')
print(sorted(os.listdir(layout_dir)))
"""
Explanation: :func:~mne.viz.plot_alignment requires an :class:~mne.Info object, and
can also render MRI surfaces of the scalp, skull, and brain (by passing
keywords like 'head', 'outer_skull', or 'brain' to the
surfaces parameter) making it useful for assessing coordinate frame
transformations <plot_source_alignment>. For examples of various uses of
:func:~mne.viz.plot_alignment, see
:doc:../../auto_examples/visualization/plot_montage,
:doc:../../auto_examples/visualization/plot_eeg_on_scalp, and
:doc:../../auto_examples/visualization/plot_meg_sensors.
Working with layout files
As with montages, many layout files are included during MNE-Python
installation, and are stored in the :file:mne/channels/data/layouts folder:
End of explanation
"""
biosemi_layout = mne.channels.read_layout('biosemi')
biosemi_layout.plot() # same result as: mne.viz.plot_layout(biosemi_layout)
"""
Explanation: You may have noticed that the file formats and filename extensions of the
built-in layout and montage files vary considerably. This reflects different
manufacturers' conventions; to make loading easier, the montage and layout
loading functions in MNE-Python take the filename without its extension so
you don't have to keep track of which file format is used by which
manufacturer.
To load a layout file, use the :func:mne.channels.read_layout function, and
provide the filename without its file extension. You can then visualize the
layout using its :meth:~mne.channels.Layout.plot method, or (equivalently)
by passing it to :func:mne.viz.plot_layout:
End of explanation
"""
midline = np.where([name.endswith('z') for name in biosemi_layout.names])[0]
biosemi_layout.plot(picks=midline)
"""
Explanation: Similar to the picks argument for selecting channels from
:class:~mne.io.Raw objects, the :meth:~mne.channels.Layout.plot method of
:class:~mne.channels.Layout objects also has a picks argument. However,
because layouts only contain information about sensor name and location (not
sensor type), the :meth:~mne.channels.Layout.plot method only allows
picking channels by index (not by name or by type). Here we find the indices
we want using :func:numpy.where; selection by name or type is possible via
:func:mne.pick_channels or :func:mne.pick_types.
End of explanation
"""
layout_from_raw = mne.channels.make_eeg_layout(raw.info)
# same result as: mne.channels.find_layout(raw.info, ch_type='eeg')
layout_from_raw.plot()
"""
Explanation: If you're working with a :class:~mne.io.Raw object that already has sensor
positions incorporated, you can create a :class:~mne.channels.Layout object
with either the :func:mne.channels.make_eeg_layout function or
(equivalently) the :func:mne.channels.find_layout function.
End of explanation
"""
|
ChileanVirtualObservatory/DISPLAY | src/experiments/DISPLAY - 2011.0.00419.S O2-28_2_26-28_1_27.ipynb | gpl-3.0 | file_path = '../data/2011.0.00419.S/sg_ouss_id/group_ouss_id/member_ouss_2013-03-06_id/product/IRAS16547-4247_Jet_SO2-28_2_26-28_1_27.clean.fits'
noise_pixel = (15, 4)
train_pixels = [(133, 135),(134, 135),(133, 136),(134, 136)]
img = fits.open(file_path)
meta = img[0].data
hdr = img[0].header
# V axis
naxisv = hdr['NAXIS3']
onevpix = hdr['CDELT3']*0.000001
v0 = hdr['CRVAL3']*0.000001
v0pix = int(hdr['CRPIX3'])
vaxis = onevpix * (np.arange(naxisv)+1-v0pix) + v0
values = meta[0, :, train_pixels[0][0], train_pixels[0][1]] - np.mean(meta[0, :, train_pixels[0][0], train_pixels[0][1]])
values = values/np.max(values)
plt.plot(vaxis, values)
plt.xlim(np.min(vaxis), np.max(vaxis))
plt.ylim(-1, 1)
gca().xaxis.set_major_formatter(FormatStrFormatter('%d'))
noise = meta[0, :, noise_pixel[0], noise_pixel[1]] - np.mean(meta[0, :, noise_pixel[0], noise_pixel[1]])
noise = noise/np.max(noise)
plt.plot(vaxis, noise)
plt.ylim(-1, 1)
plt.xlim(np.min(vaxis), np.max(vaxis))
gca().xaxis.set_major_formatter(FormatStrFormatter('%d'))
"""
Explanation: ALMA Cycle 0
https://www.iram.fr/IRAMFR/ARC/documents/cycle0/ALMA_EarlyScience_Cycle0_HighestPriority.pdf
Project 2011.0.00419.S
End of explanation
"""
cube_params = {
'freq' : vaxis[naxisv/2],
'alpha' : 0,
'delta' : 0,
'spe_bw' : naxisv*onevpix,
'spe_res' : onevpix*v0pix,
's_f' : 4,
's_a' : 0}
dictionary = gen_all_words(cube_params, True)
"""
Explanation: Creation of Dictionary
We create the words necessary to fit a sparse coding model to the observed spectra in the previously created cube.
It returns a DataFrame with a vector for each theoretical line for each isotope in molist
End of explanation
"""
prediction = pd.DataFrame([])
for train_pixel in train_pixels:
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params,
train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
for i in range(0, len(alpha)):
iso_col = dictionary_recal.columns[i]
if(not prediction.columns.isin([iso_col]).any()):
prediction[iso_col] = alpha[i]
else:
prediction[iso_col] = prediction[iso_col]*alpha[i]
for p in prediction.columns:
if(prediction[p][0] != 0):
print(prediction[p])
pylab.rcParams['figure.figsize'] = (15, 15)
# Step 1: Read Cube
ax = plt.subplot(6, 1, 1)
ax.set_title('i) Raw Spectra Data')
data = get_data_from_fits(file_path)
y = data[0, :, train_pixel[0], train_pixel[1]]
plt.xticks([])
plt.plot(vaxis, y)
lines = get_lines_from_fits(file_path)
for line in lines:
# Shows lines really present
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g')
# 2. Normalize, filter dada
ax = plt.subplot(6, 1, 2)
ax.set_title('ii) Normalized/Filtered Data')
plt.ylim(ymin =0,ymax = 1.15)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.xticks([])
plt.plot(vaxis, y)
# 3. Possible Words
ax = plt.subplot(6, 1, 3)
ax.set_title('iii) Theoretical Dictionary')
plt.ylim(ymin =0,ymax = 1.15)
plt.xticks([])
plt.plot(vaxis, dictionary)
# 4. Detect Lines
ax = plt.subplot(6, 1, 4)
ax.set_title('iv) Detection of Candidate Lines')
plt.ylim(ymin =0,ymax = 1.15)
plt.plot(vaxis, y)
plt.xticks([])
plt.ylabel("Temperature")
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
plt.axvline(x=vaxis[idx], ymin=0, ymax= 1, color='r')
# 6. Recalibrate Dictionary
ax = plt.subplot(6, 1, 5)
ax.set_title('v) Recalibration of Dictionary')
plt.ylim(ymin =0,ymax = 1.15)
plt.plot(vaxis, dictionary_recal_fa)
plt.xticks([])
# 6. Recover Signal
ax = plt.subplot(6, 1, 6)
ax.set_title('vi) Reconstructed Signal')
plt.ylim(ymin =0,ymax = 1.15)
plt.plot(vaxis, total)
gca().xaxis.set_major_formatter(FormatStrFormatter('%d'))
# NOTE: 'results' was undefined in the original notebook; the lasso
# coefficients 'alpha' (transposed) appear to be what was intended here.
for i in range(0, len((alpha.T[0] > 0))):
    if((alpha.T[0] > 0)[i]):
        print(dictionary_recal.columns[i])
        print(i)
for i in range(0, len(dictionary.index)):
print(calculate_probability(alpha, dictionary.index[i], dictionary_recal))
print(dictionary.index[i])
"""
Explanation: Recalibration of Dictionary
End of explanation
"""
|
sarahmid/programming-bootcamp-v2 | lab5_exercises.ipynb | mit | # run this cell first!
fruits = {"apple":"red", "banana":"yellow", "grape":"purple"}
print fruits["banana"]
"""
Explanation: Programming Bootcamp 2016
Lesson 5 Exercises
Earning points (optional)
Enter your name below.
Email your .ipynb file to me (sarahmid@mail.med.upenn.edu) before 9:00 am on 9/23.
You do not need to complete all the problems to get points.
I will give partial credit for effort when possible.
At the end of the course, everyone who gets at least 90% of the total points will get a prize (bootcamp mug!).
Name:
1. Guess the output: dictionary practice (1pt)
For the following blocks of code, first try to guess what the output will be, and then run the code yourself. Points will be given for filling in the guesses; guessing wrong won't be penalized.
End of explanation
"""
query = "apple"
print fruits[query]
"""
Explanation: Your guess:
End of explanation
"""
print fruits[0]
"""
Explanation: Your guess:
End of explanation
"""
print fruits.keys()
"""
Explanation: Your guess:
End of explanation
"""
print fruits.values()
"""
Explanation: Your guess:
End of explanation
"""
for key in fruits:
print fruits[key]
"""
Explanation: Your guess:
End of explanation
"""
del fruits["banana"]
print fruits
"""
Explanation: Your guess:
End of explanation
"""
print fruits["pear"]
"""
Explanation: Your guess:
End of explanation
"""
fruits["pear"] = "green"
print fruits["pear"]
"""
Explanation: Your guess:
End of explanation
"""
fruits["apple"] = fruits["apple"] + " or green"
print fruits["apple"]
"""
Explanation: Your guess:
End of explanation
"""
# hint code:
tallyDict = {}
seq = "ATGCTGATCGATATA"
length = len(seq)
if length not in tallyDict:
tallyDict[length] = 1 #initialize to 1 if this is the first occurrence of the length...
else:
tallyDict[length] = tallyDict[length] + 1 #...otherwise just increment the count.
"""
Explanation: Your guess:
2. On your own: using dictionaries (6pts)
Using the info in the table below, write code to accomplish the following tasks.
| Name | Favorite Food |
|:---------:|:-------------:|
| Wilfred | Steak |
| Manfred | French fries |
| Wadsworth | Spaghetti |
| Jeeves | Ice cream |
(A) Create a dictionary based on the data above, where each person's name is a key, and their favorite foods are the values.
(B) Using a for loop, go through the dictionary you created above and print each name and food combination in the format:
<NAME>'s favorite food is <FOOD>
(C) (1) Change the dictionary so that Wilfred's favorite food is Pizza. (2) Add a new entry for Mitsworth, whose favorite food is Tuna.
Do not recreate the whole dictionary while doing these things. Edit the dictionary you created in (A) using the syntax described in the lecture.
(D) Prompt the user to input a name. Check if the name they entered is a valid key in the dictionary using an if statement. If the name is in the dictionary, print out the corresponding favorite food. If not, print a message saying "That name is not in our database".
(E) Print just the names in the dictionary in alphabetical order. Use the sorting example from the slides.
(F) Print just the names in sorted order based on their favorite food. Use the value-sorting example from the slides.
3. File writing (3pts)
(A) Write code that prints "Hello, world" to a file called hello.txt
(B) Write code that prints the following text to a file called meow.txt. It must be formatted exactly as it here (you will need to use \n and \t):
```
Dear Mitsworth,
Meow, meow meow meow.
Sincerely,
A friend
```
(C) Write code that reads in the gene IDs from genes.txt and prints the unique gene IDs to a new file called genes_unique.txt. (You can re-use your code or the answer sheet from lab4 for getting the unique IDs.)
4. The "many counters" problem (4pts)
(A) Write code that reads a file of sequences and tallies how many sequences there are of each length. Use sequences3.txt as input.
Hint: you can use a dictionary to keep track of all the tallies. For example:
End of explanation
"""
error = False
if ">varlen2_uc001pmn.3_3476" in seqDict:
print "Remove > chars from headers!"
error = True
elif "varlen2_uc001pmn.3_3476" not in seqDict:
print "Something's wrong with your dictionary: missing keys"
error = True
if "varlen2_uc021qfk.1>2_1472" not in seqDict:
print "Only remove the > chars from the beginning of the header!"
error = True
if len(seqDict["varlen2_uc009wph.3_423"]) > 85:
if "\n" in seqDict["varlen2_uc009wph.3_423"]:
print "Remove newline chars from sequences"
error = True
else:
print "Length of sequences longer than expected for some reason"
error = True
elif len(seqDict["varlen2_uc009wph.3_423"]) < 85:
print "Length of sequences shorter than expected for some reason"
error = True
if error == False:
print "Congrats, you passed all my tests!"
"""
Explanation: (B) Using the tally dictionary you created above, figure out which sequence length was the most common, and print it to the screen.
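One way to approach this, assuming a tally dictionary shaped like the one from part (A) (the counts below are illustrative):

```python
# Find the key with the largest count in a tally dictionary.
tally = {3: 2, 6: 5, 9: 1}
most_common_length = max(tally, key=tally.get)
print(most_common_length)  # 6
```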
5. Codon table (6pts)
For this question, use codon_table.txt, which contains a list of all possible codons and their corresponding amino acids. We will be using this info to create a dictionary, which will allow us to translate a nucleotide sequence into amino acids. Each part of this question builds off the previous parts.
(A) Thinkin' question (short answer, not code): If we want to create a codon dictionary and use it to translate nucleotide sequences, would it be better to use the codons or amino acids as keys?
Your answer:
(B) Read in codon_table.txt (note that it has a header line) and use it to create a codon dictionary. Then use raw_input() prompt the user to enter a single codon (e.g. ATG) and print the amino acid corresponding to that codon to the screen.
(C) Now we will adapt the code in (B) to translate a longer sequence. Instead of prompting the user for a single codon, allow them to enter a longer sequence. First, check that the sequence they entered has a length that is a multiple of 3 (Hint: use the mod operator, %), and print an error message if it is not. If it is valid, then go on to translate every three nucleotides to an amino acid. Print the final amino acid sequence to the screen.
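The length check and the three-at-a-time walk described in (C) can be sketched like this; the two-entry codon table is illustrative only:

```python
codon_table = {"ATG": "M", "TGG": "W"}  # tiny illustrative table
seq = "ATGTGG"
if len(seq) % 3 != 0:
    print("Error: sequence length is not a multiple of 3")
else:
    protein = ""
    for i in range(0, len(seq), 3):
        protein += codon_table[seq[i:i+3]]
    print(protein)  # MW
```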
(D) Now, instead of taking user input, you will apply your translator to a set of sequences stored in a file. Read in the sequences from sequences3.txt (assume each line is a separate sequence), translate it to amino acids, and print it to a new file called proteins.txt.
Bonus question: Parsing fasta files (+2 bonus pts)
This question is optional, but if you complete it, I'll give you two bonus points. You won't lose points if you skip it.
Write code that reads sequences from a fasta file and stores them in a dictionary according to their header (i.e. use the header line as the key and sequence as the value). You will use horrible.fasta to test your code.
If you are not familiar with fasta files, they have the following general format:
```
geneName1
ATCGCTAGTCGATCGATGGTTTCGCGTAGCGTTGCTAGCGTAGCTGATG
TCGATCGATGGTTTCGCGTAGCGTTGCTAGCGTAGCTGATGATGCTCAA
GCTGGATGGCTAGCTGATGCTAG
geneName2
ATCGATGGGCTGGATCGATGCGGCTCGGCGATCGA
...
```
There are many slight variations; for example the header often contains different information, depending where you got the file from, and the sequence for a given entry may span any number of lines. To write a good fasta parser, you must make as few assumptions about the formatting as possible. This will make your code more "robust".
For fasta files, pretty much the only things you can safely assume are that a new entry will be marked by the > sign, which is immediately followed by a (usually) unique header, and all sequence belonging to that entry will be located immediately below. However, you can't assume how many lines the sequence will take up.
With this in mind, write a robust fasta parser that reads in horrible.fasta and stores each sequence in a dictionary according to its header line. Call the dictionary seqDict. Remove any newline characters. Don't include the > sign in the header. Hint: use string slicing or .lstrip()
After you've written your code above and you think it works, run it and then run the following code to to spot-check whether you did everything correctly. If you didn't name your dictionary seqDict, you'll need to change it below to whatever you named your dictionary.
End of explanation
"""
|
changshuaiwei/Udc-ML | student_intervention/student_intervention.ipynb | gpl-3.0 | # Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
#set global seed
global_seed = 0
"""
Explanation: Machine Learning Engineer Nanodegree
Supervised Learning
Project 2: Building a Student Intervention System
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Question 1 - Classification vs. Regression
Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?
Answer: Classification. We need to predict a discrete label for each student: 1) the student will fail to graduate, or 2) the student will not fail to graduate.
Exploring the Data
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, 'passed', will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
End of explanation
"""
# TODO: Calculate number of students
n_students = student_data.shape[0]
# TODO: Calculate number of features
n_features = student_data.shape[1] -1
# TODO: Calculate passing students
n_passed = np.sum(student_data.passed == 'yes')
# TODO: Calculate failing students
n_failed = np.sum(student_data.passed =='no')
# TODO: Calculate graduation rate
grad_rate = 100 * np.mean(student_data.passed == 'yes')
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
"""
Explanation: Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, n_students.
- The total number of features for each student, n_features.
- The number of those students who passed, n_passed.
- The number of those students who failed, n_failed.
- The graduation rate of the class, grad_rate, in percent (%).
End of explanation
"""
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
"""
Explanation: Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
End of explanation
"""
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
"""
Explanation: Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
End of explanation
"""
# TODO: Import any additional functionality you may need here
from sklearn import cross_validation
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X_all, y_all, test_size=num_test, random_state = global_seed)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
"""
Explanation: Implementation: Training and Testing Data Split
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train, and y_test.
End of explanation
"""
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
"""
Explanation: Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
The following supervised learning models are currently available in scikit-learn that you may choose from:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
Question 2 - Model Application
List three supervised learning models that are appropriate for this problem. For each model chosen
- Describe one real-world application in industry where the model can be applied. (You may need to do a small bit of research for this — give references!)
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
Answer: Adaboost, Support Vector Machine, Logistic regression
* Adaboost
* Example: predict whether a movie will be profitable.
* Strength: fits a sequence of weak learners on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. As iterations proceed, examples that are difficult to predict receive ever-increasing influence. Each subsequent weak learner is thereby forced to concentrate on the examples that are missed by the previous ones in the sequence. (http://scikit-learn.org/stable/modules/ensemble.html#adaboost)
* Weakness: Prediction performance depends on the weak learner. For example, if a decision tree is used as the weak learner, the model cannot be well generalized to the space outside of the sample space. Like most ensemble models, it is computationally more expensive.
* Reasons: the data has a moderate number of features and data points, which suits decision tree learning. However, decision trees are subject to overfitting. Tree pruning can be a solution but sometimes results in an over-simplified model. Therefore, I use an ensemble model here.
- Support Vector Machine
* Example: predict bankruptcy.
* Strength: different kernels to choose from. Effective when the number of features is larger than the number of data points.
* Weakness: Less interpretable
* Reasons: This is a classification problem with a moderate number of features. SVM is flexible with different kernels to choose from and is effective when there is a large number of features.
- Logistic Regression.
* Examples: Yahoo search engine.
* Strength: probabilistic framework, results easyly interprateable, effective when number of feature larger than number of data points (need to use regularization)
* Weakness: assume one smooth linear decision boundary. When there are multiple or non-linear decision boundary, LR will have difficulty.
* Reasons: The data is a classfification problem with modertate number of features. There are only two classese to predict. Logistic regression fit for this situation and is computationally efficient for the problem.
Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- train_classifier - takes as input a classifier and training data and fits the classifier to the data.
- predict_labels - takes as input a fit classifier, features, and a target labeling, makes predictions, and scores them using the F<sub>1</sub> score.
- train_predict - takes as input a classifier, and the training and testing data, and performs train_classifier and predict_labels.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
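The setup cell itself is provided by the project and not reproduced here; as a rough sketch (the exact implementation may differ in details such as timing output), the three helpers could look something like this:

```python
import time
from sklearn.metrics import f1_score

def train_classifier(clf, X_train, y_train):
    """Fit the classifier to the training data, reporting wall-clock time."""
    start = time.time()
    clf.fit(X_train, y_train)
    print("Trained model in {:.4f} seconds".format(time.time() - start))

def predict_labels(clf, features, target):
    """Make predictions with a fit classifier and score them with F1."""
    start = time.time()
    y_pred = clf.predict(features)
    print("Made predictions in {:.4f} seconds".format(time.time() - start))
    # 'yes' is the positive label in the student dataset
    return f1_score(target, y_pred, pos_label='yes')

def train_predict(clf, X_train, y_train, X_test, y_test):
    """Train on the given set and report F1 on train and test separately."""
    print("Training a {} using a training set size of {}...".format(
        clf.__class__.__name__, len(X_train)))
    train_classifier(clf, X_train, y_train)
    print("F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train)))
    print("F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test)))
```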
End of explanation
"""
# TODO: Import the three supervised learning models from sklearn
# from sklearn import model_A
# from sklearn import model_B
# from sklearn import model_C
#from sklearn import tree
from sklearn import svm
from sklearn import ensemble
from sklearn import linear_model
# TODO: Initialize the three models
#clf_C = tree.DecisionTreeClassifier(min_samples_split=20, min_samples_leaf=10, random_state=global_seed)
clf_A = ensemble.AdaBoostClassifier(n_estimators=50, learning_rate=1, random_state=global_seed)
clf_B = svm.SVC(kernel="rbf",C=1, random_state=global_seed)
clf_C = linear_model.LogisticRegression(penalty='l1',random_state=global_seed)
#clf_C = linear_model.SGDClassifier(n_iter=50,random_state=global_seed)
#clf_C = ensemble.GradientBoostingClassifier(n_estimators=100, learning_rate=0.95, max_depth = 3, random_state=global_seed)
# TODO: Set up the training set sizes
#X_train_100 = X_train.iloc[:100]
#y_train_100 = y_train.iloc[:100]
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train[:300]
y_train_300 = y_train[:300]
# TODO: Execute the 'train_predict' function for each classifier and each training set size
train_predict(clf_A, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_A, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_A, X_train_300, y_train_300, X_test, y_test)
"""
Explanation: Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in clf_A, clf_B, and clf_C.
- Use a random_state for each model you use, if provided.
- Note: Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- Do not reshuffle and resplit the data! The new training points should be drawn from X_train and y_train.
- Fit each model with each training set size and make predictions on the test set (9 in total).
Note: Three tables are provided after the following code cell which can be used to store your results.
End of explanation
"""
train_predict(clf_B, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_B, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_B, X_train_300, y_train_300, X_test, y_test)
"""
Explanation: Classifier 1 - Adaboost
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.069 | 0.0020 | 0.9538 | 0.7200 |
| 200 | 0.047 | 0.000 | 0.8826 | 0.8058 |
| 300 | 0.062 | 0.016 | 0.8688 | 0.7794 |
End of explanation
"""
train_predict(clf_C, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_C, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_C, X_train_300, y_train_300, X_test, y_test)
"""
Explanation: Classifier 2 - SVM
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.015 | 0.00 | 0.8591 | 0.7838 |
| 200 | 0.0 | 0.005 | 0.8693 | 0.7755 |
| 300 | 0.019 | 0.002 | 0.8692 | 0.7586 |
End of explanation
"""
clf_D = svm.SVC(kernel="linear",C=1, random_state=global_seed)
train_predict(clf_D, X_train_100, y_train_100, X_test, y_test)
train_predict(clf_D, X_train_200, y_train_200, X_test, y_test)
train_predict(clf_D, X_train_300, y_train_300, X_test, y_test)
"""
Explanation: Classifier 3 - Logistic Regression
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.00 | 0 | 0.8421 | 0.7591 |
| 200 | 0.0030 | 0.00 | 0.8235 | 0.7883 |
| 300 | 0.004 | 0.00 | 0.8282 | 0.7826 |
Tabular Results
Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.
Classifier 1 - Adaboost
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.069 | 0.0020 | 0.9538 | 0.7200 |
| 200 | 0.047 | 0.000 | 0.8826 | 0.8058 |
| 300 | 0.062 | 0.016 | 0.8688 | 0.7794 |
Classifier 2 - SVM
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.015 | 0.00 | 0.8591 | 0.7838 |
| 200 | 0.0 | 0.005 | 0.8693 | 0.7755 |
| 300 | 0.019 | 0.002 | 0.8692 | 0.7586 |
Classifier 3 - Logistic Regression
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.00 | 0 | 0.8421 | 0.7591 |
| 200 | 0.0030 | 0.00 | 0.8235 | 0.7883 |
| 300 | 0.004 | 0.00 | 0.8282 | 0.7826 |
Choosing the Best Model
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
Question 3 - Choosing the Best Model
Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
Answer: Based on the experiments, I choose logistic regression. The model has a similar running time to SVM but higher prediction accuracy, and similar prediction accuracy to Adaboost with a shorter running time.
Given this, we should expect that using a linear kernel for SVM would yield similar prediction accuracy. The following result verified this guess; however, linear SVM appears to cost more time than logistic regression in sklearn.
End of explanation
"""
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer, f1_score
# TODO: Create the parameters list you wish to tune
#parameters = [
# {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
# {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
# ]
#parameters = {'C': [0.01, 0.1, 1, 10, 100, 1000], 'kernel': ['linear','rbf']}
parameters = {'penalty': ['l1','l2'], 'C':[ 0.01, 0.1, 1, 10, 100, 500]}
# TODO: Initialize the classifier
#clf = svm.SVC()
clf = linear_model.LogisticRegression(random_state=global_seed)
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf,param_grid = parameters, scoring = f1_scorer, cv=5, n_jobs=5)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
grid_obj.best_params_
# fine tune around C=0.1
parameters = {'penalty': ['l1'], 'C':[ 0.025, 0.05, 0.1, 0.2, 0.4]}
# TODO: Initialize the classifier
#clf = svm.SVC()
clf = linear_model.LogisticRegression(random_state=global_seed)
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf,param_grid = parameters, scoring = f1_scorer, cv=5, n_jobs=5)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
grid_obj.best_params_
"""
Explanation: Question 4 - Model in Layman's Terms
In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.
Answer: We will use logistic regression to predict student performance. Logistic regression estimates how likely (the probability) a student is to pass. In particular, it first calculates a score as a weighted sum of the features for each student. Then a function transforms the score into a value between 0 and 1 (i.e., a probability) indicating how likely the student is to pass.
If the probability is larger than a pre-specified value (for example, 0.5), then we predict "pass". Otherwise, we predict "not pass".
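As a toy illustration of that description (the feature values, weights, bias, and threshold below are made up for the example, not learned from the student data):

```python
import math

def predict_pass(features, weights, bias, threshold=0.5):
    # Step 1: the "score" is a weighted sum of the student's features.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    # Step 2: the logistic function squashes the score into a probability in (0, 1).
    probability = 1.0 / (1.0 + math.exp(-score))
    # Step 3: compare against the pre-specified cut-off.
    label = 'pass' if probability >= threshold else 'not pass'
    return label, probability

# A student described by two hypothetical features:
print(predict_pass([1.0, 0.5], weights=[2.0, -1.0], bias=-0.5))
```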
Implementation: Model Tuning
Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: parameters = {'parameter' : [list of values]}.
- Initialize the classifier you've chosen and store it in clf.
- Create the F<sub>1</sub> scoring function using make_scorer and store it in f1_scorer.
- Set the pos_label parameter to the correct value!
- Perform grid search on the classifier clf using f1_scorer as the scoring method, and store it in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store it in grid_obj.
End of explanation
"""
turbomanage/training-data-analyst | courses/machine_learning/deepdive/09_sequence_keras/poetry.ipynb | apache-2.0
pip freeze | grep tensor
# Choose a version of TensorFlow that is supported on TPUs
TFVERSION='1.13'
import os
os.environ['TFVERSION'] = TFVERSION
%%bash
pip install tensor2tensor==${TFVERSION} gutenberg
# install from source
#git clone https://github.com/tensorflow/tensor2tensor.git
#cd tensor2tensor
#yes | pip install --user -e .
"""
Explanation: Text generation using tensor2tensor on Cloud ML Engine
This notebook illustrates using the <a href="https://github.com/tensorflow/tensor2tensor">tensor2tensor</a> library to do from-scratch, distributed training of a poetry model. Then, the trained model is used to complete new poems.
<br/>
Install tensor2tensor, and specify Google Cloud Platform project and bucket
Install the necessary packages. tensor2tensor will give us the Transformer model. Project Gutenberg gives us access to historical poems.
<b>p.s.</b> Note that this notebook uses Python2 because Project Gutenberg relies on BSD-DB which was deprecated in Python 3 and removed from the standard library.
tensor2tensor itself can be used on Python 3. It's just Project Gutenberg that has this issue.
End of explanation
"""
%%bash
pip freeze | grep tensor
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# this is what this notebook is demonstrating
PROBLEM= 'poetry_line_problem'
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['PROBLEM'] = PROBLEM
#os.environ['PATH'] = os.environ['PATH'] + ':' + os.getcwd() + '/tensor2tensor/tensor2tensor/bin/'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
"""
Explanation: If the following cell does not reflect the version of tensorflow and tensor2tensor that you just installed, click "Reset Session" on the notebook so that the Python environment picks up the new packages.
End of explanation
"""
%%bash
rm -rf data/poetry
mkdir -p data/poetry
from gutenberg.acquire import load_etext
from gutenberg.cleanup import strip_headers
import re
books = [
# bookid, skip N lines
(26715, 1000, 'Victorian songs'),
(30235, 580, 'Baldwin collection'),
(35402, 710, 'Swinburne collection'),
(574, 15, 'Blake'),
(1304, 172, 'Bulchevys collection'),
(19221, 223, 'Palgrave-Pearse collection'),
(15553, 522, 'Knowles collection')
]
with open('data/poetry/raw.txt', 'w') as ofp:
lineno = 0
for (id_nr, toskip, title) in books:
startline = lineno
text = strip_headers(load_etext(id_nr)).strip()
lines = text.split('\n')[toskip:]
# any line that is all upper case is a title or author name
# also don't want any lines with years (numbers)
for line in lines:
if (len(line) > 0
and line.upper() != line
and not re.match('.*[0-9]+.*', line)
and len(line) < 50
):
cleaned = re.sub('[^a-z\'\-]+', ' ', line.strip().lower())
ofp.write(cleaned)
ofp.write('\n')
lineno = lineno + 1
else:
ofp.write('\n')
print('Wrote lines {} to {} from {}'.format(startline, lineno, title))
!wc -l data/poetry/*.txt
"""
Explanation: Download data
We will get some <a href="https://www.gutenberg.org/wiki/Poetry_(Bookshelf)">poetry anthologies</a> from Project Gutenberg.
End of explanation
"""
with open('data/poetry/raw.txt', 'r') as rawfp,\
open('data/poetry/input.txt', 'w') as infp,\
open('data/poetry/output.txt', 'w') as outfp:
prev_line = ''
for curr_line in rawfp:
curr_line = curr_line.strip()
# poems break at empty lines, so this ensures we train only
# on lines of the same poem
if len(prev_line) > 0 and len(curr_line) > 0:
infp.write(prev_line + '\n')
outfp.write(curr_line + '\n')
prev_line = curr_line
!head -5 data/poetry/*.txt
"""
Explanation: Create training dataset
We are going to train a machine learning model to write poetry given a starting point. We'll give it one line, and it is going to tell us the next line. So, naturally, we will train it on real poetry. Our feature will be a line of a poem and the label will be the next line of that poem.
<p>
Our training dataset will consist of two files. The first file will consist of the input lines of poetry and the other file will consist of the corresponding output lines, one output line per input line.
End of explanation
"""
%%bash
rm -rf poetry
mkdir -p poetry/trainer
%%writefile poetry/trainer/problem.py
import os
import tensorflow as tf
from tensor2tensor.utils import registry
from tensor2tensor.models import transformer
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_encoder
from tensor2tensor.data_generators import text_problems
from tensor2tensor.data_generators import generator_utils
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
@registry.register_problem
class PoetryLineProblem(text_problems.Text2TextProblem):
"""Predict next line of poetry from the last line. From Gutenberg texts."""
@property
def approx_vocab_size(self):
return 2**13 # ~8k
@property
def is_generate_per_split(self):
# generate_data will NOT shard the data into TRAIN and EVAL for us.
return False
@property
def dataset_splits(self):
"""Splits of data to produce and number of output shards for each."""
# 10% evaluation data
return [{
"split": problem.DatasetSplit.TRAIN,
"shards": 90,
}, {
"split": problem.DatasetSplit.EVAL,
"shards": 10,
}]
def generate_samples(self, data_dir, tmp_dir, dataset_split):
with open('data/poetry/raw.txt', 'r') as rawfp:
prev_line = ''
for curr_line in rawfp:
curr_line = curr_line.strip()
# poems break at empty lines, so this ensures we train only
# on lines of the same poem
if len(prev_line) > 0 and len(curr_line) > 0:
yield {
"inputs": prev_line,
"targets": curr_line
}
prev_line = curr_line
# Smaller than the typical translate model, and with more regularization
@registry.register_hparams
def transformer_poetry():
hparams = transformer.transformer_base()
hparams.num_hidden_layers = 2
hparams.hidden_size = 128
hparams.filter_size = 512
hparams.num_heads = 4
hparams.attention_dropout = 0.6
hparams.layer_prepostprocess_dropout = 0.6
hparams.learning_rate = 0.05
return hparams
@registry.register_hparams
def transformer_poetry_tpu():
hparams = transformer_poetry()
transformer.update_hparams_for_tpu(hparams)
return hparams
# hyperparameter tuning ranges
@registry.register_ranged_hparams
def transformer_poetry_range(rhp):
rhp.set_float("learning_rate", 0.05, 0.25, scale=rhp.LOG_SCALE)
rhp.set_int("num_hidden_layers", 2, 4)
rhp.set_discrete("hidden_size", [128, 256, 512])
rhp.set_float("attention_dropout", 0.4, 0.7)
%%writefile poetry/trainer/__init__.py
from . import problem
%%writefile poetry/setup.py
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = [
'tensor2tensor'
]
setup(
name='poetry',
version='0.1',
author = 'Google',
author_email = 'training-feedback@cloud.google.com',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Poetry Line Problem',
requires=[]
)
!touch poetry/__init__.py
!find poetry
"""
Explanation: We do not need to generate the data beforehand -- instead, we can have Tensor2Tensor create the training dataset for us. So, in the code below, I will use only data/poetry/raw.txt -- obviously, this allows us to productionize our model better. Simply keep collecting raw data and generate the training/test data at the time of training.
Set up problem
The Problem in tensor2tensor is where you specify parameters like the size of your vocabulary and where to get the training data from.
End of explanation
"""
%%bash
DATA_DIR=./t2t_data
TMP_DIR=$DATA_DIR/tmp
rm -rf $DATA_DIR $TMP_DIR
mkdir -p $DATA_DIR $TMP_DIR
# Generate data
t2t-datagen \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR
"""
Explanation: Generate training data
Our problem requires the creation of text sequences from the training dataset, much like a translation problem. This is done using t2t-datagen and the Problem defined in the previous section.
(Ignore any runtime warnings about np.float64; they are harmless.)
End of explanation
"""
!ls t2t_data | head
"""
Explanation: Let's check to see the files that were output. If you see a broken pipe error, please ignore.
End of explanation
"""
%%bash
DATA_DIR=./t2t_data
gsutil -m rm -r gs://${BUCKET}/poetry/
gsutil -m cp ${DATA_DIR}/${PROBLEM}* ${DATA_DIR}/vocab* gs://${BUCKET}/poetry/data
%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print(response['serviceAccount'])")
echo "Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET
"""
Explanation: Provide Cloud ML Engine access to data
Copy the data to Google Cloud Storage, and then provide access to the data. gsutil throws an error when removing an empty bucket, so you may see an error the first time this code is run.
End of explanation
"""
%%bash
BASE=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/subset
gsutil -m rm -r $OUTDIR
gsutil -m cp \
${BASE}/${PROBLEM}-train-0008* \
${BASE}/${PROBLEM}-dev-00000* \
${BASE}/vocab* \
$OUTDIR
"""
Explanation: Train model locally on subset of data
Let's run it locally on a subset of the data to make sure it works.
End of explanation
"""
%%bash
DATA_DIR=gs://${BUCKET}/poetry/subset
OUTDIR=./trained_model
rm -rf $OUTDIR
t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR --job-dir=$OUTDIR --train_steps=10
"""
Explanation: Note: the following will work only if you are running Jupyter on a reasonably powerful machine. Don't be alarmed if your process is killed.
End of explanation
"""
%%bash
LOCALGPU="--train_steps=7500 --worker_gpu=1 --hparams_set=transformer_poetry"
DATA_DIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model
rm -rf $OUTDIR
t2t-trainer \
--data_dir=${DATA_DIR} \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR ${LOCALGPU}
"""
Explanation: Option 1: Train model locally on full dataset (use if running on Notebook Instance with a GPU)
You can train on the full dataset if you are on a Google Cloud Notebook Instance with a P100 or better GPU
End of explanation
"""
%%bash
GPU="--train_steps=7500 --cloud_mlengine --worker_gpu=1 --hparams_set=transformer_poetry"
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
yes Y | t2t-trainer \
--data_dir=${DATADIR} \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--output_dir=$OUTDIR \
${GPU}
%%bash
## CHANGE the job name (based on output above: You will see a line such as Launched transformer_poetry_line_problem_t2t_20190322_233159)
gcloud ml-engine jobs describe transformer_poetry_line_problem_t2t_20190323_003001
"""
Explanation: Option 2: Train on Cloud ML Engine
tensor2tensor has a convenient --cloud_mlengine option to kick off the training on the managed service.
It uses the Python API mentioned in the Cloud ML Engine docs, rather than requiring you to use gcloud to submit the job.
<p>
Note: your project needs P100 quota in the region.
<p>
The echo is because t2t-trainer asks you to confirm before submitting the job to the cloud. Ignore any error about "broken pipe".
If you see a message similar to this:
<pre>
[... cloud_mlengine.py:392] Launched transformer_poetry_line_problem_t2t_20190323_000631. See console to track: https://console.cloud.google.com/mlengine/jobs/.
</pre>
then, this step has been successful.
End of explanation
"""
%%bash
# use one of these
TPU="--train_steps=7500 --use_tpu=True --cloud_tpu_name=laktpu --hparams_set=transformer_poetry_tpu"
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model_tpu
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
echo "'Y'" | t2t-trainer \
--data_dir=${DATADIR} \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--output_dir=$OUTDIR \
${TPU}
%%bash
gsutil ls gs://${BUCKET}/poetry/model_tpu
"""
Explanation: The job took about <b>25 minutes</b> for me and ended with these evaluation metrics:
<pre>
Saving dict for global step 8000: global_step = 8000, loss = 6.03338, metrics-poetry_line_problem/accuracy = 0.138544, metrics-poetry_line_problem/accuracy_per_sequence = 0.0, metrics-poetry_line_problem/accuracy_top5 = 0.232037, metrics-poetry_line_problem/approx_bleu_score = 0.00492648, metrics-poetry_line_problem/neg_log_perplexity = -6.68994, metrics-poetry_line_problem/rouge_2_fscore = 0.00256089, metrics-poetry_line_problem/rouge_L_fscore = 0.128194
</pre>
Notice that accuracy_per_sequence is 0 -- Considering that we are asking the NN to be rather creative, that doesn't surprise me. Why am I looking at accuracy_per_sequence and not the other metrics? This is because it is more appropriate for the problem we are solving; metrics like the BLEU score are better for translation.
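For intuition (a simplified sketch, not tensor2tensor's actual implementation), accuracy_per_sequence only credits a prediction when every token of the output line matches the target, which is why it is such a harsh, and appropriate, measure here:

```python
def accuracy_per_sequence(predictions, targets):
    # A prediction counts only if the ENTIRE output sequence matches the target.
    exact_matches = sum(1 for p, t in zip(predictions, targets) if p == t)
    return exact_matches / float(len(targets))

preds   = [['where', 'did', 'he', 'go'], ['all', 'day', 'long']]
targets = [['where', 'did', 'he', 'go'], ['all', 'night', 'long']]
print(accuracy_per_sequence(preds, targets))  # 0.5: one of two lines is an exact match
```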
Option 3: Train on a directly-connected TPU
If you are running on a VM connected directly to a Cloud TPU, you can run t2t-trainer directly. Unfortunately, you won't see any output from Jupyter while the program is running.
Compare this command line to the one using GPU in the previous section.
End of explanation
"""
%%bash
XXX This takes 3 hours on 4 GPUs. Remove this line if you are sure you want to do this.
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model_full2
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
echo "'Y'" | t2t-trainer \
--data_dir=${DATADIR} \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR \
--train_steps=75000 --cloud_mlengine --worker_gpu=4
"""
Explanation: The job took about <b>10 minutes</b> for me and ended with these evaluation metrics:
<pre>
Saving dict for global step 8000: global_step = 8000, loss = 6.03338, metrics-poetry_line_problem/accuracy = 0.138544, metrics-poetry_line_problem/accuracy_per_sequence = 0.0, metrics-poetry_line_problem/accuracy_top5 = 0.232037, metrics-poetry_line_problem/approx_bleu_score = 0.00492648, metrics-poetry_line_problem/neg_log_perplexity = -6.68994, metrics-poetry_line_problem/rouge_2_fscore = 0.00256089, metrics-poetry_line_problem/rouge_L_fscore = 0.128194
</pre>
Notice that accuracy_per_sequence is 0 -- Considering that we are asking the NN to be rather creative, that doesn't surprise me. Why am I looking at accuracy_per_sequence and not the other metrics? This is because it is more appropriate for the problem we are solving; metrics like the BLEU score are better for translation.
Option 4: Training longer
Let's train on 4 GPUs for 75,000 steps. Note the change in the last line of the job.
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/poetry/model #_modeltpu
"""
Explanation: This job took <b>12 hours</b> for me and ended with these metrics:
<pre>
global_step = 76000, loss = 4.99763, metrics-poetry_line_problem/accuracy = 0.219792, metrics-poetry_line_problem/accuracy_per_sequence = 0.0192308, metrics-poetry_line_problem/accuracy_top5 = 0.37618, metrics-poetry_line_problem/approx_bleu_score = 0.017955, metrics-poetry_line_problem/neg_log_perplexity = -5.38725, metrics-poetry_line_problem/rouge_2_fscore = 0.0325563, metrics-poetry_line_problem/rouge_L_fscore = 0.210618
</pre>
At least the accuracy per sequence is no longer zero. It is now 0.0192308 ... note that we are using a relatively small dataset (12K lines) and this is tiny in the world of natural language problems.
<p>
To set your expectations correctly: a high-performing translation model needs 400 million lines of input and takes a whole day on a TPU pod!
## Check trained model
End of explanation
"""
%%writefile data/poetry/rumi.txt
Where did the handsome beloved go?
I wonder, where did that tall, shapely cypress tree go?
He spread his light among us like a candle.
Where did he go? So strange, where did he go without me?
All day long my heart trembles like a leaf.
All alone at midnight, where did that beloved go?
Go to the road, and ask any passing traveler —
That soul-stirring companion, where did he go?
Go to the garden, and ask the gardener —
That tall, shapely rose stem, where did he go?
Go to the rooftop, and ask the watchman —
That unique sultan, where did he go?
Like a madman, I search in the meadows!
That deer in the meadows, where did he go?
My tearful eyes overflow like a river —
That pearl in the vast sea, where did he go?
All night long, I implore both moon and Venus —
That lovely face, like a moon, where did he go?
If he is mine, why is he with others?
Since he’s not here, to what “there” did he go?
If his heart and soul are joined with God,
And he left this realm of earth and water, where did he go?
Tell me clearly, Shams of Tabriz,
Of whom it is said, “The sun never dies” — where did he go?
"""
Explanation: Batch-predict
How will our poetry model do when faced with Rumi's spiritual couplets?
End of explanation
"""
%%bash
awk 'NR % 2 == 1' data/poetry/rumi.txt | tr '[:upper:]' '[:lower:]' | sed "s/[^a-z\'-\ ]//g" > data/poetry/rumi_leads.txt
head -3 data/poetry/rumi_leads.txt
%%bash
# same as the above training job ...
TOPDIR=gs://${BUCKET}
OUTDIR=${TOPDIR}/poetry/model #_tpu # or ${TOPDIR}/poetry/model_full
DATADIR=${TOPDIR}/poetry/data
MODEL=transformer
HPARAMS=transformer_poetry #_tpu
# the file with the input lines
DECODE_FILE=data/poetry/rumi_leads.txt
BEAM_SIZE=4
ALPHA=0.6
t2t-decoder \
--data_dir=$DATADIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$OUTDIR \
--t2t_usr_dir=./poetry/trainer \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_from_file=$DECODE_FILE
"""
Explanation: Let's write out the odd-numbered lines. We'll compare how close our model can get to the beauty of Rumi's second lines given his first.
End of explanation
"""
%%bash
DECODE_FILE=data/poetry/rumi_leads.txt
cat ${DECODE_FILE}.*.decodes
"""
Explanation: <b> Note </b> if you get an error about "AttributeError: 'HParams' object has no attribute 'problems'" please <b>Reset Session</b>, run the cell that defines the PROBLEM and run the above cell again.
End of explanation
"""
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/poetry/model_full'.format(BUCKET))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
"""
Explanation: Some of these are still phrases and not complete sentences. This indicates that we might need to train longer or better somehow. We need to diagnose the model ...
<p>
### Diagnosing training run
<p>
Let's diagnose the training run to see what we'd improve the next time around.
(Note that this package may not be present on Jupyter -- `pip install pydatalab` if necessary)
End of explanation
"""
%%bash
# XXX This takes about 15 hours and consumes about 420 ML units. Uncomment if you wish to proceed anyway
DATADIR=gs://${BUCKET}/poetry/data
OUTDIR=gs://${BUCKET}/poetry/model_hparam
JOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
echo "'Y'" | t2t-trainer \
--data_dir=gs://${BUCKET}/poetry/subset \
--t2t_usr_dir=./poetry/trainer \
--problem=$PROBLEM \
--model=transformer \
--hparams_set=transformer_poetry \
--output_dir=$OUTDIR \
--hparams_range=transformer_poetry_range \
--autotune_objective='metrics-poetry_line_problem/accuracy_per_sequence' \
--autotune_maximize \
--autotune_max_trials=4 \
--autotune_parallel_trials=4 \
--train_steps=7500 --cloud_mlengine --worker_gpu=4
"""
Explanation: <table>
<tr>
<td><img src="diagrams/poetry_loss.png"/></td>
<td><img src="diagrams/poetry_acc.png"/></td>
</table>
Looking at the loss curve, it is clear that we are overfitting (note that the orange training curve is well below the blue eval curve). Both the loss curves and the accuracy-per-sequence curve, which is our key evaluation measure, plateau after 40k steps. (The red curve is a faster way of computing the evaluation metric and can be ignored.) So, how do we improve the model? We need to reduce overfitting and make sure the eval metrics keep improving as long as the training loss is still going down.
<p>
What we really need to do is to get more data, but if that's not an option, we could try to reduce the NN and increase the dropout regularization. We could also do hyperparameter tuning on the dropout and network sizes.
## Hyperparameter tuning
tensor2tensor also supports hyperparameter tuning on Cloud ML Engine. Note the addition of the autotune flags.
<p>
The `transformer_poetry_range` was registered in problem.py above.
End of explanation
"""
%%bash
# same as the above training job ...
BEST_TRIAL=37 # CHANGE as needed to your best trialId
TOPDIR=gs://${BUCKET}
OUTDIR=${TOPDIR}/poetry/model_hparam/$BEST_TRIAL
DATADIR=${TOPDIR}/poetry/data
MODEL=transformer
HPARAMS=transformer_poetry
# the file with the input lines
DECODE_FILE=data/poetry/rumi_leads.txt
BEAM_SIZE=4
ALPHA=0.6
t2t-decoder \
--data_dir=$DATADIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$OUTDIR \
--t2t_usr_dir=./poetry/trainer \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_from_file=$DECODE_FILE \
--hparams="num_hidden_layers=4,hidden_size=512"
%%bash
DECODE_FILE=data/poetry/rumi_leads.txt
cat ${DECODE_FILE}.*.decodes
"""
Explanation: When I ran the above job, it took about 15 hours and finished with these as the best parameters:
<pre>
{
"trialId": "37",
"hyperparameters": {
"hp_num_hidden_layers": "4",
"hp_learning_rate": "0.026711152525921437",
"hp_hidden_size": "512",
"hp_attention_dropout": "0.60589466163419292"
},
"finalMetric": {
"trainingStep": "8000",
"objectiveValue": 0.0276162791997
}
</pre>
In other words, the accuracy per sequence achieved was 0.027 (compared to 0.019 before hyperparameter tuning, so a <b>40% improvement!</b>) using 4 hidden layers, a learning rate of 0.0267, a hidden size of 512, and a dropout probability of 0.606. This is in spite of training for only 7,500 steps instead of 75,000 ... we could train for 75k steps with these parameters, but I'll leave that as an exercise for you.
<p>
Instead, let's try predicting with this optimized model. Note the hparams flag added to override the values hardcoded in the source code. (There is no need to specify the learning rate and dropout because they are not used during inference.) I am using 37 because I got the best result at trialId=37.
End of explanation
"""
%%bash
TOPDIR=gs://${BUCKET}
OUTDIR=${TOPDIR}/poetry/model_full2
DATADIR=${TOPDIR}/poetry/data
MODEL=transformer
HPARAMS=transformer_poetry
BEAM_SIZE=4
ALPHA=0.6
t2t-exporter \
--model=$MODEL \
--hparams_set=$HPARAMS \
--problem=$PROBLEM \
--t2t_usr_dir=./poetry/trainer \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--data_dir=$DATADIR \
--output_dir=$OUTDIR
%%bash
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)
echo $MODEL_LOCATION
saved_model_cli show --dir $MODEL_LOCATION --tag_set serve --signature_def serving_default
"""
Explanation: Take the first three lines. I'm showing the first line of the couplet provided to the model, how the AI model that we trained completes it, and how Rumi completes it:
<p>
INPUT: where did the handsome beloved go <br/>
AI: where art thou worse to me than dead <br/>
RUMI: I wonder, where did that tall, shapely cypress tree go?
<p>
INPUT: he spread his light among us like a candle <br/>
AI: like the hurricane eclipse <br/>
RUMI: Where did he go? So strange, where did he go without me? <br/>
<p>
INPUT: all day long my heart trembles like a leaf <br/>
AI: and through their hollow aisles it plays <br/>
RUMI: All alone at midnight, where did that beloved go?
<p>
Oh wow. The couplets as completed are quite decent considering that:
* We trained the model on American poetry, so feeding it Rumi is a bit out of left field.
* Rumi, of course, has a context and thread running through his lines while the AI (since it was fed only that one line) doesn't.
<p>
"Spreading light like a hurricane eclipse" is a metaphor I won't soon forget. And it was created by a machine learning model!
## Serving poetry
How would you serve these predictions? There are two ways:
<ol>
<li> Use [Cloud ML Engine](https://cloud.google.com/ml-engine/docs/deploying-models) -- this is serverless and you don't have to manage any infrastructure.
<li> Use [Kubeflow](https://github.com/kubeflow/kubeflow/blob/master/user_guide.md) on Google Kubernetes Engine -- this uses clusters but will also work on-prem on your own Kubernetes cluster.
</ol>
<p>
In either case, you need to export the model first and have TensorFlow serving serve the model. The model, however, expects to see *encoded* (i.e. preprocessed) data. So, we'll do that in the Python Flask application (in AppEngine Flex) that serves the user interface.
End of explanation
"""
%%writefile mlengine.json
description: Poetry service on ML Engine
autoScaling:
minNodes: 1 # We don't want this model to autoscale down to zero
%%bash
MODEL_NAME="poetry"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud alpha ml-engine versions create --machine-type=mls1-highcpu-4 ${MODEL_VERSION} \
--model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=1.5 --config=mlengine.json
%%bash
gcloud components update --quiet
gcloud components install alpha --quiet
%%bash
MODEL_NAME="poetry"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)
gcloud alpha ml-engine versions create --machine-type=mls1-highcpu-4 ${MODEL_VERSION} \
--model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=1.5 --config=mlengine.json
"""
Explanation: Cloud ML Engine
End of explanation
"""
!cat application/app.yaml
%%bash
cd application
#gcloud app create # if this is your first app
#gcloud app deploy --quiet --stop-previous-version app.yaml
"""
Explanation: Kubeflow
Follow these instructions:
* On the GCP console, launch a Google Kubernetes Engine (GKE) cluster named 'poetry' with 2 nodes, each of which is a n1-standard-2 (2 vCPUs, 7.5 GB memory) VM
* On the GCP console, click on the Connect button for your cluster, and choose the CloudShell option
* In CloudShell, run:
git clone https://github.com/GoogleCloudPlatform/training-data-analyst
cd training-data-analyst/courses/machine_learning/deepdive/09_sequence
* Look at ./setup_kubeflow.sh and modify as appropriate.
AppEngine
What's deployed in Cloud ML Engine or Kubeflow is only the TensorFlow model. We still need a preprocessing service. That is done using AppEngine. Edit application/app.yaml appropriately.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/8de61cd59c9d83353f96a413e8484686/compute_mne_inverse_raw_in_label.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_raw, read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname_inv = (
data_path / 'MEG' / 'sample' / 'sample_audvis-meg-oct-6-meg-inv.fif')
fname_raw = data_path / 'MEG' / 'sample' / 'sample_audvis_raw.fif'
label_name = 'Aud-lh'
fname_label = data_path / 'MEG' / 'sample' / 'labels' / f'{label_name}.label'
snr = 1.0 # use smaller SNR for raw data
lambda2 = 1.0 / snr ** 2
method = "sLORETA" # use sLORETA method (could also be MNE or dSPM)
# Load data
raw = mne.io.read_raw_fif(fname_raw)
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw.set_eeg_reference('average', projection=True) # set average reference.
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
# Compute inverse solution
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop, pick_ori=None)
# Save result in stc files
stc.save('mne_%s_raw_inverse_%s' % (method, label_name), overwrite=True)
"""
Explanation: Compute sLORETA inverse solution on raw data
Compute sLORETA inverse solution on raw dataset restricted
to a brain label and stores the solution in stc files for
visualisation.
End of explanation
"""
plt.plot(1e3 * stc.times, stc.data[::100, :].T)
plt.xlabel('time (ms)')
plt.ylabel('%s value' % method)
plt.show()
"""
Explanation: View activation time-series
End of explanation
"""
|
kirichoi/tellurium | examples/notebooks/core/roadrunnerBasics.ipynb | apache-2.0 | from __future__ import print_function
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
model = """
model test
compartment C1;
C1 = 1.0;
species S1, S2;
S1 = 10.0;
S2 = 0.0;
S1 in C1; S2 in C1;
J1: S1 -> S2; k1*S1;
k1 = 1.0;
end
"""
# load models
r = te.loada(model)
"""
Explanation: Back to the main Index
Model Loading
To load models use any the following functions. Each function takes a model with the corresponding format and converts it to a RoadRunner simulator instance.
te.loadAntimony (te.loada): Load an Antimony model.
te.loadSBML: Load an SBML model.
te.loadCellML: Load a CellML model (this passes the model through Antimony and converts it to SBML, may be lossy).
End of explanation
"""
# simulate from 0 to 50 with 100 steps
r.simulate(0, 50, 100)
# plot the simulation
r.plot()
"""
Explanation: Running Simulations
Simulating a model in RoadRunner is as simple as calling the simulate function on the RoadRunner instance r. simulate accepts three positional arguments: start time, end time, and number of points. It also accepts the keyword arguments selections, a list of variables to include in the output, and steps, the number of integration time steps, which can be specified instead of the number of points.
End of explanation
"""
# what is the current integrator?
print('The current integrator is:')
print(r.integrator)
# enable variable stepping
r.integrator.variable_step_size = True
# adjust the tolerances (can set directly or via setValue)
r.integrator.absolute_tolerance = 1e-3 # set directly via property
r.integrator.setValue('relative_tolerance', 1e-1) # set via a call to setValue
# run a simulation, stop after reaching or passing time 10
r.reset()
results = r.simulate(0, 10)
r.plot()
# print the time values from the simulation
print('Time values:')
print(results[:,0])
# set integrator to Gillespie solver
r.setIntegrator('gillespie')
# identical ways to set integrator
r.setIntegrator('rk4')
r.integrator = 'rk4'
# set back to cvode (the default)
r.setIntegrator('cvode')
# set integrator settings
r.integrator.setValue('variable_step_size', False)
r.integrator.setValue('stiff', True)
# print integrator settings
print(r.integrator)
"""
Explanation: Integrator and Integrator Settings
To set the integrator use r.setIntegrator(<integrator-name>) or r.integrator = <integrator-name>. RoadRunner supports 'cvode', 'gillespie', and 'rk4' for the integrator name. CVODE uses adaptive stepping internally, regardless of whether the output is gridded or not. The size of these internal steps is controlled by the tolerances, both absolute and relative.
To set integrator settings use r.integrator.<setting-name> = <value> or r.integrator.setValue(<setting-name>, <value>). Here are some important settings for the cvode integrator:
variable_step_size: Adaptive step-size integration (True/False).
stiff: Stiff solver for CVODE only (True/False). Enabled by default.
absolute_tolerance: Absolute numerical tolerance for integrator internal stepping.
relative_tolerance: Relative numerical tolerance for integrator internal stepping.
Settings for the gillespie integrator:
seed: The RNG seed for the Gillespie method. You can set this before running a simulation, or leave it alone for a different seed each time. Simulations initialized with the same seed will have the same results.
End of explanation
"""
# simulate from 0 to 6 with 6 points in the result
r.reset()
# pass args explicitly via keywords
res1 = r.simulate(start=0, end=10, points=6)
print(res1)
r.reset()
# use positional args to pass start, end, num. points
res2 = r.simulate(0, 10, 6)
print(res2)
"""
Explanation: Simulation options
The RoadRunner.simulate method is responsible for running simulations using the current integrator. It accepts the following arguments:
start: Start time.
end: End time.
points: Number of points in solution (exclusive with steps, do not pass both). If the output is gridded, the points will be evenly spaced in time. If not, the simulation will stop when it reaches the end time or the number of points, whichever happens first.
steps: Number of steps in solution (exclusive with points, do not pass both).
End of explanation
"""
print('Floating species in model:')
print(r.getFloatingSpeciesIds())
# provide selections to simulate
print(r.simulate(0,10,6, selections=r.getFloatingSpeciesIds()))
r.resetAll()
# try different selections
print(r.simulate(0,10,6, selections=['time','J1']))
"""
Explanation: Selections
The selections list can be used to set which state variables will appear in the output array. By default, it includes all SBML species and the time variable. Selections can be either given as argument to r.simulate.
End of explanation
"""
# show the current values
for s in ['S1', 'S2']:
print('r.{} == {}'.format(s, r[s]))
# reset initial concentrations
r.reset()
print('reset')
# S1 and S2 have now again the initial values
for s in ['S1', 'S2']:
print('r.{} == {}'.format(s, r[s]))
# change a parameter value
print('r.k1 before = {}'.format(r.k1))
r.k1 = 0.1
print('r.k1 after = {}'.format(r.k1))
# reset parameters
r.resetAll()
print('r.k1 after resetAll = {}'.format(r.k1))
"""
Explanation: Reset model variables
To reset the model's state variables use the r.reset() and r.reset(SelectionRecord.*) functions. If you have made modifications to parameter values, use the r.resetAll() function to reset parameters to their initial values when the model was loaded.
End of explanation
"""
|
sujitpal/polydlot | src/tf-serving/01a-mnist-cnn-keras-in-tf.ipynb | apache-2.0 | from __future__ import division, print_function
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score, confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
import os
import shutil
import tensorflow as tf
%matplotlib inline
DATA_DIR = "../../data"
TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv")
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
OUTPUT_DATA_DIR = os.path.join(DATA_DIR, "01-mnist-cnn")
LOG_DIR = os.path.join(OUTPUT_DATA_DIR, "logs")
MODEL_FILE = os.path.join(OUTPUT_DATA_DIR, "model")
IMG_SIZE = 28
LEARNING_RATE = 0.001
BATCH_SIZE = 128
NUM_CLASSES = 10
NUM_EPOCHS = 5
"""
Explanation: MNIST Digit Recognition - Hybrid CNN w/Keras in TF
MNIST Digit Recognition built using Keras built into Tensorflow.
End of explanation
"""
def parse_file(filename):
xdata, ydata = [], []
fin = open(filename, "rb")
i = 0
for line in fin:
if i % 10000 == 0:
print("{:s}: {:d} lines read".format(
os.path.basename(filename), i))
cols = line.strip().split(",")
ydata.append(int(cols[0]))
xdata.append(np.reshape(np.array([float(x) / 255.
for x in cols[1:]]), (IMG_SIZE, IMG_SIZE, 1)))
i += 1
fin.close()
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
y = np.array(ydata)
X = np.array(xdata)
return X, y
Xtrain, ytrain = parse_file(TRAIN_FILE)
Xtest, ytest = parse_file(TEST_FILE)
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
def datagen(X, y, batch_size=BATCH_SIZE, num_classes=NUM_CLASSES):
ohe = OneHotEncoder(n_values=num_classes)
while True:
shuffled_indices = np.random.permutation(np.arange(len(y)))
num_batches = len(y) // batch_size
for bid in range(num_batches):
batch_indices = shuffled_indices[bid*batch_size:(bid+1)*batch_size]
Xbatch = np.zeros((batch_size, X.shape[1], X.shape[2], X.shape[3]))
Ybatch = np.zeros((batch_size, num_classes))
for i in range(batch_size):
Xbatch[i] = X[batch_indices[i]]
Ybatch[i] = ohe.fit_transform(y[batch_indices[i]]).todense()
yield Xbatch, Ybatch
self_test_gen = datagen(Xtrain, ytrain)
Xbatch, Ybatch = self_test_gen.next()
print(Xbatch.shape, Ybatch.shape)
"""
Explanation: Prepare Data
End of explanation
"""
sess = tf.Session()
tf.contrib.keras.backend.set_session(sess)
with tf.name_scope("data"):
X = tf.placeholder(tf.float32, [None, IMG_SIZE, IMG_SIZE, 1], name="X")
Y = tf.placeholder(tf.float32, [None, NUM_CLASSES], name="Y")
model = tf.contrib.keras.models.Sequential()
model.add(tf.contrib.keras.layers.Conv2D(32, (3, 3), activation="relu",
input_shape=(IMG_SIZE, IMG_SIZE, 1)))
model.add(tf.contrib.keras.layers.Conv2D(64, (3, 3), activation="relu"))
model.add(tf.contrib.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.contrib.keras.layers.Dropout(0.25))
model.add(tf.contrib.keras.layers.Flatten())
model.add(tf.contrib.keras.layers.Dense(128, activation="relu"))
model.add(tf.contrib.keras.layers.Dropout(0.5))
model.add(tf.contrib.keras.layers.Dense(NUM_CLASSES, activation="softmax"))
Y_ = model(X)
loss = tf.reduce_mean(tf.contrib.keras.losses.categorical_crossentropy(Y, Y_))
accuracy = tf.reduce_mean(tf.contrib.keras.metrics.categorical_accuracy(Y, Y_))
optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(loss)
init_op = tf.global_variables_initializer()
sess.run(init_op)
shutil.rmtree(OUTPUT_DATA_DIR)
tf.summary.scalar("loss", loss)
tf.summary.scalar("accuracy", accuracy)
# Merge all summaries into a single op
summary = tf.summary.merge_all()
"""
Explanation: Define Network
The network is defined using Keras. The loss and accuracy also use Keras functions. However, we use a Tensorflow optimizer, as well as execute the whole thing in the context of a Tensorflow session. Note that we need to set the Keras session and pass in the value of learning_phase during training and evaluation.
We also use the SummaryWriter to log the loss and accuracy at each step so they can be viewed using Tensorboard.
Finally, and most importantly for our Tensorflow Serving experiment, we use the Tensorflow Saver to save the model in Tensorflow format.
End of explanation
"""
with sess.as_default():
saver = tf.train.Saver()
logger = tf.summary.FileWriter(LOG_DIR, sess.graph)
train_gen = datagen(Xtrain, ytrain, BATCH_SIZE)
num_batches = len(Xtrain) // BATCH_SIZE
for epoch in range(NUM_EPOCHS):
total_loss, total_acc = 0, 0
for bid in range(num_batches):
Xbatch, Ybatch = train_gen.next()
_, batch_loss, batch_acc, batch_summary = sess.run(
[optimizer, loss, accuracy, summary],
feed_dict={X: Xbatch, Y: Ybatch, tf.contrib.keras.backend.learning_phase(): 1})
# write to tensorboard
logger.add_summary(batch_summary, epoch * num_batches + bid)
# accumulate to print once per epoch
total_acc += batch_acc
total_loss += batch_loss
total_acc /= num_batches
total_loss /= num_batches
print("Epoch {:d}/{:d}: loss={:.3f}, accuracy={:.3f}".format(
(epoch + 1), NUM_EPOCHS, total_loss, total_acc))
saver.save(sess, MODEL_FILE, (epoch + 1))
logger.close()
"""
Explanation: Train Network
End of explanation
"""
BEST_MODEL = os.path.join(OUTPUT_DATA_DIR, "model-5")
saver = tf.train.Saver()
ys, ys_ = [], []
with sess.as_default():
sess.run(tf.global_variables_initializer())
saver.restore(sess, BEST_MODEL)
test_gen = datagen(Xtest, ytest, BATCH_SIZE)
val_loss, val_acc = 0., 0.
num_batches = len(Xtrain) // BATCH_SIZE
for _ in range(num_batches):
Xbatch, Ybatch = test_gen.next()
Ybatch_ = sess.run(Y_, feed_dict={X: Xbatch,
tf.contrib.keras.backend.learning_phase(): 0})
ys.extend(np.argmax(Ybatch, axis=1))
ys_.extend(np.argmax(Ybatch_, axis=1))
acc = accuracy_score(ys_, ys)
cm = confusion_matrix(ys_, ys)
print("Accuracy: {:.4f}".format(acc))
print("Confusion Matrix")
print(cm)
"""
Explanation: Visualize Training logs via Tensorboard
On the command line, run following commands:
cd ../../data/01-tf-serving
tensorboard --logdir=logs
Control-Click on http://localhost:6006 to see loss and accuracy plots on the browser.
Here are (representative) images from tensorboard for the accuracy and loss.
<img src="01a-tensorboard-lossplot.png"/>
Evaluate Network
End of explanation
"""
|
nwilbert/async-examples | notebook/aio36.ipynb | mit | import asyncio
loop = asyncio.get_event_loop()
"""
Explanation: asyncio IO Loop
Create an event loop (which automatically becomes the default event loop in the context).
End of explanation
"""
def hello_world():
print('Hello World!')
loop.stop()
loop.call_soon(hello_world)
loop.run_forever()
"""
Explanation: Run a simple callback as soon as possible:
End of explanation
"""
async def aprint(text):
await asyncio.sleep(1)
print(text)
return 42
loop.run_until_complete(aprint('Hello world!'))
"""
Explanation: Coroutine Examples
Coroutines can be scheduled directly in the event loop.
End of explanation
"""
async def aprint_twice(text):
await asyncio.sleep(1)
print(text)
await asyncio.sleep(1)
print(text + ' (once more)')
return 42
loop.run_until_complete(aprint_twice('Hello world!'))
"""
Explanation: You can use as many awaits as you like in a coroutine:
End of explanation
"""
async def print_parity():
    for i in range(1, 7):
        await asyncio.sleep(0.5)
        if i % 2 == 0:
            print('even')
        else:
            print('uneven, waiting some more...')
            await asyncio.sleep(1)

loop.run_until_complete(print_parity())
"""
Explanation: All normal control structures can be used:
End of explanation
"""
async def raiser():
await asyncio.sleep(1)
raise ValueError()
async def catcher():
try:
await raiser()
except ValueError:
print('caught something')
loop.run_until_complete(catcher())
"""
Explanation: Exceptions work just like you would expect
End of explanation
"""
tasks = asyncio.gather(aprint('Task 1'), aprint('Task 2'))
loop.run_until_complete(tasks)
"""
Explanation: Multiple Coroutines can be combined and executed concurrently:
End of explanation
"""
async def remember_me():
print('I started.')
await aprint('Did I forget something?')
a = remember_me()
"""
Explanation: Note that this only took one second, not two!
Automatic Checks
End of explanation
"""
a = 42
"""
Explanation: Note that nothing happens as long as the coroutine is not awaited.
Even the synchronous print is not executed.
End of explanation
"""
a = aprint('Did I forget something?')
loop.run_until_complete(a)
del(a)
"""
Explanation: Not awaiting a coroutine triggers a "coroutine was never awaited" runtime warning.
Awaiting a coroutine "later" is ok though.
End of explanation
"""
async def fail():
await aprint
loop.run_until_complete(fail())
"""
Explanation: Awaiting something that is not awaitable raises an error.
End of explanation
"""
from motor.motor_asyncio import AsyncIOMotorClient
collection = AsyncIOMotorClient().aiotest.test
loop.run_until_complete(collection.insert({'value': i} for i in range(10)))
"""
Explanation: Async for-loop
Prepare a simple MongoDB collection to show this feature.
End of explanation
"""
async def f():
async for doc in collection.find():
print(doc)
loop.run_until_complete(f())
loop.run_until_complete(collection.drop())
"""
Explanation: The async for-loop saves us the boilerplate code to await each next value. Note that it runs sequentially (i.e., the elements are fetched after each other).
End of explanation
"""
class AsyncContextManager:
async def __aenter__(self):
await aprint('entering context')
async def __aexit__(self, exc_type, exc, tb):
await aprint('exiting context')
async def use_async_context():
async with AsyncContextManager():
print('Hello World!')
loop.run_until_complete(use_async_context())
"""
Explanation: Async Context Manager
End of explanation
"""
lock = asyncio.Lock()
async def use_lock():
async with lock:
await asyncio.sleep(1)
print('one after the other...')
tasks = asyncio.gather(use_lock(), use_lock())
loop.run_until_complete(tasks)
"""
Explanation: One example is using locks (even though this doesn't require async exiting).
End of explanation
"""
|
gangadhara691/gangadhara691.github.io | P5 machine_learning/report_p5.ipynb | mit | #!/usr/bin/python
import sys
import pickle
sys.path.append("../tools/")
from feature_format import featureFormat, targetFeatureSplit
from tester import dump_classifier_and_data
### Task 1: Select what features you'll use.
### features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".
financial_features = ['salary', 'deferral_payments', 'total_payments',
'loan_advances', 'bonus', 'restricted_stock_deferred', 'deferred_income',
'total_stock_value', 'expenses', 'exercised_stock_options', 'other',
'long_term_incentive', 'restricted_stock', 'director_fees']#(all units are in US dollars)
email_features = ['to_messages', 'from_poi_to_this_person', 'email_address',
'from_messages', 'from_this_person_to_poi', 'shared_receipt_with_poi']
#(units are generally number of emails messages;
#notable exception is ‘email_address’, which is a text string)
poi_label = ['poi']# (boolean, represented as integer)
features_list = poi_label + financial_features + email_features
### Load the dictionary containing the dataset
with open("final_project_dataset.pkl", "r") as data_file:
data_dict = pickle.load(data_file)
import pandas as pd
import numpy as np
enron= pd.DataFrame.from_dict(data_dict, orient = 'index')
print"total poi in dataset:", sum(enron['poi']==1)
#enron.describe()
#We can see all the null values for each index
enron = enron.replace('NaN', np.nan)
print(enron.info())
enron.describe()
"""
Explanation: Q1
### Summarize for us the goal of this project and how machine learning is useful in trying to accomplish it. As part of your answer, give some background on the dataset and how it can be used to answer the project question. Were there any outliers in the data when you got it, and how did you handle those?
Introduction to Enron Dataset
Enron was one of the largest companies in the United States; it collapsed into bankruptcy due to corporate fraud, one of the largest bankruptcies in U.S. history. In the resulting federal investigation, a significant amount of typically confidential information entered the public record, including tens of thousands of emails and detailed financial data for top executives.
The main goal is to identify persons of interest (POIs) using supervised machine learning. This model will classify whether an individual is a POI or a non-POI using the rest of the available features and various machine learning algorithms.
End of explanation
"""
# now checking these features for POIs to see how much data is missing for the POIs
missing = ['loan_advances', 'director_fees', 'restricted_stock_deferred',\
'deferral_payments', 'deferred_income', 'long_term_incentive']
enron_poi=enron[enron['poi']==1][missing]
enron_poi.info()
# its better to remove these with less non null values
removing = ['loan_advances', 'director_fees', 'restricted_stock_deferred']
for x in removing:
if x in features_list:
features_list.remove(x)
features_list
### Task 2: Remove outliers
#visualising the outlier
import matplotlib.pyplot
e = enron[(enron.total_payments != np.nan) & (enron.total_stock_value != np.nan)]
matplotlib.pyplot.scatter(x="total_payments", y="total_stock_value", data=e)
matplotlib.pyplot.xlabel("total_payments")
matplotlib.pyplot.ylabel("total_stock_value")
matplotlib.pyplot.show()
# removing outlier
enron.total_payments.idxmax()
# dropping TOTAL; it must be a spreadsheet artifact
enron=enron.drop("TOTAL")
#data_dict.pop( 'TOTAL', 0 )
e = enron[(enron.total_payments != np.nan) & (enron.total_stock_value != np.nan)]
matplotlib.pyplot.scatter(x="total_payments", y="total_stock_value", data=e)
matplotlib.pyplot.xlabel("total_payments")
matplotlib.pyplot.ylabel("total_stock_value")
matplotlib.pyplot.show()
enron.total_payments.idxmax()
"""
Explanation: From th above information about dataset we can conlcude that any point with less than 73 non-null will be having more than 50%
of missing data.
And the features seem to be in more than 50% null group
|Feature|No.of non-null out of 146|
|---|---|
|deferral_payments | 39 non-null|
|restricted_stock_deferred | 18 non-null|
|loan_advances | 4 non-null|
|director_fees | 17 non-null |
|deferred_income | 49 non-null |
|long_term_incentive | 66 non-null |
End of explanation
"""
#After observing insiderpay.pdf file, I got to know it is not a person so we have to remove THE TRAVEL AGENCY IN THE PARK.
enron=enron.drop("THE TRAVEL AGENCY IN THE PARK")
enron[enron[financial_features].isnull().all(axis=1)].index
#There is 1 person without any financial data that will also need to be removed.
enron=enron.drop( 'LOCKHART EUGENE E')
enron = enron.replace(np.nan, 'NaN') # since to use tester code, i needed to convert back to "NaN"
data_dict = enron[features_list].to_dict(orient = 'index')
from tester import test_classifier
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
if "email_address" in features_list:
features_list.remove("email_address")
feat=features_list
else :
feat=features_list
test_classifier(clf, data_dict, feat)
"""
Explanation: LAY KENNETH L is the next extreme value, but it is a valid data point, so we keep it.
End of explanation
"""
### Task 3: Create new feature(s)
### Store to my_dataset for easy export below.
my_dataset = data_dict
#Adding three new features
for key, value in my_dataset.items():
if value['from_messages'] == 'NaN' or value['from_this_person_to_poi'] == 'NaN':
value['person_to_poi/total_msgs'] = 0.0
else:
value['person_to_poi/total_msgs'] = value['from_this_person_to_poi'] / (1.0*value['from_messages'])
if value['to_messages'] == 'NaN' or value['from_poi_to_this_person'] == 'NaN':
value['poi_to_person/to_msgs'] = 0.0
else:
value['poi_to_person/to_msgs'] = value['from_poi_to_this_person'] / (1.0*value['to_messages'])
if value['shared_receipt_with_poi'] == 'NaN' or value['from_poi_to_this_person'] == 'NaN' \
or value['from_this_person_to_poi'] == 'NaN':
value['total_poi_interaction'] = 0.0
else:
value['total_poi_interaction'] = value['shared_receipt_with_poi'] + \
value['from_this_person_to_poi'] + \
value['from_poi_to_this_person']
features_new_list=features_list+['person_to_poi/total_msgs','poi_to_person/to_msgs','total_poi_interaction']
"""
Explanation: Q2
What features did you end up using in your POI identifier, and what selection process did you use to pick them? Did you have to do any scaling? Why or why not? As part of the assignment, you should attempt to engineer your own feature that does not come ready-made in the dataset -- explain what feature you tried to make, and the rationale behind it. (You do not necessarily have to use it in the final analysis, only engineer and test it.) In your feature selection step, if you used an algorithm like a decision tree, please also give the feature importances of the features that you use, and if you used an automated feature selection function like SelectKBest, please report the feature scores and reasons for your choice of parameter values.
End of explanation
"""
#Selectkbest used to rank the features
data = featureFormat(my_dataset, features_new_list, sort_keys = True)
labels, features = targetFeatureSplit(data)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif, chi2
selector = SelectKBest(k='all').fit(features, labels)
results = pd.DataFrame(selector.scores_,
index=features_new_list[1:])
results.columns = ['Importances']
results = results.sort(['Importances'], ascending=False)
results
from tester import test_classifier
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
if "email_address" in features_new_list:
    features_new_list.remove("email_address")
test_classifier(clf, my_dataset, features_new_list)
"""
Explanation: I tried to make features of ratios :
$$\frac{from-this-person-to-poi}{from-messages}$$
$$\frac{from-poi-to-this-person}{to-messages}$$
and
$$total-poi-interaction = (shared-receipt-with-poi) + (from-this-person-to-poi) + (from-poi-to-this-person) $$
These features were selected because a POI tends to be in contact with other POIs, which should give these ratio features higher importance. The total interaction with POIs likewise gives a better chance of identifying a POI.
End of explanation
"""
### Task 4: Try a varity of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines.
# Provided to give you a starting point. Try a variety of classifiers.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif, chi2
from sklearn.feature_selection import RFE
from sklearn import tree
from sklearn.svm import SVC, SVR
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.cross_validation import train_test_split, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
#from sklearn.model_selection import RandomizedSearchCV
parameters={}
parameters["DecisionTreeClassifier"] = [{'min_samples_split': [2,3], 'criterion': [ 'entropy']}]
parameters["GaussianNB"] = [{ 'selection__k':[9,10,11], 'pca__n_components': [2,3,4,5] }]
parameters["SVC"] = [{'selection__k':[11], 'svc__kernel': ['rbf',"sigmoid"], 'svc__C': [x/1.0 for x in range(1, 100,10)]
,'svc__gamma':[0.1**(x) for x in range(1,9)]}]
parameters["AdaBoostClassifier"] = [{ "base_estimator":[DecisionTreeClassifier(min_samples_split= 2, criterion= 'entropy')],'learning_rate' : [x/30.0 for x in range(1, 30)],'n_estimators' : range(1,100,20),\
'algorithm': ['SAMME','SAMME.R'] }]
parameters["KNeighborsClassifier"] = [{'selection__k': [10,11], "knn__p":range(3,4),'pca__n_components': [2,3,4,5],"knn__n_neighbors": range(1,10), 'knn__weights': ['uniform','distance'] ,'knn__algorithm': ['ball_tree','kd_tree','brute']}]
pipe={}
pipe["DecisionTreeClassifier"] = DecisionTreeClassifier()
pipe["GaussianNB"] = Pipeline([('scaler', MinMaxScaler()),('selection', SelectKBest()),('pca', PCA()),('naive_bayes', GaussianNB())])
pipe["SVC"] =Pipeline([('selection', SelectKBest()),('scaler', StandardScaler())
,('svc', SVC())])
pipe["AdaBoostClassifier"] = AdaBoostClassifier()
pipe["KNeighborsClassifier"] = Pipeline([('selection',SelectKBest()),
('pca', PCA()),('knn', KNeighborsClassifier())])
### Task 5: Tune your classifier to achieve better than .3 precision and recall
### using our testing script. Check the tester.py script in the final project
### folder for details on the evaluation method, especially the test_classifier
### function. Because of the small size of the dataset, the script uses
### stratified shuffle split cross validation. For more info:
### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html
data = featureFormat(my_dataset, features_new_list, sort_keys = True)
labels, features = targetFeatureSplit(data)
# Example starting point. Try investigating other evaluation techniques!
from sklearn.cross_validation import train_test_split
features_train, features_test, labels_train, labels_test = \
train_test_split(features, labels, test_size=0.3, random_state=42)
for clf in [DecisionTreeClassifier(), SVC(), AdaBoostClassifier(),
            GaussianNB(), KNeighborsClassifier()]:
    clf_name = clf.__class__.__name__
    grid = GridSearchCV(estimator=pipe[clf_name], param_grid=parameters[clf_name],
                        cv=StratifiedKFold(labels_train, n_folds=6),
                        n_jobs=-1, scoring='f1')
    grid.fit(features_train, labels_train)
    print clf_name
    test_classifier(grid.best_estimator_, my_dataset, features_new_list)
"""
Explanation: - We can clearly see that the performance values increase, which indicates these engineered features would be good predictors
- From the above we can observe the ranking of the various features; on adding the engineered features there is a clear increase in accuracy, precision, and recall
- SelectKBest was used to rank and select the features.
- GridSearchCV was used to select the appropriate value of k in SelectKBest to give the maximum F1 score for the classifier.
Q3
What algorithm did you end up using? What other one(s) did you try? How did model performance differ between algorithms?
End of explanation
"""
### Task 6: Dump your classifier, dataset, and features_list so anyone can
### check your results. You do not need to change anything below, but make sure
### that the version of poi_id.py that you submit can be run on its own and
### generates the necessary .pkl files for validating your results.
dump_classifier_and_data(clf, my_dataset, features_list)
"""
Explanation: Several different classifiers are deployed:
decision tree
k-nearest neighbors
support vector machine
Gaussian naive Bayes
AdaBoost
There was not much change in accuracy.
The highest accuracy and precision were for k-nearest neighbors, while the highest recall was for the decision tree. I ended up with really high precision scores for k-nearest neighbors; unfortunately, the corresponding recall scores weren't as high, and almost all of them were roughly equal. The F1 score was highest for the KNN classifier.
End of explanation
"""
|
jwyang/joint-unsupervised-learning | matlab/approaches/nmf-deep/Deep-Semi-NMF-master/Deep Semi-NMF.ipynb | mit | %load_ext autoreload
%autoreload 2
%matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import sklearn.metrics
from sklearn.cluster import KMeans
from dsnmf import DSNMF, appr_seminmf
from scipy.io import loadmat
mat = loadmat('PIE_pose27.mat', struct_as_record=False, squeeze_me=True)
data, gnd = mat['fea'].astype('float32'), mat['gnd']
# Normalise each feature to have an l2-norm equal to one.
data /= np.linalg.norm(data, 2, 1)[:, None]
"""
Explanation: Deep Semi-NMF demo the CMU PIE Pose dataset
End of explanation
"""
n_classes = np.unique(gnd).shape[0]
kmeans = KMeans(n_classes, precompute_distances=False)
"""
Explanation: In order to evaluate the different features we will use a simple k-means clustering with the only assumption of knowing the true number of classes existing in the dataset.
End of explanation
"""
def evaluate_nmi(X):
pred = kmeans.fit_predict(X)
score = sklearn.metrics.normalized_mutual_info_score(gnd, pred)
return score
"""
Explanation: Using the cluster indicators for each data sample we then use the normalised mutual information score to evaluate the similarity between the predicted labels and the ground truth labels.
End of explanation
"""
print("K-means on the raw pixels has an NMI of {:.2f}%".format(100 * evaluate_nmi(data)))
from sklearn.decomposition import PCA
fea = PCA(100).fit_transform(data)
score = evaluate_nmi(fea)
print("K-means clustering using the top 100 eigenvectors has an NMI of {:.2f}%".format(100 * score))
"""
Explanation: First we will perform k-means clustering on the raw feature space.
It will take some time, depending on your setup.
End of explanation
"""
Z, H = appr_seminmf(data.T, 100) # seminmf expects a num_features x num_samples matrix
print("K-means clustering using the Semi-NMF features has an NMI of {:.2f}%".format(100 * evaluate_nmi(H.T)))
"""
Explanation: Now use a single layer DSNMF model -- i.e. Semi-NMF
Semi-NMF factorisation decomposes the original data-matrix
$$\mathbf X \approx \mathbf Z \mathbf H$$
subject to the elements of H being non-negative. The objective function of Semi-NMF is closely related
to that of K-means clustering. In fact, if we had a matrix ${\mathbf H}$ that was composed only of zeros and ones (i.e., a binary matrix) then this would be exactly equivalent to K-means clustering. Instead, Semi-NMF only forces the elements to be non-negative and thus can be seen as a soft clustering method where the features matrix describes the compatibility of each component with a cluster centroid, a base in $\mathbf Z$.
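To make the update mechanics concrete, here is a minimal Semi-NMF sketch in Python 3 (an illustrative re-implementation based on multiplicative updates, not the `appr_seminmf` routine this notebook actually calls; the toy data and dimensions are made up):

```python
import numpy as np

def semi_nmf(X, k, n_iter=100, eps=1e-9, seed=0):
    """Minimal Semi-NMF: X (features x samples) may be mixed-sign;
    only H is constrained to be non-negative."""
    rng = np.random.RandomState(seed)
    H = np.abs(rng.randn(k, X.shape[1]))      # non-negative initialisation
    for _ in range(n_iter):
        # Z-update: unconstrained least squares, Z = X H^T (H H^T)^+
        Z = X @ H.T @ np.linalg.pinv(H @ H.T)
        # H-update: multiplicative rule built from positive/negative parts
        A, B = Z.T @ X, Z.T @ Z
        Ap, An = np.maximum(A, 0), np.maximum(-A, 0)
        Bp, Bn = np.maximum(B, 0), np.maximum(-B, 0)
        H *= np.sqrt((Ap + Bn @ H) / (An + Bp @ H + eps))
    return Z, H

X_toy = np.random.RandomState(1).randn(30, 50)  # made-up mixed-sign data
Z_toy, H_toy = semi_nmf(X_toy, 5)
```

The factorisation drives the reconstruction error below the norm of the data while keeping every entry of `H_toy` non-negative, which is the "soft clustering" reading above.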
End of explanation
"""
dsnmf = DSNMF(data, layers=(400, 100))
"""
Explanation: Not bad! That's a huge improvement over using k-means
on the raw pixels!
Let's try doing the same with a Deep Semi-NMF model with more than one
layer.
Initialize a Deep Semi-NMF model with 2 layers
In Semi-NMF the goal is to construct a low-dimensional representation $\mathbf H^+$ of our original data $\mathbf X^\pm$, with the bases matrix $\mathbf Z^\pm$ serving as the mapping between our original data and its lower-dimensional representation.
In many cases the data we wish to analyse is often rather complex and has a collection of distinct, often unknown, attributes. In this example, we deal with datasets of human faces where the variability in the data does not only stem from the difference in the appearance of the subjects, but also from other attributes, such as the pose of the head in relation to the camera, or the facial expression of the subject. The multi-attribute nature of our data calls for a hierarchical framework that is better at representing it than a shallow Semi-NMF.
$$ \mathbf X^{\pm} \approx {\mathbf Z}_1^{\pm}{\mathbf Z}_2^{\pm}\cdots{\mathbf Z}_m^{\pm}{\mathbf H}^+_m $$
In this example we have a 2-layer network ($m=2$), with $\mathbf Z_1 \in \mathbb{R}^{1024\times 400}$, $\mathbf Z_2 \in \mathbb{R}^{400 \times 100}$, and $\mathbf H_2 \in \mathbb{R}^{100 \times 2856}$
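As a quick sanity check of the dimension bookkeeping (a sketch only; random matrices merely stand in for the learned factors):

```python
import numpy as np

rng = np.random.RandomState(0)
n_pixels, n_samples = 1024, 2856        # dimensions quoted above
Z1 = rng.randn(n_pixels, 400)           # first-layer bases (mixed sign)
Z2 = rng.randn(400, 100)                # second-layer bases (mixed sign)
H2 = np.abs(rng.randn(100, n_samples))  # final features, non-negative
X_hat = Z1 @ Z2 @ H2                    # reconstruction has the data's shape
```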
End of explanation
"""
for epoch in range(1000):
residual = float(dsnmf.train_fun())
print("Epoch {}. Residual [{:.2f}]".format(epoch, residual), end="\r")
"""
Explanation: Train the model
End of explanation
"""
fea = dsnmf.get_features().T # this is the last layers features i.e. h_2
pred = kmeans.fit_predict(fea)
score = sklearn.metrics.normalized_mutual_info_score(gnd, pred)
print("NMI: {:.2f}%".format(100 * score))
"""
Explanation: Evaluate it in terms of clustering performance using
the normalised mutual information score.
End of explanation
"""
|
unpingco/Python-for-Probability-Statistics-and-Machine-Learning | chapters/probability/notebooks/intro.ipynb | mit | d={(i,j):i+j for i in range(1,7) for j in range(1,7)}
"""
Explanation: Python for Probability, Statistics, and Machine Learning
This chapter takes a geometric view of probability theory and relates it to
familiar concepts in linear algebra and geometry. This approach connects your
natural geometric intuition to the key abstractions in probability that can
help guide your reasoning. This is particularly important in probability
because it is easy to be misled. We need a bit of rigor and some
intuition to guide us.
In grade school, you were introduced to the natural numbers (i.e., 1,2,3,..)
and you learned how to manipulate them by operations like addition,
subtraction, and multiplication. Later, you were introduced to positive and
negative numbers and were again taught how to manipulate them. Ultimately, you
were introduced to the calculus of the real line, and learned how to
differentiate, take limits, and so on. This progression provided more
abstractions, but also widened the field of problems you could successfully
tackle. The same is true of probability. One way to think about probability is
as a new number concept that allows you to tackle problems that have a special
kind of uncertainty built into them. Thus, the key idea is that there is some
number, say $x$, with a traveling companion, say, $f(x)$, and this companion
represents the uncertainties about the value of $x$ as if looking at the number
$x$ through a frosted window. The degree of opacity of the window is
represented by $f(x)$. If we want to manipulate $x$, then we have to figure
out what to do with $f(x)$. For example if we want $y= 2 x $, then we have to
understand how $f(x)$ generates $f(y)$.
Where is the random part? To conceptualize this, we need still another
analogy: think about a beehive with the swarm around it representing $f(x)$,
and the hive itself, which you can barely see through the swarm, as $x$. The
random piece is you don't know which bee in particular is going to sting you!
Once this happens the uncertainty evaporates.
Up until that happens, all we have is a concept of a swarm (i.e., density of
bees) which represents a potentiality of which bee will ultimately sting.
In summary, one way to think about probability is as a way of carrying through
mathematical reasoning (e.g., adding, subtracting, taking
limits) with a notion of potentiality that is so-transformed by these
operations.
Understanding Probability Density
In order to understand the heart of modern probability, which is built
on the Lebesgue theory of integration, we need to extend the concept
of integration from basic calculus. To begin, let us consider the
following piecewise function
$$
f(x) = \begin{cases}
1 & \mbox{if } 0 < x \leq 1 \\
2 & \mbox{if } 1 < x \leq 2 \\
0 & \mbox{otherwise }
\end{cases}
$$
as shown in Figure. In calculus, you learned
Riemann integration, which you can apply here as
<!-- dom:FIGURE: [fig-probability/intro_001.jpg, width=500 frac=0.75] <div id="fig:intro_001"></div> -->
<!-- begin figure -->
<div id="fig:intro_001"></div>
<p></p>
<img src="fig-probability/intro_001.jpg" width=500>
<!-- end figure -->
$$
\int_0^2 f(x) dx = 1 + 2 = 3
$$
which has the usual interpretation as the area of the two rectangles
that make up $f(x)$. So far, so good.
With Lebesgue integration, the idea is very similar except that we
focus on the y-axis instead of moving along the x-axis. The question
is given $f(x) = 1$, what is the set of $x$ values for which this is
true? For our example, this is true whenever $x\in (0,1]$. So now we
have a correspondence between the values of the function (namely, 1
and 2) and the sets of $x$ values for which this is true, namely,
$\lbrace (0,1] \rbrace$ and $\lbrace (1,2] \rbrace$, respectively. To
compute the integral, we simply take the function values (i.e., 1,2)
and some way of measuring the size of the corresponding interval
(i.e., $\mu$) as in the following:
$$
\int_0^2 f d\mu = 1 \mu(\lbrace (0,1] \rbrace) + 2 \mu(\lbrace (1,2] \rbrace)
$$
We have suppressed some of the notation above to emphasize generality. Note
that we obtain the same value of the integral as in the Riemann case when
$\mu((0,1]) = \mu((1,2]) = 1$. By introducing the $\mu$ function as a way of
measuring the intervals above, we have introduced another degree of freedom in
our integration. This accommodates many weird functions that are not tractable
using the usual Riemann theory, but we refer you to a proper introduction to
Lebesgue integration for further study [jones2001lebesgue]. Nonetheless,
the key step in the above discussion is the introduction of the $\mu$ function,
which we will encounter again as the so-called probability density function.
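This level-set bookkeeping is easy to mimic in code (a sketch; the measure of an interval here is simply its length):

```python
# Level sets of the piecewise f: function value -> interval on which f takes it
level_sets = {1: (0.0, 1.0), 2: (1.0, 2.0)}

def mu(interval):
    # Measure of an interval is just its length here
    a, b = interval
    return b - a

# Lebesgue-style sum: each function value times the measure of its level set
integral = sum(v * mu(s) for v, s in level_sets.items())  # 1*1 + 2*1 = 3
```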
Random Variables
Most introductions to probability jump straight into random variables and
then explain how to compute complicated integrals. The problem with this
approach is that it skips over some of the important subtleties that we will now
consider. Unfortunately, the term random variable is not very descriptive. A
better term is measurable function. To understand why this is a better term,
we have to dive into the formal constructions of probability by way of a simple
example.
Consider tossing a fair six-sided die. There are only six outcomes possible,
$$
\Omega=\lbrace 1,2,3,4,5,6 \rbrace
$$
As we know, if the die is fair, then the probability of each outcome is $1/6$.
To say this formally, the measure of each set (i.e., $\lbrace 1 \rbrace,\lbrace
2 \rbrace,\ldots,\lbrace 6 \rbrace$) is $\mu(\lbrace 1 \rbrace ) =\mu(\lbrace 2
\rbrace ) \ldots = \mu(\lbrace 6 \rbrace ) = 1/6$. In this case, the $\mu$
function we discussed earlier is the usual probability mass function, denoted by
$\mathbb{P}$. The measurable function maps a set into a
number on the real line. For example, $ \lbrace 1 \rbrace \mapsto 1 $ is
one such uninteresting function.
Now, here's where things get interesting. Suppose you were asked to construct a
fair coin from the fair die. In other words, we want to throw the die and then
record the outcomes as if we had just tossed a fair coin. How could we do this?
One way would be to define a measurable function that says if the die comes up
3 or less, then we declare heads and otherwise declare tails. This has
some strong intuition behind it, but let's articulate it in terms of formal
theory. This strategy creates two different non-overlapping sets $\lbrace
1,2,3 \rbrace$ and $\lbrace 4,5,6 \rbrace$. Each set has the same probability
measure,
$$
\begin{eqnarray}
\mathbb{P}(\lbrace 1,2,3 \rbrace) & = & 1/2 \\
\mathbb{P}(\lbrace 4,5,6 \rbrace) & = & 1/2
\end{eqnarray}
$$
And the problem is solved. Every time the die comes up
$\lbrace 1,2,3 \rbrace$, we record heads and record tails otherwise.
Is this the only way to construct a fair coin experiment from a
fair die? Alternatively, we can define the sets as $\lbrace 1 \rbrace$,
$\lbrace 2 \rbrace$, $\lbrace 3,4,5,6 \rbrace$. If we define the corresponding
measure for each set as the following
$$
\begin{eqnarray}
\mathbb{P}(\lbrace 1 \rbrace) & = & 1/2 \\
\mathbb{P}(\lbrace 2 \rbrace) & = & 1/2 \\
\mathbb{P}(\lbrace 3,4,5,6 \rbrace) & = & 0
\end{eqnarray}
$$
then, we have another solution to the fair coin problem. To
implement this, all we do is ignore every time the die shows 3,4,5,6 and
throw again. This is wasteful, but it solves the problem. Nonetheless,
we hope you can see how the interlocking pieces of the theory provide a
framework for carrying the notion of uncertainty/potentiality from one problem
to the next (e.g., from the fair die to the fair coin).
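The first construction is easy to simulate (sketched in Python 3 syntax, whereas this chapter's own code is Python 2; the seed is arbitrary, used only for reproducibility):

```python
import random

def coin_from_die(rng):
    # First construction: faces {1,2,3} -> heads, faces {4,5,6} -> tails
    return 'H' if rng.randint(1, 6) <= 3 else 'T'

rng = random.Random(0)
tosses = [coin_from_die(rng) for _ in range(100000)]
frac_heads = tosses.count('H') / len(tosses)  # close to 1/2 for a fair coin
```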
Let's consider a slightly more interesting problem where we toss two dice. We
assume that each throw is independent, meaning that the outcome of one does
not influence the other. What are the sets in this case? They are all pairs
of possible outcomes from two throws as shown below,
$$
\Omega = \lbrace (1,1),(1,2),\ldots,(5,6),(6,6) \rbrace
$$
What are the measures of each of these sets? By virtue of the
independence claim, the measure of each is the product of the respective measures
of each element. For instance,
$$
\mathbb{P}((1,2)) = \mathbb{P}(\lbrace 1 \rbrace) \mathbb{P}(\lbrace 2 \rbrace) = \frac{1}{6^2}
$$
With all that established, we can ask the following
question: what is the probability that the sum of the dice equals
seven? As before, the first thing to do is characterize the
measurable function for this as $X:(a,b) \mapsto (a+b)$. Next, we
associate all of the $(a,b)$ pairs with their sum. We can create a
Python dictionary for this as shown,
End of explanation
"""
from collections import defaultdict
dinv = defaultdict(list)
for i,j in d.iteritems():
dinv[j].append(i)
"""
Explanation: The next step is to collect all of the $(a,b)$ pairs that sum to
each of the possible values from two to twelve.
End of explanation
"""
[(1, 6), (2, 5), (5, 2), (6, 1), (4, 3), (3, 4)]
"""
Explanation: Programming Tip.
The defaultdict object from the built-in collections module creates dictionaries with
default values when it encounters a new key. Otherwise, we would have had to
create default values manually for a regular dictionary.
For example, dinv[7] contains the following list of pairs that
sum to seven,
End of explanation
"""
X={i:len(j)/36. for i,j in dinv.iteritems() }
print X
{2: 0.027777777777777776,
3: 0.05555555555555555,
4: 0.08333333333333333,
5: 0.1111111111111111,
6: 0.1388888888888889,
7: 0.16666666666666666,
8: 0.1388888888888889,
9: 0.1111111111111111,
10: 0.08333333333333333,
11: 0.05555555555555555,
12: 0.027777777777777776}
"""
Explanation: The next step is to compute the probability measure for each of these items.
Using the independence assumption, this means we have to compute the sum of the
products of the individual item probabilities in dinv. Because we know that
each outcome is equally likely, every term in the sum equals $1/36$. Thus, all
we have to do is count the number of items in the corresponding list for each
key in dinv and divide by 36. For example, dinv[11] contains [(5, 6),
(6, 5)]. The probability of 5+6=6+5=11 is the probability of this set, which
is the sum of the probabilities of its individual elements
{(5,6),(6,5)}. In this case, we have $\mathbb{P}(11) = \mathbb{P}(\lbrace
(5,6) \rbrace)+ \mathbb{P}(\lbrace (6,5) \rbrace) = 1/36 + 1/36 = 2/36$.
Repeating this procedure for all the elements, we derive the probability mass
function as shown below,
End of explanation
"""
d={(i,j,k):((i*j*k)/2>i+j+k) for i in range(1,7)
for j in range(1,7)
for k in range(1,7)}
"""
Explanation: Programming Tip.
In the preceding code note that 36. is written with
the trailing decimal mark. This is a good habit to get into because division
in Python 2.x is integer division by default, which is not what we want here.
This can be fixed with a top-level from __future__ import division, but
that's easy to forget to do, especially when you are passing code
around and others may not reflexively do the future import.
The above example exposes the elements of probability theory that
are in play for this simple problem while deliberately suppressing some of the
gory technical details. With this framework, we can ask other questions like
what is the probability that half the product of three dice will exceed the
their sum? We can solve this using the same method as in the following. First,
let's create the first mapping,
End of explanation
"""
dinv = defaultdict(list)
for i,j in d.iteritems(): dinv[j].append(i)
"""
Explanation: The keys of this dictionary are the triples and the values are the
logical values of whether or not half the product of three dice exceeds their sum.
Now, we do the inverse mapping to collect the corresponding lists,
End of explanation
"""
X={i:len(j)/6.0**3 for i,j in dinv.iteritems() }
print X
{False: 0.37037037037037035, True: 0.6296296296296297}
"""
Explanation: Note that dinv contains only two keys, True and False. Again,
because the dice are independent, the probability of any triple is $1/6^3$.
Finally, we collect this for each outcome as in the following,
End of explanation
"""
from pandas import DataFrame
d=DataFrame(index=[(i,j) for i in range(1,7) for j in range(1,7)],
columns=['sm','d1','d2','pd1','pd2','p'])
"""
Explanation: Thus, the probability of half the product of three dice exceeding their sum is
136/(6.0**3) = 0.63. The set that is induced by the random variable has only
two elements in it, True and False, with $\mathbb{P}(\mbox{True})=136/216$
and $\mathbb{P}(\mbox{False})=1-136/216$.
As a final example to exercise another layer of generality, let us consider the
first problem with the two dice where we want the probability of a
seven, but this time one of the dice is no longer fair. The distribution for
the unfair die is the following:
$$
\begin{eqnarray}
\mathbb{P}(\lbrace 1\rbrace)=\mathbb{P}(\lbrace 2 \rbrace)=\mathbb{P}(\lbrace 3 \rbrace) = \frac{1}{9} \\
\mathbb{P}(\lbrace 4\rbrace)=\mathbb{P}(\lbrace 5 \rbrace)=\mathbb{P}(\lbrace 6 \rbrace) = \frac{2}{9}
\end{eqnarray}
$$
From our earlier work, we know the elements corresponding to the sum of seven
are the following:
$$
\lbrace (1,6),(2,5),(3,4),(4,3),(5,2),(6,1) \rbrace
$$
Because we still have the independence assumption, all we need to
change is the probability computation of each of elements. For example, given
that the first die is the unfair one, we have
$$
\mathbb{P}((1,6)) = \mathbb{P}(1)\mathbb{P}(6) = \frac{1}{9} \times \frac{1}{6}
$$
and likewise for $(2,5)$ we have the following:
$$
\mathbb{P}((2,5)) = \mathbb{P}(2)\mathbb{P}(5) = \frac{1}{9} \times \frac{1}{6}
$$
and so forth. Summing all of these gives the following:
$$
\mathbb{P}_X(7) = \frac{1}{9} \times \frac{1}{6}
+\frac{1}{9} \times \frac{1}{6}
+\frac{1}{9} \times \frac{1}{6}
+\frac{2}{9} \times \frac{1}{6}
+\frac{2}{9} \times \frac{1}{6}
+\frac{2}{9} \times \frac{1}{6} = \frac{1}{6}
$$
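Before moving to Pandas, this hand computation can be verified exactly with the standard-library `fractions` module:

```python
from fractions import Fraction

# Unfair die: faces 1-3 have probability 1/9, faces 4-6 have probability 2/9
p_unfair = {i: Fraction(1, 9) if i <= 3 else Fraction(2, 9) for i in range(1, 7)}
p_fair = {i: Fraction(1, 6) for i in range(1, 7)}  # fair companion die

# Sum over the six (i, 7-i) pairs that total seven
p_seven = sum(p_unfair[i] * p_fair[7 - i] for i in range(1, 7))
```

Because the fair die contributes a constant $1/6$ factor, the sum collapses to $1/6$ exactly.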
Let's try computing this using Pandas instead
of Python dictionaries. First, we construct
a DataFrame object with an index of tuples
consisting of all pairs of possible dice outcomes.
End of explanation
"""
d.d1=[i[0] for i in d.index]
d.d2=[i[1] for i in d.index]
"""
Explanation: Now, we can populate the columns that we set up above
where the outcome of the first die is the d1 column and
the outcome of the second die is d2,
End of explanation
"""
d.sm=map(sum,d.index)
"""
Explanation: Next, we compute the sum of the dice in the sm
column,
End of explanation
"""
d.head(5) # show first five lines
"""
Explanation: With that established, the DataFrame now looks like
the following:
End of explanation
"""
d.loc[d.d1<=3,'pd1']=1/9.
d.loc[d.d1 > 3,'pd1']=2/9.
d.pd2=1/6.
d.head(10)
"""
Explanation: Next, we fill out the probabilities for each face of the
unfair die (d1) and the fair die (d2),
End of explanation
"""
d.p = d.pd1 * d.pd2
d.head(5)
"""
Explanation: Finally, we can compute the joint probabilities
for the sum of the shown faces as the following:
End of explanation
"""
d.groupby('sm')['p'].sum()
"""
Explanation: With all that established, we can compute the
density of all the dice outcomes by using groupby as in the
following,
End of explanation
"""
>>> import numpy as np
>>> x,y = np.random.rand(2,1000) # uniform rv
>>> a,b,c = x,(y-x),1-y # 3 sides
>>> s = (a+b+c)/2
>>> np.mean((s>a) & (s>b) & (s>c) & (y>x)) # approx 1/8=0.125
"""
Explanation: These examples have shown how the theory of probability
breaks down sets and measurements of those sets and how these can be
combined to develop the probability mass functions for new random
variables.
Continuous Random Variables
The same ideas work with continuous variables but managing the sets
becomes trickier because the real line, unlike discrete sets, has many
limiting properties already built into it that have to be handled
carefully. Nonetheless, let's start with an example that should
illustrate the analogous ideas. Suppose a random variable $X$ is
uniformly distributed on the unit interval. What is the probability
that the variable takes on values less than 1/2?
To build on the intuition from the discrete case, let's go back to our
dice-throwing experiment with the fair dice. The sum of the values of the dice
is a measurable function,
$$
Y \colon \lbrace 1,2,\dots,6 \rbrace^2 \mapsto \lbrace 2,3,\ldots, 12 \rbrace
$$
That is, $Y$ is a mapping of the Cartesian product of sets to a
discrete set of outcomes. In order to compute probabilities of the set of
outcomes, we need to derive the probability measure for $Y$, $\mathbb{P}_Y$,
from the corresponding probability measures for each die. Our previous discussion
went through the mechanics of that. This means that
$$
\mathbb{P}_Y \colon \lbrace 2,3,\ldots,12 \rbrace \mapsto [0,1]
$$
Note there is a separation between the function definition and where the
target items of the function are measured in probability. More bluntly,
$$
Y \colon A \mapsto B
$$
with,
$$
\mathbb{P}_Y \colon B \mapsto [0,1]
$$
Thus, to compute $\mathbb{P}_Y$, which is derived
from other random variables, we have to express the equivalence classes
in $B$ in terms of their progenitor $A$ sets.
The situation for continuous variables follows the same pattern, but
with many more deep technicalities that we are going to skip. For the continuous
case, the random variable is now,
$$
X \colon \mathbb{R} \mapsto \mathbb{R}
$$
with corresponding probability measure,
$$
\mathbb{P}_X \colon \mathbb{R} \mapsto [0,1]
$$
But where are the corresponding sets here? Technically, these are the
Borel sets, but we can just think of them as intervals. Returning to our
question, what is the probability that a uniformly distributed random variable
on the unit interval takes values less than $1/2$? Rephrasing this question
according to the framework, we have the following:
$$
X \colon [0,1] \mapsto [0,1]
$$
with corresponding,
$$
\mathbb{P}_X \colon [0,1] \mapsto [0,1]
$$
To answer the question, by the definition of the uniform random
variable on the unit interval, we compute the following integral,
$$
\mathbb{P}_X([0,1/2]) = \mathbb{P}_X(0 < X < 1/2) = \int_0^{1/2} dx = 1/2
$$
where the above integral's $dx$ sweeps through intervals of the
$B$-type. The measure of any $dx$ interval (i.e., $A$-type set) is equal to
$dx$, by definition of the uniform random variable. To get all the moving parts
into one notationally rich integral, we can also write this as,
$$
\mathbb{P}_X(0 < X < 1/2) = \int_0^{1/2} \mathbb{P}_X(dx) = 1/2
$$
Now, let's consider a slightly more complicated and interesting example. As
before, suppose we have a uniform random variable, $X$ and let us introduce
another random variable defined,
$$
Y = 2 X
$$
Now, what is the probability that $0 < Y < \frac{1}{2}$?
To express this in our framework, we write,
$$
Y \colon [0,1] \mapsto [0,2]
$$
with corresponding,
$$
\mathbb{P}_Y \colon [0,2] \mapsto [0,1]
$$
To answer the question, we need to measure the set $[0,1/2]$, with
the probability measure for $Y$, $\mathbb{P}_Y([0,1/2])$. How can we do this?
Because $Y$ is derived from the $X$ random variable, as with the fair-dice
throwing experiment, we have to create a set of equivalences in the target
space (i.e., $B$-type sets) that reflect back on the input space (i.e.,
$A$-type sets). That is, what is the interval $[0,1/2]$ equivalent to in terms
of the $X$ random variable? Because, functionally, $Y=2 X$, then the $B$-type
interval $[0,1/2]$ corresponds to the $A$-type interval $[0,1/4]$. From the
probability measure of $X$, we compute this with the integral,
$$
\mathbb{P}_Y([0,1/2]) =\mathbb{P}_X([0,1/4])= \int_0^{1/4} dx = 1/4
$$
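The same sampling check works here (again a hedged sketch assuming NumPy): drawing $X$ and forming $Y=2X$ reproduces the measure of the reflected interval:

```python
import numpy as np

np.random.seed(0)
x = np.random.uniform(0, 1, 100000)   # X uniform on [0, 1]
y = 2 * x                             # the derived random variable Y = 2X
prob = np.mean((y > 0) & (y < 0.5))   # empirical P_Y((0, 1/2))
print(prob)                           # close to 1/4
```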
Now, let's up the ante and consider the following random variable,
$$
Y = X^2
$$
where now $X$ is still uniformly distributed, but now over the
interval $[-1/2,1/2]$. We can express this in our framework as,
$$
Y \colon [-1/2,1/2] \mapsto [0,1/4]
$$
with corresponding,
$$
\mathbb{P}_Y \colon [0,1/4] \mapsto [0,1]
$$
What is the $\mathbb{P}_Y(Y < 1/8)$? In other words, what is the
measure of the set $B_Y= [0,1/8]$? As before, because $X$ is derived from our
uniformly distributed random variable, we have to reflect the $B_Y$ set onto
sets of the $A$-type. The thing to recognize is that because $X^2$
is symmetric about zero, all $B_Y$ sets reflect back into two sets.
This means that for any set $B_Y$, we have the correspondence $B_Y = A_X^+ \cup
A_X^{-}$. So, we have,
$$
B_Y=\Big\lbrace 0<Y<\frac{1}{8}\Big\rbrace=\Big\lbrace 0<X<\frac{1}{\sqrt{8}} \Big\rbrace \bigcup \Big\lbrace -\frac{1}{\sqrt {8}}<X<0 \Big\rbrace
$$
From this perspective, because the two pre-image sets are disjoint, we have the
following solution,
$$
\mathbb{P}_Y(B_Y)=\mathbb{P}(A_X^+) + \mathbb{P}(A_X^{-})
$$
where,
$$
A_X^+ = \Big\lbrace 0< X<\frac{1}{\sqrt{8}} \Big\rbrace
$$
$$
A_X^{-} = \Big\lbrace -\frac{1}{\sqrt {8}} < X<0 \Big\rbrace
$$
Therefore,
$$
\mathbb{P}_Y(B_Y) = \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} = \frac{1}{\sqrt 2}
$$
because $\mathbb{P}(A_X^+) =\mathbb{P}(A_X^-) = 1/\sqrt 8$, given that $X$ has
unit density on $[-1/2,1/2]$. Let's
see if this comes out using the usual transformation of variables method from
calculus. Because both the positive and negative branches of $X$ map onto the
same $y$, the density is $f_Y(y) = \left(f_X(\sqrt y)+f_X(-\sqrt y)\right)/(2 \sqrt y) =
\frac{1}{\sqrt y} $. Then, we obtain,
$$
\int_0^{\frac{1}{8}} \frac{1}{\sqrt y} dy = \frac{2}{\sqrt 8} = \frac{1}{\sqrt 2}
$$
which is what we got using the sets method. Note that you would
favor the calculus method in practice, but it is important to
understand the deeper mechanics, because sometimes
the usual calculus method fails, as the next problem shows.
Transformation of Variables Beyond Calculus
Suppose $X$ and $Y$ are uniformly distributed in the unit interval and we
define $Z$ as
$$
Z = \frac{X}{Y-X}
$$
What is $f_Z(z)$? If you try this using the usual calculus
method, you will fail (try it!). The problem is that one
of the technical prerequisites for the calculus method does not hold.
The key observation is that $Z \notin [-1,0]$. If this were
possible, $X$ and $Y$ would have to have different signs, which cannot happen,
given that $X$ and $Y$ are uniformly distributed over $[0,1]$. Now, let's
consider when $Z>0$. In this case, $Y>X$ because $Z$ cannot be positive
otherwise. For the density function, we are interested in the set
$\lbrace 0 < Z < z \rbrace $. We want to compute
$$
\mathbb{P}(Z<z) = \int \int \lbrace B_1 \rbrace \, dX dY
$$
with,
$$
B_1 = \lbrace 0 < Z < z \rbrace
$$
Now, we have to translate that interval into an interval
relevant to $X$ and $Y$. For $0 < Z$, we have $ Y > X$. For $Z < z $,
we have $Y > X(1/z+1)$. Putting this together gives
$$
A_1 = \lbrace \max (X,X(1/z+1)) < Y < 1 \rbrace
$$
Integrating this over $Y$ as follows,
$$
\int_0^1\lbrace\max(X,X(1/z+1))<Y<1 \rbrace dY=\frac{z-X-Xz}{z}\mbox{ where } z > \frac{X}{1-X}
$$
and integrating this one more time over $X$ gives
$$
\int_0^{\frac{z}{1+z}} \frac{-X+z-Xz}{z} dX = \frac{z}{2(z+1)} \mbox{ where } z > 0
$$
Note that this is the computation for the probability
itself, not the probability density function. To get that, all we have
to do is differentiate the last expression to obtain
$$
f_Z(z) = \frac{1}{2(z+1)^2} \mbox{ where } z > 0
$$
Now we need to compute this density using the same process
for when $z < -1$. We want the interval $ Z < z $ for when $z < -1$.
For a fixed $z$, this is equivalent to $ X(1+1/z) < Y$. Because $z$
is negative, this also means that $Y < X$. Under these terms, we
have the following integral,
$$
\int_0^1 \lbrace X(1/z+1) <Y< X\rbrace dY = -\frac{X}{z} \mbox{ where } z < -1
$$
and integrating this one more time over $X$ gives the following
$$
-\frac{1}{2 z} \mbox{ where } z < -1
$$
To get the density for $z<-1$, we differentiate this with
respect to $z$ to obtain the following,
$$
f_Z(z) = \frac{1}{2 z^2} \mbox{ where } z < -1
$$
Putting this all together, we obtain,
$$
f_Z(z) =
\begin{cases}
\frac{1}{2(z+1)^2} & \mbox{if } z > 0 \\
\frac{1}{2 z^2} & \mbox{if } z < -1 \\
0 & \mbox{otherwise }
\end{cases}
$$
We will leave it as an exercise to show that this
integrates out to one.
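A Monte Carlo sketch (assuming NumPy; not part of the original text) can check the intermediate probabilities derived above, for example $\mathbb{P}(0<Z<1)=\frac{1}{4}$ from $z/(2(z+1))$ at $z=1$, and $\mathbb{P}(Z<-2)=\frac{1}{4}$ from $-1/(2z)$ at $z=-2$:

```python
import numpy as np

np.random.seed(0)
x = np.random.uniform(0, 1, 200000)
y = np.random.uniform(0, 1, 200000)
z = x / (y - x)

print(np.mean((z > -1) & (z < 0)))  # essentially no mass in (-1, 0), as argued above
print(np.mean((z > 0) & (z < 1)))   # close to 1/4, matching z/(2(z+1)) at z=1
print(np.mean(z < -2))              # close to 1/4, matching -1/(2z) at z=-2
```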
Independent Random Variables
Independence is a standard assumption. Mathematically, the
necessary and sufficient condition for independence between two
random variables $X$ and $Y$ is the following:
$$
\mathbb{P}(X,Y) = \mathbb{P}(X)\mathbb{P}(Y)
$$
Two random variables $X$ and $Y$
are uncorrelated if,
$$
\mathbb{E}\left[(X-\overline{X})(Y-\overline{Y})\right]=0
$$
where $\overline{X}=\mathbb{E}(X)$. Note that uncorrelated random
variables are sometimes called orthogonal random variables. Uncorrelatedness
is a weaker property than independence, however. For example, consider the
discrete random variables $X$ and $Y$ defined on the sample space
$\lbrace 1,2,3 \rbrace$, with each outcome $\omega$ equally likely, where
$$
X =
\begin{cases}
1 & \mbox{if } \omega =1 \\
0 & \mbox{if } \omega =2 \\
-1 & \mbox{if } \omega =3
\end{cases}
$$
and also,
$$
Y =
\begin{cases}
0 & \mbox{if } \omega =1 \\
1 & \mbox{if } \omega =2 \\
0 & \mbox{if } \omega =3
\end{cases}
$$
Thus, $\mathbb{E}(X)=0$ and $\mathbb{E}(X Y)=0$, so
$X$ and $Y$ are uncorrelated. However, we have
$$
\mathbb{P}(X=1,Y=1)=0\neq \mathbb{P}(X=1)\mathbb{P}(Y=1)=\frac{1}{9}
$$
So, these two random variables are not independent.
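These calculations can be verified by direct enumeration over the three equally likely outcomes (a small sketch, not from the original text):

```python
omega = [1, 2, 3]                    # each outcome has probability 1/3
X = {1: 1, 2: 0, 3: -1}
Y = {1: 0, 2: 1, 3: 0}

EX  = sum(X[w] for w in omega) / 3.0
EXY = sum(X[w] * Y[w] for w in omega) / 3.0
print(EX, EXY)                       # both zero: X and Y are uncorrelated

p_joint = sum(1.0 for w in omega if X[w] == 1 and Y[w] == 1) / 3.0
p_x1 = sum(1.0 for w in omega if X[w] == 1) / 3.0
p_y1 = sum(1.0 for w in omega if Y[w] == 1) / 3.0
print(p_joint, p_x1 * p_y1)          # 0.0 versus 1/9: X and Y are not independent
```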
Thus, uncorrelatedness does not imply independence, generally, but
there is the important case of Gaussian random variables for which
it does. To see this, consider the probability density function
for two zero-mean, unit-variance Gaussian random variables $X$ and
$Y$,
$$
f_{X,Y}(x,y) = \frac{e^{\frac{x^2-2 \rho x
y+y^2}{2 \left(\rho^2-1\right)}}}{2 \pi
\sqrt{1-\rho^2}}
$$
where $\rho:=\mathbb{E}(X Y)$ is the correlation coefficient. In
the uncorrelated case where $\rho=0$, the probability density function factors
into the following,
$$
f_{X,Y}(x,y)=\frac{e^{-\frac{1}{2}\left(x^2+y^2\right)}}{2\pi}=\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi}} =f_X(x)f_Y(y)
$$
which means that $X$ and $Y$ are independent.
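We can confirm the factorization numerically at a test point (a sketch assuming NumPy; the function names here are ours, not from the text):

```python
import numpy as np

def f_xy(x, y, rho):
    # joint density of zero-mean, unit-variance Gaussians with correlation rho
    num = np.exp((x**2 - 2*rho*x*y + y**2) / (2*(rho**2 - 1)))
    return num / (2*np.pi*np.sqrt(1 - rho**2))

def f(x):
    # standard normal density
    return np.exp(-x**2/2) / np.sqrt(2*np.pi)

x, y = 0.3, -1.2
print(f_xy(x, y, 0.0) - f(x)*f(y))  # ~0: the density factors when rho = 0
print(f_xy(x, y, 0.7) - f(x)*f(y))  # nonzero: no factorization when rho != 0
```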
Independence and conditional independence are closely related, as in the following:
$$
\mathbb{P}(X,Y\vert Z) =\mathbb{P}(X\vert Z) \mathbb{P}(Y\vert Z)
$$
which says that $X$ and $Y$ are independent conditioned
on $Z$. Conditioning independent random variables can break
their independence. For example, consider two independent
Bernoulli-distributed random variables, $X_1, X_2\in\lbrace 0,1
\rbrace$. We define $Z=X_1+X_2$. Note that $Z\in \lbrace
0,1,2 \rbrace$. In the case where $Z=1$, we have,
$$
\mathbb{P}(X_1=1\vert Z=1) >0
$$
$$
\mathbb{P}(X_2=1\vert Z=1) >0
$$
Even though $X_1,X_2$ are independent,
after conditioning on $Z$, we have the following,
$$
\mathbb{P}(X_1=1,X_2=1\vert Z=1)=0\neq \mathbb{P}(X_1=1\vert Z=1)\mathbb{P}(X_2=1\vert Z=1)
$$
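A quick simulation illustrates this (a NumPy sketch; not from the original text):

```python
import numpy as np

np.random.seed(0)
x1 = np.random.randint(0, 2, 500000)  # independent fair Bernoulli draws
x2 = np.random.randint(0, 2, 500000)
z = x1 + x2

sel = (z == 1)                        # condition on Z = 1
p1 = np.mean(x1[sel] == 1)            # P(X1=1 | Z=1), close to 1/2
p2 = np.mean(x2[sel] == 1)            # P(X2=1 | Z=1), close to 1/2
p12 = np.mean((x1[sel] == 1) & (x2[sel] == 1))
print(p1, p2, p12)                    # p12 is exactly zero, not p1*p2
```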
Thus, conditioning on $Z$ breaks the independence of
$X_1,X_2$. This also works in the opposite direction ---
conditioning can make dependent random variables independent.
Define $Z_n=\sum_{i=1}^n X_i$ with $X_i$ independent, integer-valued
random variables. The $Z_n$ variables are
dependent because they stack the same telescoping set of
$X_i$ variables. Consider the following,
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\mathbb{P}(Z_1=i,Z_3=j\vert Z_2=k) = \frac{\mathbb{P}(Z_1=i,Z_2=k,Z_3=j)}{\mathbb{P}(Z_2 =k)}
\label{_auto1} \tag{1}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="eq:condIndep"></div>
$$
\begin{equation}
=\frac{\mathbb{P}(X_1 =i)\mathbb{P}(X_2 =k-i)\mathbb{P}(X_3 =j-k) }{\mathbb{P}(Z_2 =k)}
\label{eq:condIndep} \tag{2}
\end{equation}
$$
where the factorization comes from the independence of
the $X_i$ variables. Using the definition of conditional
probability,
$$
\mathbb{P}(Z_1=i\vert Z_2)=\frac{\mathbb{P}(Z_1=i,Z_2=k)}{\mathbb{P}(Z_2=k)}
$$
We can continue to expand Equation (2),
$$
\mathbb{P}(Z_1=i,Z_3=j\vert Z_2=k) =\mathbb{P}(Z_1 =i\vert Z_2) \frac{\mathbb{P}( X_3 =j-k)\mathbb{P}( Z_2 =k)}{\mathbb{P}( Z_2 =k)}
$$
$$
\
=\mathbb{P}(Z_1 =i\vert Z_2)\mathbb{P}(Z_3 =j\vert Z_2)
$$
where $\mathbb{P}(X_3=j-k)\mathbb{P}(Z_2=k)=
\mathbb{P}(Z_3=j,Z_2=k)$. Thus, we see that dependence between
random variables can be broken by conditioning to create
conditionally independent random variables. As we have just
witnessed, understanding how conditioning influences independence
is important and is the main topic of
study in Probabilistic Graphical Models, a field
with many algorithms and concepts to extract these
notions of conditional independence from graph-based
representations of random variables.
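The telescoping-sum example can be checked by exhaustive enumeration. Here we take the $X_i$ to be fair Bernoulli variables, one choice of the integer-valued $X_i$ above (this check is a sketch, not part of the original text):

```python
from itertools import product

# All 8 equally likely outcomes of (X1, X2, X3); record (Z1, Z2, Z3)
outcomes = [(x1, x1 + x2, x1 + x2 + x3)
            for x1, x2, x3 in product([0, 1], repeat=3)]

def P(pred):
    return sum(1.0 for z in outcomes if pred(z)) / len(outcomes)

k = 1                                  # condition on Z2 = 1
pz2 = P(lambda z: z[1] == k)
checks = []
for i in [0, 1]:
    for j in [1, 2]:
        lhs = P(lambda z: z == (i, k, j)) / pz2
        rhs = (P(lambda z: z[0] == i and z[1] == k) / pz2) * \
              (P(lambda z: z[2] == j and z[1] == k) / pz2)
        checks.append((lhs, rhs))
        print(i, j, lhs, rhs)          # lhs equals rhs in every case
```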
Classic Broken Rod Example
Let's do one last example to exercise fluency in our methods by
considering the following classic problem: given a rod of unit-length,
broken independently and randomly at two places, what is the
probability that you can assemble the three remaining pieces into a
triangle? The first task is to find a representation of a triangle as
an easy-to-apply constraint. What we want is something like the
following:
$$
\mathbb{P}(\mbox{ triangle exists }) = \int_0^1 \int_0^1 \lbrace \mbox{ triangle exists } \rbrace dX dY
$$
where $X$ and $Y$ are independent and uniformly distributed
in the unit-interval. Heron's formula for the area of the triangle,
$$
\mbox{ area } = \sqrt{(s-a)(s-b)(s-c)s}
$$
where $s = (a+b+c)/2$ is what we need. The idea is that this
yields a valid area only when each of the terms under the square root is
greater than or equal to zero. Thus, suppose that we have
$$
\begin{eqnarray}
a & = & X \\
b & = & Y-X \\
c & = & 1-Y
\end{eqnarray}
$$
assuming that $Y>X$. Thus, the criterion for a valid triangle boils down
to
$$
\lbrace (s > a) \wedge (s > b) \wedge (s > c) \wedge (X<Y) \rbrace
$$
After a bit of manipulation, this consolidates into:
$$
\Big\lbrace \frac{1}{2} < Y < 1 \bigwedge \frac{1}{2}(2 Y-1) < X < \frac{1}{2} \Big\rbrace
$$
which we integrate out by $dX$ first to obtain
$$
\mathbb{P}(\mbox{ triangle exists }) = \int_{0}^1 \int_{0}^1 \Big\lbrace \frac{1}{2} < Y < 1 \bigwedge \frac{1}{2}(2 Y-1) < X < \frac{1}{2} \Big\rbrace dX dY
$$
$$
\mathbb{P}(\mbox{ triangle exists }) = \int_{\frac{1}{2}}^1 (1-Y) dY
$$
and then by $dY$ to obtain finally,
$$
\mathbb{P}(\mbox{ triangle exists }) = \frac{1}{8}
$$
when $Y>X$. By symmetry, we get the same result for $X>Y$. Thus, the
final result is the following:
$$
\mathbb{P}(\mbox{ triangle exists }) = \frac{1}{8}+\frac{1}{8} = \frac{1}{4}
$$
We can quickly check this result with Python for the case $Y>X$ using
the following code:
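For instance, a Monte Carlo sketch along these lines (assuming NumPy) is:

```python
import numpy as np

np.random.seed(0)
x = np.random.uniform(0, 1, 500000)
y = np.random.uniform(0, 1, 500000)

sel = y > x                          # the Y > X ordering considered above
a, b, c = x[sel], (y - x)[sel], (1 - y)[sel]

s = (a + b + c) / 2                  # semi-perimeter; each factor under the
tri = (s > a) & (s > b) & (s > c)    # square root in Heron's formula must be positive

p = np.mean(tri) * np.mean(sel)      # P(triangle and Y > X)
print(p)                             # close to 1/8
```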
End of explanation
"""
|
steven-murray/halomod | devel/halo_exclusion_testing.ipynb | mit | %pylab inline
"""
Explanation: Interactive Tests of Python-Implemented Halo Exclusion
End of explanation
"""
m = np.logspace(10,18,400)
density = m**-2
I = np.outer(np.ones(10),m**-4)
bias = m**2
deltah = 200.0
rhob = 10.**11
r = np.logspace(-1,2,40)
"""
Explanation: First we set up the "test" as it were, hoping to do each case analytically if possible. We will use
$$ n_g(m) = m^{-2}, $$
$$ I(m) = m^{-4}, $$
$$ b(m) = m^2 $$
$$ \Delta_h = 200, $$
$$ \rho_{mean} = 10^{11} $$
End of explanation
"""
# Imports
import numpy as np
from hmf._framework import Model
from cached_property import cached_property
from scipy import integrate as intg
from numba import jit
"""
Explanation: All the code here should be copy-pasted from the latest halo_exclusion module for quick updating. Throughout, we use a postfix underscore to indicate a jit-compiled function.
End of explanation
"""
@jit
def outer(a,b):
return np.outer(a,b).reshape(a.shape+b.shape)
def dblsimps(X,dx,dy=None):
"""
Double-integral over the last two dimensions of X.
"""
if dy is None:
dy = dx
if X.shape[-2]%2==0:
X = X[...,:-1,:]
if X.shape[-1]%2 == 0:
X = X[...,:-1]
(nx,ny) = X.shape[-2:]
W = makeW(nx,ny)
return dx * dy * np.sum(W * X,axis=(-2,-1)) / 9.0
def makeW(nx,ny):
W = np.ones((nx,ny))
W[1:nx-1:2, :] *= 4
W[:, 1:ny-1:2] *= 4
W[2:nx-1:2, :] *= 2
W[:, 2:ny-1:2] *= 2
return W
@jit(nopython=True)
def dblsimps_(X,dx,dy):
"""
Double-integral of X.
"""
nx = X.shape[0]
ny = X.shape[1]
# Must be odd number
if nx%2==0:
nx -= 1
if ny%2==0:
ny -= 1
W = makeW_(nx,ny) #only upper
tot=0.0
for ix in range(nx):
tot += W[ix,ix]*X[ix,ix]
for iy in range(ix+1,ny):
tot += 2*W[ix,iy] * X[ix,iy]
return dx * dy * tot / 9.0
@jit(nopython=True)
def makeW_(nx,ny):
W = np.ones((nx,ny))
for ix in range(1,nx-1,2):
for iy in range(ny):
W[ix,iy] *= 4
W[iy,ix] *= 4
for ix in range(2,nx-1,2):
for iy in range(ny):
W[ix,iy] *= 2
W[iy,ix] *= 2
return W
"""
Explanation: Tools
Definitions
End of explanation
"""
# Test simple integration, output should be [0,0.25,4.0,20.25]
a = np.zeros((4,101,101))
for i in range(4):
a[i,:,:] = np.outer(np.linspace(0,i,101),np.linspace(0,i,101))
dblsimps(a,arange(4)/100.,arange(4)/100.)
for i in range(4):
a = np.outer(np.linspace(0,i,101),np.linspace(0,i,101))
print dblsimps_(a,i/100.,i/100.)
"""
Explanation: Accuracy Tests
End of explanation
"""
%timeit makeW(501,501)
%timeit makeW_(501,501)
a = np.outer(np.linspace(0,1,501),np.linspace(0,1,501))
%timeit dblsimps(a,0.002,0.002)
%timeit dblsimps_(a,0.002,0.002)
"""
Explanation: Timing
End of explanation
"""
class Exclusion(Model):
"""
Base class for exclusion models.
All models will need to perform single or double integrals over
arrays that may have an extra two dimensions. The maximum possible
size is k*r*m*m, which for normal values of the vectors equates to
~ 1000*50*500*500 = 12,500,000,000 values, which in 64-bit reals is
1e11 bytes = 100GB. We thus limit this to a maximum of either k*r*m
or r*m*m, both of which should be less than a GB of memory.
It is possibly better to limit it to k*r or m*m, which should be quite
memory efficient, but then without accelerators (ie. Numba), these
will be very slow.
"""
def __init__(self,m,density,I,bias,r,delta_halo,mean_density):
self.density = density # 1d, (m)
self.m = m # 1d, (m)
self.I = I # 2d, (k,m)
self.bias = bias # 1d (m) or 2d (r,m)
self.r = r # 1d (r)
self.mean_density = mean_density
self.delta_halo=delta_halo
self.dlnx = np.log(m[1]/m[0])
def raw_integrand(self):
"""
Returns either a 2d (k,m) or 3d (r,k,m) array with the general integrand.
"""
if len(self.bias.shape)==1:
return self.I * self.bias * self.m # *m since integrating in logspace
else:
return np.einsum("ij,kj->kij",self.I*self.m,self.bias)
def integrate(self):
"""
This should pass back whatever is multiplied by P_m(k) to get the two-halo
term. Often this will be a square of an integral, sometimes a Double-integral.
"""
pass
"""
Explanation: Base Class
End of explanation
"""
class NoExclusion(Exclusion):
def integrate(self):
return intg.simps(self.raw_integrand(),dx=self.dlnx)**2
"""
Explanation: No exclusion
Definition
End of explanation
"""
cls = NoExclusion(m,density,I,bias,None,deltah,rhob)
cls.integrate()[0]
"""
Explanation: Analytic Case
The integral in this case should simply be
$$ P(k) = P_m(k) \left[\int_{10^{10}}^{10^{18}} m^{-4} m^2 dm \right]^2 \approx 10^{-20} $$
Numerical Test
End of explanation
"""
%timeit cls.integrate()
"""
Explanation: Timing
End of explanation
"""
class Sphere(Exclusion):
def raw_integrand(self):
if len(self.bias.shape)==1:
return outer(np.ones_like(self.r),self.I * self.bias * self.m) # *m since integrating in logspace
else:
return np.einsum("ij,kj->kij",self.I*self.m,self.bias)
@cached_property
def density_mod(self):
"""
Return the modified density, under new limits
"""
density = np.outer(np.ones_like(self.r),self.density*self.m)
density[self.mask] = 0
if hasattr(self.m,"unit"):
return intg.simps(density,dx=self.dlnx)*self.m.unit*self.density.unit
else:
return intg.simps(density,dx=self.dlnx)
@cached_property
def mask(self):
"Elements that should be set to 0"
return (np.outer(self.m,np.ones_like(self.r)) > self.mlim().value).T
def mlim(self):
return 4*np.pi*(self.r/2)**3 * self.mean_density * self.delta_halo/3
def integrate(self):
integ = self.raw_integrand() #r,k,m
integ.transpose((1,0,2))[:,self.mask] = 0
return intg.simps(integ,dx=self.dlnx)**2
"""
Explanation: Sphere
Again, we should not need to jit-compile anything in this case, since we can do everything with broadcasting in Numpy without too much memory overhead.
End of explanation
"""
cls = Sphere(m,density,I,bias,r,deltah,rhob)
mlim = 4*pi*(r/2)**3 *deltah *rhob/3
analytic = (1e-10 - 1./mlim)**2
fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={"xscale":"log"},gridspec_kw={"height_ratios":(2.5,1)})
ax[0].plot(r,cls.integrate()[:,0])
ax[0].plot(r,analytic)
ax[1].plot(r,cls.integrate()[:,0]/analytic)
"""
Explanation: Analytic Case
In this case, the limit of the integral changes, and we should have
$$ P(k) = P_m(k) \left[\int_{10^{10}}^{m_{lim}} m^{-4} m^2 dm \right]^2 = \left(10^{-10} - 1/m_{lim}\right)^2, $$
with
$$ m_{lim} = \frac{4\pi \Delta_h \bar{\rho}}{3} (r/2)^3. $$
At about $r = 0.1$, we have $m_{lim} \approx 10^{10}$, and so the result should drop to zero.
Numerical Test
End of explanation
"""
%timeit cls.integrate()
"""
Explanation: Timing
End of explanation
"""
class DblSphere(Sphere):
@property
def rvir(self):
return (3*self.m/(4*np.pi*self.delta_halo*self.mean_density))**(1./3.)
@cached_property
def mask(self):
"Elements that should be set to 0 (r,m,m)"
rvir = self.rvir
return (outer(np.add.outer(rvir,rvir),np.ones_like(self.r)) > self.r).T
def density_mod(self):
out = np.zeros_like(self.r)
for i,r in enumerate(self.r):
integrand = np.outer(self.density*self.m,np.ones_like(self.density))
integrand[self.mask[i]] = 0
out[i] = dblsimps(integrand,self.dlnx)
if hasattr(self.m,"unit"):
return out*self.m.unit*self.density.unit
else:
return out
def integrate(self):
integ = self.raw_integrand() #(r,k,m)
return integrate_dblsphere(integ,self.mask,self.dlnx)
def integrate_dblsphere(integ,mask,dx):
out = np.zeros_like(integ[:,:,0])
integrand = np.zeros_like(mask)
for ik in range(integ.shape[1]):
for ir in range(mask.shape[0]):
integrand[ir] = np.outer(integ[ir,ik,:],integ[ir,ik,:])
integrand[mask] = 0
out[:,ik] = dblsimps(integrand,dx)
return out
##### ACCELERATED METHODS
@jit(nopython=True)
def integrate_dblsphere_(integ,mask,dx):
nr = integ.shape[0]
nk = integ.shape[1]
nm = mask.shape[1]
out = np.zeros((nr,nk))
integrand = np.zeros((nm,nm))
for ir in range(nr):
for ik in range(nk):
for im in range(nm):
for jm in range(im,nm):
if mask[ir,im,jm]:
integrand[im,jm] = 0
else:
integrand[im,jm] = integ[ir,ik,im]*integ[ir,ik,jm]
# if ir==0 and ik==0:
# print np.sum(integrand)
out[ir,ik] = dblsimps_(integrand,dx,dx)
return out
class DblSphere_(DblSphere):
def integrate(self):
integ = self.raw_integrand() #(r,k,m)
return integrate_dblsphere_(integ,self.mask,self.dlnx)
"""
Explanation: DblSphere
In this case, we'll need to use some acceleration, since otherwise at one point we need an (r,k,m,m) matrix which is too big.
End of explanation
"""
anl = np.zeros_like(r)
for ir, rr in enumerate(r):
mlim = 4*np.pi*rhob*deltah*rr**3/3
anl[ir] = 10**-10*(10**-10 - 1/mlim) - intg.simps((m[m<mlim]**(2./3.)*(4*np.pi*deltah*rhob/3)**(1./3.) * rr - m[m<mlim])**-3,m[m<mlim])
"""
Explanation: Analytic Case
The precise analytic case is difficult here, owing to the double-integral etc. Basically, we get:
$$ P(k,r)/P_m(k) = 10^{-10}\left(10^{-10} - 1/m_{lim}\right) - \int_{10^{10}}^{m_{lim}} \left[m^{2/3} \left(\frac{4\pi \Delta_h \bar{\rho}}{3}\right)^{1/3} r - m\right]^{-3} dm $$
The first term is similar to the single-integral sphere case, but here
$$ m_{lim} = \frac{4\pi \Delta_h \bar{\rho}}{3}r^3 $$
and we have the second term as well. The second term becomes undefined when $m = m_{lim}$.
This has an exact solution, but for these purposes, it's probably just easier to do it numerically:
End of explanation
"""
anl
cls = DblSphere(m,density,I,bias,r,deltah,rhob)
cls_ = DblSphere_(m,density,I,bias,r,deltah,rhob)
py = cls.integrate()[:,0]
nba = cls_.integrate()[:,0]
fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={"xscale":"log"},gridspec_kw={"height_ratios":(2.5,1)})
ax[0].plot(r,py,label="python")
ax[0].plot(r,nba,label="numba")
#ax[0].plot(r,anl,label="analytic")
ax[0].legend(loc=0)
ax[1].plot(r,py/anl)
ax[1].plot(r,nba/anl)
"""
Explanation: Numerical Test
End of explanation
"""
%timeit cls.integrate()
%timeit cls_.integrate()
"""
Explanation: Timing
End of explanation
"""
class DblEllipsoid(DblSphere):
@cached_property
def mask(self):
"Unnecessary for this approach"
return None
@cached_property
def prob(self):
rvir = self.rvir
x = outer(self.r,1/np.add.outer(rvir,rvir))
x = (x-0.8)/0.29 #this is y but we re-use the memory
np.clip(x,0,1,x)
return 3*x**2 - 2*x**3
@cached_property
def density_mod(self):
integrand = self.prob * outer(np.ones_like(self.r),np.outer(self.density*self.m,self.density*self.m))
a = np.sqrt(dblsimps(integrand,self.dlnx))
if hasattr(self.density,"unit"):
return a*self.density.unit*self.m.unit
else:
return a
def integrate(self):
integ = self.raw_integrand() #(r,k,m)
out = np.zeros_like(integ[:,:,0])
integrand = np.zeros_like(self.prob)
for ik in range(integ.shape[1]):
for ir in range(len(self.r)):
integrand[ir] = self.prob[ir]*np.outer(integ[ir,ik,:],integ[ir,ik,:])
out[:,ik] = dblsimps(integrand,self.dlnx)
return out
class DblEllipsoid_(DblEllipsoid):
@cached_property
def density_mod(self):
if hasattr(self.density,"unit"):
return density_mod_(self.r,self.rvir,np.outer(self.density*self.m,self.density*self.m),self.dlnx)*self.density.unit*self.m.unit
else:
return density_mod_(self.r,self.rvir,np.outer(self.density*self.m,self.density*self.m),self.dlnx)
@cached_property
def prob(self):
return prob_inner_(self.r,self.rvir)
def integrate(self):
return integrate_dblell(self.raw_integrand(),self.prob,self.dlnx)
@jit(nopython=True)
def integrate_dblell(integ,prob,dx):
nr = integ.shape[0]
nk = integ.shape[1]
nm = prob.shape[1]
out = np.zeros((nr,nk))
integrand = np.zeros((nm,nm))
for ir in range(nr):
for ik in range(nk):
for im in range(nm):
for jm in range(im,nm):
integrand[im,jm] = integ[ir,ik,im]*integ[ir,ik,jm]*prob[ir,im,jm]
out[ir,ik] = dblsimps_(integrand,dx,dx)
return out
@jit(nopython=True)
def density_mod_(r,rvir,densitymat,dx):
d = np.zeros(len(r))
for ir,rr in enumerate(r):
integrand = prob_inner_r_(rr,rvir)*densitymat
d[ir] = dblsimps_(integrand,dx,dx)
return np.sqrt(d)
@jit(nopython=True)
def prob_inner_(r,rvir):
"""
Jit-compiled version of calculating prob, taking advantage of symmetry.
"""
nrv = len(rvir)
out = np.empty((len(r),nrv,nrv))
for ir,rr in enumerate(r):
for irv, rv1 in enumerate(rvir):
for jrv in range(irv,nrv):
rv2 = rvir[jrv]
x = (rr/(rv1+rv2) - 0.8)/0.29
if x<=0:
out[ir,irv,jrv] = 0
elif x>=1:
out[ir,irv,jrv] = 1
else:
out[ir,irv,jrv] = 3*x**2 - 2*x**3
return out
@jit(nopython=True)
def prob_inner_r_(r,rvir):
nrv = len(rvir)
out = np.empty((nrv,nrv))
for irv, rv1 in enumerate(rvir):
for jrv in range(irv,nrv):
rv2 = rvir[jrv]
x = (r/(rv1+rv2) - 0.8)/0.29
if x<=0:
out[irv,jrv] = 0
elif x>=1:
out[irv,jrv] = 1
else:
out[irv,jrv] = 3*x**2 - 2*x**3
return out
"""
Explanation: Ellipsoid
This case is similar to DblSphere, except that we integrate to infinity, with a well-behaved probability distribution tailing to zero.
End of explanation
"""
def integrand_log(m1,m2,r):
rvir1 = (3*10**m1/(4*np.pi*deltah*rhob))**(1./3.)
rvir2 = (3*10**m2/(4*np.pi*deltah*rhob))**(1./3.)
x = r/(rvir1+rvir2)
y = (x-0.8)/0.29
if y<=0:
p = 0.0
elif y>=1:
p = 1.0
else:
p = 3*y**2-2*y**3
#print rvir1, rvir2, x, y, p
return p/(10**(m1) * 10**(m2))
from scipy.integrate import dblquad
anl_dblell = np.array([np.log(10)**2*dblquad(integrand_log,10,18,lambda x: 10, lambda x: 18,args=(rr,))[0] for rr in r])
"""
Explanation: Analytic Case
This is unsolvable analytically because of $P(x)$. Indeed, the closest we can really get is
$$ P(k,r)/P_m(k) = \int \int \frac{P(m_1,m_2,r)}{m_1^2 m_2^2} dm_1 dm_2 $$
To have some measure of comparison, let's write a function to integrate with dblquad:
End of explanation
"""
cls = DblEllipsoid(m,density,I,bias,r,deltah,rhob)
cls_ = DblEllipsoid_(m,density,I,bias,r,deltah,rhob)
py = cls.integrate()[:,0]
nba = cls_.integrate()[:,0]
fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={"xscale":"log"},gridspec_kw={"height_ratios":(2.5,1)})
ax[0].plot(r,py,label="python")
ax[0].plot(r,nba,label="numba")
ax[0].plot(r,anl_dblell,label="analytic")
ax[0].legend(loc=0)
ax[1].plot(r,py/anl_dblell)
ax[1].plot(r,nba/anl_dblell)
"""
Explanation: Numerical Test
End of explanation
"""
%timeit cls.integrate()
%timeit cls_.integrate()
"""
Explanation: Timing
End of explanation
"""
class NgMatched(DblEllipsoid):
@cached_property
def mask(self):
integrand = self.density*self.m
cumint = intg.cumtrapz(integrand,dx=self.dlnx,initial=0) #len m
cumint = np.outer(np.ones_like(self.r),cumint) # r,m
return np.where(cumint>np.outer(self.density_mod,np.ones_like(self.m)),
np.ones_like(cumint,dtype=bool),np.zeros_like(cumint,dtype=bool))
def integrate(self):
integ = self.raw_integrand() #r,k,m
integ.transpose((1,0,2))[:,self.mask] = 0
return intg.simps(integ,dx=self.dlnx)**2
class NgMatched_(DblEllipsoid_):
@cached_property
def mask(self):
integrand = self.density*self.m
cumint = intg.cumtrapz(integrand,dx=self.dlnx,initial=0) #len m
cumint = np.outer(np.ones_like(self.r),cumint) # r,m
return np.where(cumint>np.outer(self.density_mod,np.ones_like(self.m)),
np.ones_like(cumint,dtype=bool),np.zeros_like(cumint,dtype=bool))
def integrate(self):
integ = self.raw_integrand() #r,k,m
integ.transpose((1,0,2))[:,self.mask] = 0
return intg.simps(integ,dx=self.dlnx)**2
"""
Explanation: Ng-Matched
End of explanation
"""
cls = NgMatched(m,density,I,bias,r,deltah,rhob)
cls_ = NgMatched_(m,density,I,bias,r,deltah,rhob)
py = cls.integrate()[:,0]
nba = cls_.integrate()[:,0]
fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={"xscale":"log"},gridspec_kw={"height_ratios":(2.5,1)})
ax[0].plot(r,py,label="python")
ax[0].plot(r,nba,label="numba")
ax[0].plot(r,anl_dblell,label="analytic")
ax[0].legend(loc=0)
ax[1].plot(r,py/anl_dblell)
ax[1].plot(r,nba/anl_dblell)
"""
Explanation: Analytic Case
In essence, this is the same as the single-integral sphere case, so that the result is
$$ P(k,r)/P_m(k) = \left[10^{-10} - 1/m_{lim}\right]^2, $$
but in this case $m_{lim}$ is defined slightly differently:
$$ \int_{10^{10}} ^ {m_{lim}} m^{-2} dm = \sqrt{\int \int \frac{P(x)}{m_1^2 m_2^2} dm_1 dm_2}. $$
We can simplify to
$$ m_{lim} = \frac{1}{10^{-10} - \sqrt{b_{eff}^{\rm dbl}}}, $$
where $b_{eff}^{\rm dbl}$ is merely the result from the DblEllipsoid class. Popping this back into the solution cancels everything so we're left with
$$ P(k,r)/P_m(k) \equiv b_{eff}^{\rm ngm}= b_{eff}^{\rm dbl}. $$
Note that this is the case only because the density integrand and full integrand are exactly the same.
Numerical Test
End of explanation
"""
%timeit cls.integrate()
%timeit cls_.integrate()
"""
Explanation: Timing
End of explanation
"""
fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={"xscale":"log"},gridspec_kw={"height_ratios":(2,1)},figsize=(8,8))
for cls in (Sphere,DblSphere_,DblEllipsoid_,NgMatched):
vec = cls(m,density,I,bias,r,deltah,rhob).integrate()[:,0]
ax[0].plot(r,vec,label=cls.__name__.replace("_",""))
ax[1].plot(r,vec/anl_dblell)
ax[0].legend(loc=0)
"""
Explanation: Comparison
Visual
End of explanation
"""
m = np.logspace(10,18,650)
density = m**-2
I = np.outer(np.ones(550),m**-4)
bias = m**2
deltah = 200.0
rhob = 10.**11
r = np.logspace(-1,2,50)
# For reference, ng_matched fortran does it in ~300ms
%timeit NoExclusion(m,density,I,bias,r,deltah,rhob).integrate()
%timeit Sphere(m,density,I,bias,r,deltah,rhob).integrate()
#%timeit DblSphere(m,density,I,bias,r,deltah,rhob).integrate()
#%timeit DblSphere_(m,density,I,bias,r,deltah,rhob).integrate()
#%timeit DblEllipsoid(m,density,I,bias,r,deltah,rhob).integrate()
#%timeit DblEllipsoid_(m,density,I,bias,r,deltah,rhob).integrate()
%timeit NgMatched(m,density,I,bias,r,deltah,rhob).integrate()
%timeit NgMatched_(m,density,I,bias,r,deltah,rhob).integrate()
"""
Explanation: Timing
For this bit, we increase the number of elements in the arrays to realistic sizes:
End of explanation
"""
from scipy.interpolate import InterpolatedUnivariateSpline as spline
def power_to_corr(r,power,lnk,N=640,h=0.005):
"""
Use Ogata's method for Hankel Transforms in 3D for nu=0 (nu=1/2 for 2D)
to convert a given power spectrum to a correlation function.
Note, in its current form, lnk must be evenly-spaced.
"""
spl = spline(lnk,power)
roots=np.arange(1,N+1)
t = h*roots
s = np.pi*np.sinh(t)
x = np.pi * roots * np.tanh(s/2)
dpsi = 1+np.cosh(s)
dpsi[dpsi!=0] = (np.pi*t*np.cosh(t)+np.sinh(s))/dpsi[dpsi!=0]
sumparts = np.pi*np.sin(x)*dpsi*x
allparts = sumparts * spl(np.log(np.divide.outer(x,r))).T
return np.sum(allparts,axis=-1)/(2*np.pi**2*r**3)
from halomod.tools import power_to_corr_ogata
from hmf import Transfer
t = Transfer()
xir = power_to_corr(r,t.power,np.log(t.k.value))
xir_fort = power_to_corr_ogata(t.power,t.k,r)
print xir/xir_fort-1
def power_to_corr_matrix(r,power,lnk,N=640,h=0.005):
"""
Use Ogata's method for Hankel Transforms in 3D for nu=0 (nu=1/2 for 2D)
to convert a given power spectrum to a correlation function.
In this case, `power` is a (k,r) matrix
"""
roots=np.arange(1,N+1)
t = h*roots
s = np.pi*np.sinh(t)
x = np.pi * roots * np.tanh(s/2)
dpsi = 1+np.cosh(s)
dpsi[dpsi!=0] = (np.pi*t*np.cosh(t)+np.sinh(s))/dpsi[dpsi!=0]
sumparts = np.pi*np.sin(x)*dpsi*x
out = np.zeros_like(r)
for ir,rr in enumerate(r):
spl = spline(lnk,power[:,ir])
allparts = sumparts * spl(np.log(x/rr))
out[ir] = np.sum(allparts)/(2*np.pi**2*rr**3)
return out
pow2 = np.repeat(t.power,len(r)).reshape((-1,len(r)))
xir2 = power_to_corr_matrix(r,pow2,np.log(t.k.value))
print xir2/xir_fort
%timeit power_to_corr_matrix(r,pow2,np.log(t.k.value))
%timeit power_to_corr(r,t.power,np.log(t.k.value))
"""
Explanation: Ogata Method
End of explanation
"""
from halomod import halo_model
from halomod.fort.routines import hod_routines as fort
from scipy.integrate import trapz,simps
from halomod.twohalo_wrapper import twohalo_wrapper as thalo
h = halo_model.HaloModel(rnum=50,dlog10m=0.03)
def fortran_2halo():
u = h.profile.u(h.k, h.M , norm='m')
return thalo("ng_matched", True,
h.M.value, h.bias, h.n_tot,
h.dndm.value, np.log(h.k.value),
h._power_halo_centres.value, u, h.r.value, h.corr_mm_base,
h.mean_gal_den.value, h.delta_halo,
h.mean_density.value, h.nthreads_2halo)
from halomod import tools
def python_2halo():
### POWER PART
u = h.profile.u(h.k,h.M,norm="m")
if h.scale_dependent_bias is not None:
bias = np.outer(h.sd_bias.bias_scale(),h.bias)
else:
bias = h.bias
inst = NgMatched_(m=h.M,density=h.n_tot*h.dndm,
I=h.n_tot*h.dndm*u/h.mean_gal_den,
bias=bias,r=h.r,delta_halo=h.delta_halo,
mean_density=h.mean_density)
if hasattr(inst,"density_mod"):
__density_mod = inst.density_mod
else:
__density_mod = h.mean_gal_den
power_gg_2h= inst.integrate() * h._power_halo_centres
if len(power_gg_2h.shape)==2:
corr = tools.power_to_corr_ogata_matrix(power_gg_2h,h.k.value,h.r)
else:
corr = tools.power_to_corr_ogata(power_gg_2h,h.k.value,h.r)
print __density_mod/h.mean_gal_den
## modify by the new density
return (__density_mod/h.mean_gal_den)**2 * (1+corr)-1
"""
Explanation: Full 2-halo Term
End of explanation
"""
py = python_2halo()
fort = fortran_2halo()
fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={"xscale":"log"},gridspec_kw={"height_ratios":(2.5,1)})
ax[0].plot(h.r,py,label="python")
ax[0].plot(h.r,fort,label="fortran")
#ax[0].plot(r,anl,label="analytic")
ax[0].legend(loc=0)
ax[1].plot(h.r,py/fort)
"""
Explanation: Visual
End of explanation
"""
|
intel-analytics/BigDL | python/orca/colab-notebook/quickstart/keras_lenet_mnist.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Explanation: <a href="https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/orca/colab-notebook/quickstart/keras_lenet_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2016 The BigDL Authors.
End of explanation
"""
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
"""
Explanation: Environment Preparation
Install Java 8
Run the cell on Google Colab to install JDK 1.8.
Note: if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up on your computer.)
End of explanation
"""
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca
# Install python dependencies
# The tutorial below only supports TensorFlow 1.15
!pip install tensorflow==1.15.0 tensorflow-datasets==2.1.0
"""
Explanation: Install BigDL Orca
You can install the latest pre-release version using !pip install --pre --upgrade bigdl-orca.
End of explanation
"""
# import necessary libraries and modules
import argparse
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca.learn.tf.estimator import Estimator
from bigdl.orca import OrcaContext
"""
Explanation: Distributed Keras (v2.3) using Orca APIs
In this guide we will describe how to scale out Keras (v2.3) programs using Orca in 4 simple steps.
End of explanation
"""
OrcaContext.log_output = True # recommended to set it to True when running BigDL in Jupyter notebook (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cluster_mode="local", cores=4) # run in local mode
dataset_dir = "~/tensorflow_datasets"
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=2) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(cluster_mode="yarn-client", num_nodes=2, cores=2) # run on Hadoop YARN cluster
dataset_dir = "hdfs:///tensorflow_datasets"
"""
Explanation: Step 1: Init Orca Context
End of explanation
"""
from tensorflow import keras
model = keras.Sequential(
[keras.layers.Conv2D(20, kernel_size=(5, 5), strides=(1, 1), activation='tanh',
input_shape=(28, 28, 1), padding='valid'),
keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'),
keras.layers.Conv2D(50, kernel_size=(5, 5), strides=(1, 1), activation='tanh',
padding='valid'),
keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'),
keras.layers.Flatten(),
keras.layers.Dense(500, activation='tanh'),
keras.layers.Dense(10, activation='softmax'),
]
)
model.compile(optimizer=keras.optimizers.RMSprop(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
"""
Explanation: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on a Hadoop YARN cluster. To use tensorflow_datasets on HDFS, you should correctly set HADOOP_HOME, HADOOP_HDFS_HOME, LD_LIBRARY_PATH, etc. For more details, please refer to the TensorFlow documentation.
Step 2: Define the Model
You may define your model, loss and metrics in the same way as in any standard (single node) Keras program.
End of explanation
"""
import tensorflow as tf
import tensorflow_datasets as tfds
def preprocess(data):
data['image'] = tf.cast(data["image"], tf.float32) / 255.
return data['image'], data['label']
# get DataSet
mnist_train = tfds.load(name="mnist", split="train", data_dir=dataset_dir)
mnist_test = tfds.load(name="mnist", split="test", data_dir=dataset_dir)
mnist_train = mnist_train.map(preprocess)
mnist_test = mnist_test.map(preprocess)
"""
Explanation: Step 3: Define Train Dataset
You can define the dataset using standard tf.data.Dataset.
End of explanation
"""
from bigdl.orca.learn.tf.estimator import Estimator
est = Estimator.from_keras(keras_model=model)
"""
Explanation: Step 4: Fit with Orca Estimator
First, create an Estimator
End of explanation
"""
max_epoch = 1
est.fit(data=mnist_train,
batch_size=320,
epochs=max_epoch,
validation_data=mnist_test)
"""
Explanation: Next, fit the model using the Estimator
End of explanation
"""
# evaluate and print result
result = est.evaluate(mnist_test)
print(result)
est.save_keras_model("/tmp/mnist_keras.h5")
"""
Explanation: Finally, evaluate using the Estimator.
End of explanation
"""
# stop orca context when program finishes
stop_orca_context()
"""
Explanation: Now, the accuracy of this model has reached 97%.
End of explanation
"""
|
vlad17/vlad17.github.io | assets/2020-10-25-linear-degeneracy.ipynb | apache-2.0 | import numpy as np
%matplotlib inline
import scipy.linalg as sla
np.random.seed(1234)
def invdiag(X):
n, p = X.shape
assert p <= n
Q, R, P = sla.qr(X, pivoting=True, mode='economic')
# P is a permutation, so right mul selects columns
    # and left mul selects rows, but the indices
    # returned are column indices, so indexing is flipped.
# X P = Q R [p is a permutation matrix, thus unitary]
# X' X == P (X P)' (X P) P'
# == P R' R P'
# (X'X)^{-1} == P R^{-1} (R')^{-1} P' I
Pinv = np.empty_like(P)
Pinv[P] = np.arange(len(P), dtype=P.dtype)
XTXinv = sla.solve_triangular(
R,
sla.solve_triangular(
R,
np.eye(p)[P],
trans='T'
),
)[Pinv]
return np.diag(XTXinv)
X = np.random.normal(size=(100, 10))
print(np.linalg.norm(invdiag(X) - np.diag(sla.pinvh(X.T.dot(X)))))
def add1(X):
return np.hstack([np.ones((X.shape[0], 1)), X])
sigma = 5 # make the p-values spicy
n = 100
p = 10
beta0 = 1
centers = np.random.normal(size=p)
scales = np.random.gamma(sigma, size=p)
settings = [
('shift', centers, np.ones_like(scales)),
('scale', np.zeros_like(centers), scales),
('shift+scale', centers, scales),
]
for name, shift, scale in settings:
print(name)
beta = np.random.normal(size=p)
eps = np.random.normal(size=n) * sigma
X = np.random.normal(size=(n, p)) * scale + shift
y = beta0 + X.dot(beta) + eps
XX = add1(X)
beta_hat, rss_hat, rank, _ = np.linalg.lstsq(XX, y, rcond=None)
assert rank == p + 1, rank
var_hat = invdiag(XX) * rss_hat
XX = add1(X * scale - shift)
beta_tilde, rss_tilde, rank, _ = np.linalg.lstsq(XX, y, rcond=None)
assert rank == p + 1, rank
var_tilde = invdiag(XX) * rss_hat
print('rss diff', abs(rss_hat.item() - rss_tilde.item()))
print('se diff', np.linalg.norm(np.sqrt(var_hat[1:]) - scale * np.sqrt(var_tilde[1:])))
print('coef diff', np.linalg.norm(beta_hat[1:] / scale - beta_tilde[1:]))
print('intercept diff', beta_tilde[0] - beta_hat[0], 'coef sum', shift.dot(beta_hat[1:] / scale))
rss_hat, rss_tilde
"""
Explanation: Linear Degeneracy
This notebook comes from my Linear Regression Analysis notes.
In the ordinary least squares setting, we model our outputs $\mathbf{y}=X\boldsymbol\beta+\boldsymbol\varepsilon$ where $\boldsymbol\varepsilon\sim N(\mathbf{0}, \sigma^2 I)$, with $\boldsymbol\beta,\sigma^2$ unknown.
As a result, the OLS fit $\hat{\boldsymbol\beta}=(X^\top X)^{-1}X^\top \mathbf{y}$, with sampling distribution $\hat{\boldsymbol\beta}\sim N\left(\boldsymbol\beta,\sigma^2 (X^\top X)^{-1}\right)$ (a distribution with an unknown variance scaling factor which we must still Studentize), is sensitive to the stability of the inverse of the Gram $X^\top X$.
One interesting question is whether to scale and center inputs. For a provided design matrix $X$, we could consider centering it, by subtracting the mean of each column, i.e., $(I-\frac{1}{n}J)X=X-\mathbf{1}\overline{\mathbf{x}}^\top$ (numpy-like broadcasting, with $J=\mathbf{1}\mathbf{1}^\top$), or scaling each column by a diagonal matrix $N$ of $X$ column norms or $S$ of column standard deviations by a right-multiplication $XN$ or $XS$. We'll assume we're fitting a generalized linear model with an intercept.
In this document, we'll keep the intercept terms $\beta_0$ separate from coefficient terms.
There are several questions to ask:
How does standardization affect statistical inferences?
How does standardization affect numerical stability?
"How can we use standardization to aid interpretation?" is an interesting question people disagree on. I won't discuss it here (my view is that there's some "natural" parameterization that the experimenter selects, so that's what should be used). Note that this is usually the most important question when deciding on standardization. The goal of this notebook is to explore the computational effects of standardization, which are the parts that you usually don't need to worry about if calling a software package (until you do :) ).
After, we review other remedies for degeneracy.
Standardization and Inference
By rewriting the least squares objective, it's clear that any least squares solution $\tilde {\boldsymbol\beta}$ for centered covariates is isomorphic to the uncentered solution $\hat{\boldsymbol\beta}$, except for the intercept, which changes as $\tilde{\beta}_0=\hat{\beta}_0+\overline{\mathbf{x}}^\top \hat{\boldsymbol\beta}$. This includes the optimal solution, so centering won't change our coefficients. Using $\tilde X = (I-\frac{1}{n}J)X$:
$$
\left\|\mathbf{y}-X\hat{\boldsymbol\beta}-\hat\beta_0\mathbf{1}\right\|^2=\left\|\mathbf{y}-\left(\tilde{X} + \frac{1}{n}JX\right)\hat{\boldsymbol\beta}-\hat\beta_0\mathbf{1}\right\|^2=\left\|\mathbf{y}-\tilde{X}\hat{\boldsymbol\beta}-\mathbf{1}\left(\hat\beta_0+\overline{\mathbf{x}}^\top\hat{\boldsymbol\beta}\right)\right\|^2
$$
What's less obvious is why the p-values stay the same (for the coefficients, predictably; for a non-centered intercept its standard error increases since it's an extrapolation). Intuitively, the transformed standard errors are proportional to $\mathrm{diag}\left(\left(\begin{pmatrix}\mathbf{1}&\tilde X\end{pmatrix}^\top \begin{pmatrix}\mathbf 1&\tilde X\end{pmatrix}\right)^{-1}\right)$, and the first step in any matrix Gram-Schmidt-like (i.e., QR) inversion routine would be to project out the first column. Mathematically, the invariance (on all but the first row and first column of the covariance matrix) follows from idempotence $(I-\frac{1}{n}J)^2=I-\frac{1}{n}J$, orthogonality $(I-\frac{1}{n}J)\mathbf{1}=\mathbf{0}$, and the fact that $\tilde X = (I-\frac{1}{n}J)X$.
A similar isomorphism holds for $XS$, with corresponding solution isomorphism $S^{-1}\boldsymbol\beta$. Here, the standard errors are shifted by a factor of $S$, but so are the coefficients, so p-values stay the same.
End of explanation
"""
%reload_ext rpy2.ipython
np.random.seed(1234)
n = 100
x = np.random.uniform(size=n)
x.sort()
%%R -i x -i n -w 5 -h 5 --units in -r 50
set.seed(1)
y <- .2*(x-.5)+(x-.5)^2 + rnorm(n)*.1
A <- lm(y ~ x+I(x^2))
# regression works fine
plot(x,y); lines(x, predict(A), col='red')
X <- x + 1000
B <- lm(y ~ X+I(X^2))
# regression breaks
plot(X,y); lines(X, predict(B), col='blue')
s <- function(x) scale(x, center=TRUE, scale=FALSE)
C <- lm(y ~ s(X)+I(s(X^2)))
# regression saved
plot(s(X),y); lines(s(X), predict(C), col='green')
X = x + 1000
XX = np.column_stack([X, X ** 2])
np.linalg.cond(add1(XX))
np.linalg.cond(add1(XX - XX.mean(axis=0)))
"""
Explanation: Indeed, it's easy to see that any invertible linear transformation $G$ of our full design (includes scales) does not affect our objective or fit: $\mathbf{y}-XGG^{-1}\boldsymbol\beta=\mathbf{y}-X\boldsymbol\beta$. The p-values for fitted values $\hat{\boldsymbol\beta}$ likewise stay the same.
Usually, the intercept is not regularized. But it's clear that scaling would affect the regularization cost, since the norm of the scaled $S^{-1}\boldsymbol\beta$ can differ from the original norm of $\boldsymbol\beta$. Indeed, theory about inference with regularization typically works with $XN$, the norm-scaled matrix.
Standardization and Numerical Stability
Given that in the unregularized setting, linear transforms don't affect inference, if we're after the p-values and coefficients, should we standardize for stability purposes? In fact, why not consider arbitrary transforms $G$ such that $XG$ is well-conditioned?
Appendix 3B of Regression Diagnostics tackles this directly. It won't help to use arbitrary dense matrices $G$; see the appendix for an explanation of why any dense $G$ chosen such that $\kappa(XG)$ is controlled will itself have high $\kappa(G)$ (see the paragraph A More General Analysis). Thus deriving the original coefficients with $G^{-1}\boldsymbol\beta$ is no more stable than a direct solve.
As far as finding the coefficients $\boldsymbol\beta$ under our given parameterization, that's that, even from a computational perspective linear transformations, like scaling, can't help us.
But... this doesn't say anything about nonlinear transformations. In particular, centering is a coordinate-wise combination of two linear transformations (identity and mean removal), which together is not linear.
$$
\begin{pmatrix}\mathbf{1} & X\end{pmatrix}\mapsto \begin{pmatrix}\mathbf{1} & (I-\frac{1}{n}J) X\end{pmatrix}
$$
What's more is that the above is clearly a contraction, and it preserves column space. So centering always helps conditioning (how much depends on $X$), but only if you include an external intercept and no column of $X$ is constant.
A fun experiment from Stack Overflow shows that centering is essential.
End of explanation
"""
# create a tall random matrix A st A'A is rank n-1
n = 5
np.random.seed(1234)
A = np.random.randn(n, n)
Q, R = sla.qr(A)
for i in range(n):
R[i, i] = max(R[i, i], 1e-3)
R[-1, -1] = 0
A = Q.dot(R)
A = np.tile(A, (3, 1))
def perturb(X, eps=1e-5, d=None):
nsamples = 200
change = 0
iX = np.sqrt(invdiag(X))
for _ in range(nsamples):
XX = X.copy()
if d is None:
r = np.random.randn(X.shape[0])
else:
r = d.copy()
r /= sla.norm(r)
r *= eps
XX[:, np.random.choice(X.shape[1])] += r
change += sla.norm(iX - invdiag(XX))
return change / sla.norm(iX) / eps / nsamples
pA = perturb(A)
pB = perturb(A / sla.norm(A, axis=0))
np.log(pA), np.log(pB)
"""
Explanation: The condition numbers above don't lie re: centering's contractive properties.
What's more is that even though scaling will not help conditioning, it can help with stability of the standard errors. Recall for the $i$-th coefficient the standard error is $\hat\sigma v_i(X)^{1/2}=\hat\sigma\sqrt{\left[(X_+^\top X_+)^{-1}\right]_{ii}}$ where $X_+=\begin{pmatrix}\mathbf{1} & X\end{pmatrix}$ is the intercept-extended design and $n\hat\sigma^2$ is the residual sum of squares from the regression.
Belsley et al. (from Regression Diagnostics) show this through elasticity (as opposed to traditional perturbation analysis).
Consider the sensitivity of each variance estimator by viewing $v_i(X)$ as a function of the design.
The elasticity $\xi_{jk}^{(i)}(X)=\frac{\partial_{x_{jk}}v_i(X)}{v_i(X)/x_{jk}}$, which measures the instantaneous ratio of relative change in the output to the input $jk$-th entry of $X$, can be shown via matrix derivatives to be invariant to scaling. Coupled with another inequality, this shows that $|\xi_{jk}|\le 2 \kappa(XG)$ for any diagonal matrix $G$, so we may as well choose one that minimizes $\kappa(XG)$. This again plays no role in the actual $\boldsymbol\beta$ solution stiffness.
Finding a minimal $G$ for this is hard, but unit normalization (scaling each column by its $\ell_2$ norm) can be shown to be $\sqrt{p}$-optimal. This is reminiscent of the Jacobi (diagonal) preconditioner for solving linear systems of $X^\top X$ (see this preconditioning reading). Altogether, this seems important if you want to control elasticities, I guess. Both of the norms below are wildly large.
End of explanation
"""
n = 30
x1 = np.random.uniform(size=n)
slope = -.3
x2 = x1 * slope
x2 += np.random.randn(len(x2)) * np.std(x2) / 10
x1 -= x1.mean()
x2 -= x2.mean()
from scipy.spatial.transform import Rotation as R
r = R.from_rotvec(np.pi/2 * np.array([0, 0, 1]))
ox1, ox2, _ = r.apply((1, slope, 0))
from matplotlib import pyplot as plt
plt.scatter(x1, x2)
plt.scatter(ox1, ox2)
plt.xlabel('x1')
plt.ylabel('x2')
plt.gca().axis('equal')
plt.show()
X = np.block([[x1, ox1], [x2, ox2]]).T
np.linalg.cond(X), np.linalg.cond(X[:-1])
n = 30
x1, x2 = np.random.uniform(size=(2, n)) * .05
ox1, ox2 = (5, -5)
from matplotlib import pyplot as plt
plt.scatter(x1, x2)
plt.scatter(ox1, ox2)
plt.xlabel('x1')
plt.ylabel('x2')
plt.gca().axis('equal')
plt.show()
X = np.block([[x1, ox1], [x2, ox2]]).T
np.linalg.cond(X), np.linalg.cond(X[:-1])
"""
Explanation: Other discussion
Centering can destroy sparsity if you're not careful (though there are ways around this).
Other good resources from SO include this question and some related links, with more discussion on interaction terms and centering and other SO links.
Row Deletion
Outliers, in the sense of high-leverage points, can both hide and create degeneracy.
In cases of hiding degeneracy, this can result in poor fits with a sense of false security. Removing the outlier which obscures degeneracy means we can fit a more appropriate regression by dropping one of the columns (next section).
Similarly, outliers can create degeneracy as well. In this case they should simply be removed. Both examples are from Linear Regression Analysis 10.7.3.
End of explanation
"""
|
markvanheeswijk/kryptos | Kryptos.ipynb | mit | def rot(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", direction=1):
keyval = alphabet.find(key)
t = ""
for sc in s:
i = alphabet.find(sc)
t += alphabet[(i + keyval * direction) % len(alphabet)] if i > -1 else sc
return t
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Kryptos-Decoding-From-Scratch" data-toc-modified-id="Kryptos-Decoding-From-Scratch-1"><span class="toc-item-num">1 </span>Kryptos Decoding From Scratch</a></div><div class="lev2 toc-item"><a href="#Introduction" data-toc-modified-id="Introduction-11"><span class="toc-item-num">1.1 </span>Introduction</a></div><div class="lev2 toc-item"><a href="#References" data-toc-modified-id="References-12"><span class="toc-item-num">1.2 </span>References</a></div><div class="lev1 toc-item"><a href="#2-Easier-Pieces:-Caesar-meets-Vigenere" data-toc-modified-id="2-Easier-Pieces:-Caesar-meets-Vigenere-2"><span class="toc-item-num">2 </span>2 Easier Pieces: Caesar meets Vigenere</a></div><div class="lev2 toc-item"><a href="#Caesar-Cipher" data-toc-modified-id="Caesar-Cipher-21"><span class="toc-item-num">2.1 </span>Caesar Cipher</a></div><div class="lev3 toc-item"><a href="#Caesar-Cipher-Solution-#1:-Brute-force" data-toc-modified-id="Caesar-Cipher-Solution-#1:-Brute-force-211"><span class="toc-item-num">2.1.1 </span>Caesar Cipher Solution #1: Brute-force</a></div><div class="lev3 toc-item"><a href="#Caesar-Cipher-Solution-#2:-Frequency-Analysis" data-toc-modified-id="Caesar-Cipher-Solution-#2:-Frequency-Analysis-212"><span class="toc-item-num">2.1.2 </span>Caesar Cipher Solution #2: Frequency Analysis</a></div><div class="lev3 toc-item"><a href="#CHALLENGE:-aligning-frequency-distributions-to-determine-the-Caesar-key" data-toc-modified-id="CHALLENGE:-aligning-frequency-distributions-to-determine-the-Caesar-key-213"><span class="toc-item-num">2.1.3 </span>CHALLENGE: aligning frequency distributions to determine the Caesar key</a></div><div class="lev3 toc-item"><a href="#Caesar-Cipher-Solution-#3:-Chi-squared-Statistic" data-toc-modified-id="Caesar-Cipher-Solution-#3:-Chi-squared-Statistic-214"><span class="toc-item-num">2.1.4 </span>Caesar Cipher Solution #3: Chi-squared Statistic</a></div><div class="lev3 toc-item"><a 
href="#Caesar-Cipher-Solution-#4:-Maximum-likelihood" data-toc-modified-id="Caesar-Cipher-Solution-#4:-Maximum-likelihood-215"><span class="toc-item-num">2.1.5 </span>Caesar Cipher Solution #4: Maximum likelihood</a></div><div class="lev4 toc-item"><a href="#Computing-likelihood-based-on-observed-n-grams" data-toc-modified-id="Computing-likelihood-based-on-observed-n-grams-2151"><span class="toc-item-num">2.1.5.1 </span>Computing likelihood based on observed n-grams</a></div><div class="lev4 toc-item"><a href="#Computing-n-gram-statistics-for-a-language" data-toc-modified-id="Computing-n-gram-statistics-for-a-language-2152"><span class="toc-item-num">2.1.5.2 </span>Computing n-gram statistics for a language</a></div><div class="lev4 toc-item"><a href="#Cracking-Caesar-Cipher-using-Maximum-Likelihood" data-toc-modified-id="Cracking-Caesar-Cipher-using-Maximum-Likelihood-2153"><span class="toc-item-num">2.1.5.3 </span>Cracking Caesar Cipher using Maximum Likelihood</a></div><div class="lev2 toc-item"><a href="#Vigenere" data-toc-modified-id="Vigenere-22"><span class="toc-item-num">2.2 </span>Vigenere</a></div><div class="lev3 toc-item"><a href="#Vigenere-Solution-#0:-Manually-Cracking-a-Vigenere-Cipher" data-toc-modified-id="Vigenere-Solution-#0:-Manually-Cracking-a-Vigenere-Cipher-221"><span class="toc-item-num">2.2.1 </span>Vigenere Solution #0: Manually Cracking a Vigenere Cipher</a></div><div class="lev3 toc-item"><a href="#CHALLENGE:-Manually-Cracking-a-Vigenere-Cipher" data-toc-modified-id="CHALLENGE:-Manually-Cracking-a-Vigenere-Cipher-222"><span class="toc-item-num">2.2.2 </span>CHALLENGE: Manually Cracking a Vigenere Cipher</a></div><div class="lev3 toc-item"><a href="#Index-of-Coincidence-(IC)" data-toc-modified-id="Index-of-Coincidence-(IC)-223"><span class="toc-item-num">2.2.3 </span>Index of Coincidence (IC)</a></div><div class="lev3 toc-item"><a href="#Determining-Vigenere-Key-Length-using-IC" 
data-toc-modified-id="Determining-Vigenere-Key-Length-using-IC-224"><span class="toc-item-num">2.2.4 </span>Determining Vigenere Key Length using IC</a></div><div class="lev2 toc-item"><a href="#Vigenere-Solution-#1:-Chi-square-Criterion-for-Automatically-Determining-the-Best-Key" data-toc-modified-id="Vigenere-Solution-#1:-Chi-square-Criterion-for-Automatically-Determining-the-Best-Key-23"><span class="toc-item-num">2.3 </span>Vigenere Solution #1: Chi-square Criterion for Automatically Determining the Best Key</a></div><div class="lev3 toc-item"><a href="#CHALLENGE:-Using-Chi-square-criterion-to-decrypt-Vigenere" data-toc-modified-id="CHALLENGE:-Using-Chi-square-criterion-to-decrypt-Vigenere-231"><span class="toc-item-num">2.3.1 </span>CHALLENGE: Using Chi-square criterion to decrypt Vigenere</a></div><div class="lev2 toc-item"><a href="#Vigenere-Solution-#2:-Maximum-Likelihood,-Combined-with-a-Search-Procedure" data-toc-modified-id="Vigenere-Solution-#2:-Maximum-Likelihood,-Combined-with-a-Search-Procedure-24"><span class="toc-item-num">2.4 </span>Vigenere Solution #2: Maximum Likelihood, Combined with a Search Procedure</a></div><div class="lev2 toc-item"><a href="#Vigenere-Solution-#3:-Maximum-Likelihood,-Combined-with-Brute-Force" data-toc-modified-id="Vigenere-Solution-#3:-Maximum-Likelihood,-Combined-with-Brute-Force-25"><span class="toc-item-num">2.5 </span>Vigenere Solution #3: Maximum Likelihood, Combined with Brute Force</a></div><div class="lev1 toc-item"><a href="#Keyed-Caesar-Variants" data-toc-modified-id="Keyed-Caesar-Variants-3"><span class="toc-item-num">3 </span>Keyed Caesar Variants</a></div><div class="lev2 toc-item"><a href="#Keyed-Caesar-Variants" data-toc-modified-id="Keyed-Caesar-Variants-31"><span class="toc-item-num">3.1 </span>Keyed Caesar Variants</a></div><div class="lev2 toc-item"><a href="#Cryptanalysis---Keyed-Caesar" data-toc-modified-id="Cryptanalysis---Keyed-Caesar-32"><span class="toc-item-num">3.2 </span>Cryptanalysis - Keyed 
Caesar</a></div><div class="lev3 toc-item"><a href="#Effect-of-Keyed-Caesar-on-Frequency-Distributions" data-toc-modified-id="Effect-of-Keyed-Caesar-on-Frequency-Distributions-321"><span class="toc-item-num">3.2.1 </span>Effect of Keyed Caesar on Frequency Distributions</a></div><div class="lev3 toc-item"><a href="#Determine-Key-by-Aligning-Frequency-Distributions?" data-toc-modified-id="Determine-Key-by-Aligning-Frequency-Distributions?-322"><span class="toc-item-num">3.2.2 </span>Determine Key by Aligning Frequency Distributions?</a></div><div class="lev3 toc-item"><a href="#Keyed-Caesar-Solution-#1:-Maximum-Likelihood,-Combined-with-Search-Procedure" data-toc-modified-id="Keyed-Caesar-Solution-#1:-Maximum-Likelihood,-Combined-with-Search-Procedure-323"><span class="toc-item-num">3.2.3 </span>Keyed Caesar Solution #1: Maximum Likelihood, Combined with Search Procedure</a></div><div class="lev2 toc-item"><a href="#Cryptanalysis---Caesar-with-Keyed-Alphabet" data-toc-modified-id="Cryptanalysis---Caesar-with-Keyed-Alphabet-33"><span class="toc-item-num">3.3 </span>Cryptanalysis - Caesar with Keyed Alphabet</a></div><div class="lev3 toc-item"><a href="#Effect-of-Caesar-with-Keyed-Alphabet-on-Frequency-Distributions" data-toc-modified-id="Effect-of-Caesar-with-Keyed-Alphabet-on-Frequency-Distributions-331"><span class="toc-item-num">3.3.1 </span>Effect of Caesar with Keyed Alphabet on Frequency Distributions</a></div><div class="lev3 toc-item"><a href="#Determine-Key-by-Aligning-Frequency-Distributions?" 
data-toc-modified-id="Determine-Key-by-Aligning-Frequency-Distributions?-332"><span class="toc-item-num">3.3.2 </span>Determine Key by Aligning Frequency Distributions?</a></div><div class="lev3 toc-item"><a href="#Caesar-with-Keyed-Alphabet-Solution-#1:-Maximum-Likelihood,-Combined-with-Search-Procedure" data-toc-modified-id="Caesar-with-Keyed-Alphabet-Solution-#1:-Maximum-Likelihood,-Combined-with-Search-Procedure-333"><span class="toc-item-num">3.3.3 </span>Caesar with Keyed Alphabet Solution #1: Maximum Likelihood, Combined with Search Procedure</a></div><div class="lev1 toc-item"><a href="#Keyed-Vigenere" data-toc-modified-id="Keyed-Vigenere-4"><span class="toc-item-num">4 </span>Keyed Vigenere</a></div><div class="lev2 toc-item"><a href="#Cryptanalysis---Keyed-Vigenere" data-toc-modified-id="Cryptanalysis---Keyed-Vigenere-41"><span class="toc-item-num">4.1 </span>Cryptanalysis - Keyed Vigenere</a></div><div class="lev3 toc-item"><a href="#Determine-if-Plaintext" data-toc-modified-id="Determine-if-Plaintext-411"><span class="toc-item-num">4.1.1 </span>Determine if Plaintext</a></div><div class="lev3 toc-item"><a href="#Determine-Key-Length" data-toc-modified-id="Determine-Key-Length-412"><span class="toc-item-num">4.1.2 </span>Determine Key Length</a></div><div class="lev3 toc-item"><a href="#Frequency-analysis-of-subsequences" data-toc-modified-id="Frequency-analysis-of-subsequences-413"><span class="toc-item-num">4.1.3 </span>Frequency analysis of subsequences</a></div><div class="lev3 toc-item"><a href="#CHALLENGE:-Determine-Vigenere-Key-in-Special-Case-of-Keyed-Alphabet-Starting-with-Letter-E" data-toc-modified-id="CHALLENGE:-Determine-Vigenere-Key-in-Special-Case-of-Keyed-Alphabet-Starting-with-Letter-E-414"><span class="toc-item-num">4.1.4 </span>CHALLENGE: Determine Vigenere Key in Special Case of Keyed Alphabet Starting with Letter E</a></div><div class="lev3 toc-item"><a href="#Case-1:-Vigenere-Key-is-Known,-Alphabet-Key-Unknown" 
data-toc-modified-id="Case-1:-Vigenere-Key-is-Known,-Alphabet-Key-Unknown-415"><span class="toc-item-num">4.1.5 </span>Case 1: Vigenere Key is Known, Alphabet Key Unknown</a></div><div class="lev3 toc-item"><a href="#CHALLENGE:-A-Special-Case" data-toc-modified-id="CHALLENGE:-A-Special-Case-416"><span class="toc-item-num">4.1.6 </span>CHALLENGE: A Special Case</a></div><div class="lev3 toc-item"><a href="#Case-2:-Vigenere-Key-is-Unknown,-Alphabet-Key-is-Known" data-toc-modified-id="Case-2:-Vigenere-Key-is-Unknown,-Alphabet-Key-is-Known-417"><span class="toc-item-num">4.1.7 </span>Case 2: Vigenere Key is Unknown, Alphabet Key is Known</a></div><div class="lev1 toc-item"><a href="#Kryptos" data-toc-modified-id="Kryptos-5"><span class="toc-item-num">5 </span>Kryptos</a></div><div class="lev2 toc-item"><a href="#Prerequisites" data-toc-modified-id="Prerequisites-51"><span class="toc-item-num">5.1 </span>Prerequisites</a></div><div class="lev2 toc-item"><a href="#Kryptos---K1" data-toc-modified-id="Kryptos---K1-52"><span class="toc-item-num">5.2 </span>Kryptos - K1</a></div><div class="lev3 toc-item"><a href="#K1---Determine-if-plaintext" data-toc-modified-id="K1---Determine-if-plaintext-521"><span class="toc-item-num">5.2.1 </span>K1 - Determine if plaintext</a></div><div class="lev3 toc-item"><a href="#K1---Determine-Key-Length" data-toc-modified-id="K1---Determine-Key-Length-522"><span class="toc-item-num">5.2.2 </span>K1 - Determine Key Length</a></div><div class="lev3 toc-item"><a href="#K1---Frequency-Analysis-of-Subsequences-#1" data-toc-modified-id="K1---Frequency-Analysis-of-Subsequences-#1-523"><span class="toc-item-num">5.2.3 </span>K1 - Frequency Analysis of Subsequences #1</a></div><div class="lev3 toc-item"><a href="#K1---Frequency-Analysis-of-Subsequences-#2" data-toc-modified-id="K1---Frequency-Analysis-of-Subsequences-#2-524"><span class="toc-item-num">5.2.4 </span>K1 - Frequency Analysis of Subsequences #2</a></div><div class="lev3 toc-item"><a 
href="#K1---Vigenere-Key-is-Unknown,-Alphabet-Key-is-Known" data-toc-modified-id="K1---Vigenere-Key-is-Unknown,-Alphabet-Key-is-Known-525"><span class="toc-item-num">5.2.5 </span>K1 - Vigenere Key is Unknown, Alphabet Key is Known</a></div><div class="lev3 toc-item"><a href="#K1---Determining-Message-Boundaries" data-toc-modified-id="K1---Determining-Message-Boundaries-526"><span class="toc-item-num">5.2.6 </span>K1 - Determining Message Boundaries</a></div><div class="lev3 toc-item"><a href="#K1---Determining-Word-Boundaries" data-toc-modified-id="K1---Determining-Word-Boundaries-527"><span class="toc-item-num">5.2.7 </span>K1 - Determining Word Boundaries</a></div><div class="lev2 toc-item"><a href="#Kryptos---K2" data-toc-modified-id="Kryptos---K2-53"><span class="toc-item-num">5.3 </span>Kryptos - K2</a></div><div class="lev3 toc-item"><a href="#K2---Determine-if-Plaintext" data-toc-modified-id="K2---Determine-if-Plaintext-531"><span class="toc-item-num">5.3.1 </span>K2 - Determine if Plaintext</a></div><div class="lev3 toc-item"><a href="#K2---Determine-Key-Length" data-toc-modified-id="K2---Determine-Key-Length-532"><span class="toc-item-num">5.3.2 </span>K2 - Determine Key Length</a></div><div class="lev3 toc-item"><a href="#K2---Frequency-Analysis-of-Subsequences-#1" data-toc-modified-id="K2---Frequency-Analysis-of-Subsequences-#1-533"><span class="toc-item-num">5.3.3 </span>K2 - Frequency Analysis of Subsequences #1</a></div><div class="lev3 toc-item"><a href="#K2---Frequency-Analysis-of-Subsequences-#2" data-toc-modified-id="K2---Frequency-Analysis-of-Subsequences-#2-534"><span class="toc-item-num">5.3.4 </span>K2 - Frequency Analysis of Subsequences #2</a></div><div class="lev3 toc-item"><a href="#K2---Frequency-Analysis-of-Subsequences-#3" data-toc-modified-id="K2---Frequency-Analysis-of-Subsequences-#3-535"><span class="toc-item-num">5.3.5 </span>K2 - Frequency Analysis of Subsequences #3</a></div><div class="lev3 toc-item"><a 
href="#K2---Vigenere-Key-is-Unknown,-Alphabet-Key-is-Known" data-toc-modified-id="K2---Vigenere-Key-is-Unknown,-Alphabet-Key-is-Known-536"><span class="toc-item-num">5.3.6 </span>K2 - Vigenere Key is Unknown, Alphabet Key is Known</a></div><div class="lev3 toc-item"><a href="#K2---Determining-Message-Boundaries" data-toc-modified-id="K2---Determining-Message-Boundaries-537"><span class="toc-item-num">5.3.7 </span>K2 - Determining Message Boundaries</a></div><div class="lev3 toc-item"><a href="#K2---Determining-Word-Boundaries" data-toc-modified-id="K2---Determining-Word-Boundaries-538"><span class="toc-item-num">5.3.8 </span>K2 - Determining Word Boundaries</a></div><div class="lev3 toc-item"><a href="#K2---Correction-to-Ciphertext" data-toc-modified-id="K2---Correction-to-Ciphertext-539"><span class="toc-item-num">5.3.9 </span>K2 - Correction to Ciphertext</a></div>
# Kryptos Decoding From Scratch
For the latest version of this document see https://github.com/markvanheeswijk/kryptos
## Introduction
This document describes how to get started with decoding Kryptos from scratch, gradually going all the way from ROT-13 to decoding Kryptos, discussing the relevant cryptanalysis principles and code along the way. Take your time to really understand each approach and its limitations. There will be some challenge ciphers to solve along the way.
First, we will look at the **Caesar cipher** and 4 possible ways to attack it:
- solution #1: **brute-force** all possible keys
- solution #2: **frequency analysis** showing the effect of applying a Caesar cipher
- solution #3: **chi-square criterion** for automatically determining the best key based on alignment of the frequency plots.
- solution #4: **maximum likelihood** for automatically determining the best key based on how likely it is that the decrypted ciphertext is a plaintext in a certain language.
Then we will move on to the **Vigenere cipher**, which is very similar to the Caesar cipher, except that the consecutive shifts of the plaintext characters are determined by a keyword. If the key has length N, then each Nth character of the plaintext will be shifted by the same shift, determined by the corresponding character in the key. Therefore, we can re-use many of the same principles which we already saw for solving the Caesar cipher.
- pre-requisite #1: using **index of coincidence** for determining the **key length**, say N
- (pre-requisite #2: using **index of coincidence** for determining the **language** of the message and whether it might be a transposition cipher)
- solution #0: **manually** cracking the Vigenere cipher
- solution #1: **chi-square criterion** for automatically determining the best keys for each of the N Caesar ciphers
- solution #2: **maximum likelihood, combined with a search procedure** (in this case a genetic algorithm) for determining the best key
- solution #3: **maximum likelihood, combined with brute-force** for determining the best keyword in a dictionary/wordlist
Finally, we will look at the **Keyed Caesar** and **Keyed Vigenere** variants, which add an extra complication in the form of a keyed alphabet:
- Keyed Caesar cipher and variants
- solution #1: **maximum likelihood, combined with a search procedure** to solve the substitution defined by the cipher
- Keyed Vigenere cipher
- solution #1: **recognize special case to guess vigenere key**, use **maximum likelihood, combined with brute-force** to find the alphabet key.
- solution #2: **know the alphabet key, like in Kryptos**, use **maximum likelihood, combined with a search procedure** to find the vigenere key.
To conclude, we will use the principles discussed to **solve Kryptos K1 and K2, using nothing but the 4 panels of Kryptos**.
## References
- **http://practicalcryptography.com/
(highly recommended for getting started with python code for cryptanalysis of various ciphers, and the website from which a lot of the code here is adapted!)**
- https://en.wikipedia.org/wiki/Caesar_cipher
- https://en.wikipedia.org/wiki/Substitution_cipher
- https://en.wikipedia.org/wiki/Vigen%C3%A8re_cipher
- https://en.wikipedia.org/wiki/Index_of_coincidence
- http://www.simonsingh.net/The_Black_Chamber/vigenere_cracking_tool.html
- http://practicalcryptography.com/cryptanalysis/stochastic-searching/cryptanalysis-vigenere-cipher/
- http://rumkin.com/tools/cipher/
(for more references, see below)
# 2 Easier Pieces: Caesar meets Vigenere
## Caesar Cipher
The Caesar cipher is an example of a substitution cipher, where each symbol in the plaintext is replaced by another symbol through a fixed mapping. In the case of the Caesar cipher, the target alphabet is a shifted version of the plaintext alphabet. A particular instance of this is ROT-13, where the target alphabet is shifted by 13 positions
ABCDEFGHIJKLMNOPQRSTUVWXYZ
becomes
NOPQRSTUVWXYZABCDEFGHIJKLM
and A gets replaced by N, B by O, etc.
Now, let's define a simple function for encrypting and decrypting using a Caesar cipher:
End of explanation
"""
ptext = '"I HEAR AND I FORGET. I SEE AND I REMEMBER. I DO AND I UNDERSTAND." --CONFUSIUS'
ctext = rot(ptext, 'N')
ctext
"""
Explanation: It takes a letter as key, finds its position in the alphabet, and shifts / rotates the alphabet by that amount. Symbols that are not in the alphabet are unaffected. Now, define some text and encrypt it using the function:
End of explanation
"""
decrypted_ctext = rot(ctext, 'N', direction = -1)
decrypted_ctext
"""
Explanation: Since we know the key we can decrypt it:
End of explanation
"""
alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
for key in alphabet:
decrypted_ctext = rot(ctext, key, direction = -1)
print "%s:\t%s" % (key, decrypted_ctext)
"""
Explanation: Caesar Cipher Solution #1: Brute-force
However, what if we do not know the key? One option would be to brute-force: try all possible keys and just read off the solution. There are only 26 possibilities after all.
End of explanation
"""
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
plt.rcParams["figure.figsize"] = [12,4]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# determine letter frequency in plaintext, and plot
ptext_freq = [ptext.count(c)/len(ptext) for c in alphabet]
pfig = plt.figure()
plt.bar(range(len(alphabet)), ptext_freq, tick_label = list(alphabet), align = 'center', color = 'b')
plt.title('plaintext letter frequency')
# determine letter frequency in ciphertext, and plot
ctext_freq = [ctext.count(c)/len(ctext) for c in alphabet]
cfig = plt.figure()
plt.bar(range(len(alphabet)), ctext_freq, tick_label = list(alphabet), align = 'center', color = 'r')
plt.title('ciphertext letter frequency')
"""
Explanation: Caesar Cipher Solution #2: Frequency Analysis
Alternatively, statistics of the text could be used to find the most likely key. Let's look at the frequency of the letters and see what happens when you encrypt / decrypt with Caesar cipher.
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def rotate(l, n):
return l[n:] + l[:n]
@interact(key_i = (0,25,1))
def plot_graphs(key_i = 0):
# decrypt ciphertext using this key
key = alphabet[key_i]
decrypted_ctext = rot(ctext, key, direction = -1)
# determine letter frequency in plaintext, and plot # FIXME: base on separate text
ptext_freq = np.array([ptext.count(c)/len(ptext) for c in alphabet])
pfig = plt.subplot(2,1,1)
plt.bar(range(len(alphabet)), ptext_freq, tick_label = list(alphabet), align = 'center', color = 'b')
plt.title('blue = plaintext letter frequency, green = decrypted ciphertext letter frequency')
# determine letter frequency in ciphertext, and plot
ctext_freq = np.array([decrypted_ctext.count(c)/len(decrypted_ctext) for c in alphabet])
cfig = plt.subplot(2,1,2)
plt.bar(range(len(alphabet)), ctext_freq, tick_label = rotate(list(alphabet), key_i), align = 'center', color = 'g')
plt.xlabel("key = %s" % key)
plt.show()
print decrypted_ctext
"""
Explanation: From these plots, it can clearly be seen that the Caesar cipher shifts the frequencies of the letters (which is what we would expect, knowing how Caesar works). Finding the right key now corresponds to finding that shift, which aligns the frequency plots as well as possible. This can be done in two ways:
- by hand
- automatically, using e.g. some criterion to measure how well the plots match.
In the next sections, we'll look at ways to find the alignment automatically. For now, let's determine the key manually: use the slider to align the frequency plots.
Note: normally you would not have the frequency plot of the plaintext of course, and you would instead use the frequencies based on some sufficiently large text in the same language.
End of explanation
"""
import numpy as np
# english letter frequencies, estimated from small sample text # FIXME: replace by better estimates
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
def chi_square(f, g, N):
    chi2 = N * np.sum(np.array([(ff-gg)**2/gg for (ff,gg) in zip(f,g)]))
return chi2
alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
results = []
for key in alphabet:
decrypted_ctext = rot(ctext, key, direction = -1)
f = np.array([decrypted_ctext.count(c) for c in alphabet])
N = np.sum(f) # note: not counting the non-alphabet characters here
f = f / N
chi2_decrypted_ctext = chi_square(f, g_english, N)
results.append((chi2_decrypted_ctext, key, decrypted_ctext))
print "%s:\t%f\t%s" % (key, chi2_decrypted_ctext, decrypted_ctext)
print "\nthe best key and plaintext:\n %s" % repr(min(results))
"""
Explanation: <div class="alert-warning">
<h3>CHALLENGE: aligning frequency distributions to determine the Caesar key</h3>
<br/>
Use the slider above to align the frequency distributions and determine the right key.
</div>
Caesar Cipher Solution #3: Chi-squared Statistic
We just saw that finding the right key consists of aligning the letter-frequency distributions. If we can measure how well aligned the distributions are, it can be done automatically. This is where the Chi-squared Statistic comes in: it measures how different an observed distribution is from an expected distribution:
$$
\chi^2(O,E) = \sum_{i=A}^Z \frac{(O_i - E_i)^2}{E_i}
$$
where $O_i$ and $E_i$ are the observed and expected numbers of character $i$ in a text. It can also be expressed in terms of frequencies/probabilities:
$$
\chi^2 = N \sum_{i=A}^Z \frac{(O_i/N - p_i)^2}{p_i}
= N \sum_{i=A}^Z \frac{(f_i - p_i)^2}{p_i}
$$
where $f_i$ and $p_i$ are the observed and expected relative frequencies of the characters, and $N$ is the length of the text.
For more details see:
- http://practicalcryptography.com/cryptanalysis/text-characterisation/chi-squared-statistic/
- https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
Now, let's use it to find the key:
End of explanation
"""
import re
def find_ngrams(text, n):
text = re.sub(r'[^A-Z]', r'', text.upper()) #remove everything that is not A-Z
return zip(*[text[i:] for i in range(n)]) #determine ngrams
"""
Explanation: Caesar Cipher Solution #4: Maximum likelihood
TL;DR: for a given decoding, estimate how much like English (or another language) it is. Best key is the one that gives the decoding that is most like English (or another language)...
The final way to solve it is to compute, for each possible key, the likelihood that the decrypted text is plaintext in a particular language. This is of course unnecessary for something as simple as the Caesar cipher, but now think of a cipher where you would have thousands or millions of possible keys...
For now, let's focus on understanding how it works. Spoiler: in a later section, this approach will be used to automatically determine the key for Kryptos and decrypt the K1 and K2 parts of it.
In the following we will assume this to be English, but it could just as well be a different language with different statistics. This is something that you should decide or guess from context.
Computing likelihood based on observed n-grams
In order to measure how much like English a certain string is, we need to compare its properties with strings from the English language. The criterion used here will be so-called n-gram statistics, which, in short, measure for each sequence of n symbols the frequency / probability with which it appears in English text.
Then, the likelihood that the decrypted ciphertext is an English text is the joint probability of observing its ngrams in English. Assuming the observations are independent and identically distributed, the joint probability of the observed ngrams in the decrypted ciphertext can be computed as follows:
$$
\mathcal{L}(English \mid decrypt(ctext, key)) = p(decrypt(ctext, key) \mid English) = \prod_i p(ngram_i \mid English)
$$
where $ngram_i$ is the $i^{th}$ sequence of $n$ letters in the decrypted ciphertext. For example, in case $decrypt(ctext, key) = \texttt{ANEXAMPLETEXT}$ and using 3-grams, the observed 3-grams are
$$
\texttt{ANE, NEX, EXA, XAM, AMP, MPL, PLE, LET, ETE, TEX, EXT},
$$
and therefore the likelihood that it is an English text can be computed as
$$
\begin{align}
\mathcal{L}(English \mid \texttt{ANEXAMPLETEXT}) &= p(\texttt{ANEXAMPLETEXT} \mid English) \\
&= p(\texttt{ANE} \mid English) \cdot p(\texttt{NEX} \mid English) \cdot \ldots \cdot p(\texttt{TEX} \mid English) \cdot p(\texttt{EXT} \mid English).
\end{align}
$$
Since the numbers involved typically get very small, people often work with the log-probabilities instead:
$$
\begin{align}
\log \mathcal{L}(English \mid decrypt(ctext, key)) &= \log \prod_i p(ngram_i \mid English)\\
&= \sum_i \log p(ngram_i \mid English)
\end{align}
$$
This so-called log-likelihood that the decrypted ctext is English text (i.e. $\log \mathcal{L}(English \mid decrypt(ctext, key))$) can now be used as a criterion to determine the best key. However, we need to estimate the probabilities of certain ngrams appearing in the English language (i.e. all $\log p(ngram_i \mid English)$). For this, we analyze some reference text (the longer the better).
Computing n-gram statistics for a language
Let's define a function that takes in a text, and returns its ngrams.
End of explanation
"""
def compute_ngram_statistics(infile, outfile, n):
# read in file, and remove newlines
s = ''
print "reading %s..." % infile
with open(infile) as f:
s += " ".join(line.strip() for line in f)
print "finding and counting %d-grams..." % n
# compute ngrams
qgs = find_ngrams(s, n)
# count them
    d = {}
    for qg in qgs:
        ngram = "".join(qg)
        d[ngram] = d.get(ngram, 0) + 1
# convert to sorted list and write to file
print "writing results to %s..." % outfile
qg_list = sorted(list(d.items()), key=lambda x: x[1], reverse=True)
    with open(outfile, 'w') as f:
        for t in qg_list:
            f.write("%s %d\n" % t)
print "done!"
compute_ngram_statistics('ngrams/gutenberg_sherlock.txt', 'ngrams/en_sherlock_1grams', 1)
compute_ngram_statistics('ngrams/gutenberg_sherlock.txt', 'ngrams/en_sherlock_2grams', 2)
compute_ngram_statistics('ngrams/gutenberg_sherlock.txt', 'ngrams/en_sherlock_3grams', 3)
compute_ngram_statistics('ngrams/gutenberg_sherlock.txt', 'ngrams/en_sherlock_4grams', 4)
"""
Explanation: The above function takes a text and a number n and returns the ngrams of that text as a list of tuples:
- text[i:] for i in range(3) would be
- $\texttt{ANEXAMPLETEXT}$
- $\texttt{NEXAMPLETEXT}$
- $\texttt{EXAMPLETEXT}$
- `*[...]` unpacks these lists as individual arguments to the `zip()` function
- `zip()` zips the lists together and returns a list of tuples of the 1st entries of the lists, the 2nd entries, etc., until the shortest list runs out:
- $\texttt{(A,N,E)}$
- $\texttt{(N,E,X)}$
- ...
- $\texttt{(E,X,T)}$
Now, using this function, we can easily create a file of ngrams and their counts in some reference text. These statistics are later used to estimate the likelihood that another string is from the same language.
End of explanation
"""
from ngram_score import ngram_score
fitness = ngram_score('ngrams/en_sherlock_1grams') # load our ngram statistics
alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
results = []
for key in alphabet:
decrypted_ctext = rot(ctext, key, direction = -1)
ll_decrypted_ctext = fitness.score(decrypted_ctext)
results.append((ll_decrypted_ctext, key, decrypted_ctext))
print "%s:\t%f\t%s" % (key, ll_decrypted_ctext, decrypted_ctext)
print "\nthe most likely key and plaintext:\n %s" % repr(max(results))
"""
Explanation: Cracking Caesar Cipher using Maximum Likelihood
Finally, let's crack Caesar using this. We use function ngram_score (source: practicalcryptography.com) for computing the log-likelihood that a text is a particular language, given a file with ngram counts for that language.
End of explanation
"""
def vigenere(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", direction=1):
t = []
key_i = 0
for i in range(len(s)):
if s[i] in alphabet: # only process symbols from the specified alphabet
t.append(rot(s[i],key[key_i], alphabet, direction))
key_i = (key_i + 1) % len(key)
else:
t.append(s[i])
return "".join(t)
"""
Explanation: Perhaps a bit underwhelming to use this for cracking the Caesar cipher. However, now we have a useful tool in hand that we can use for other more complicated ciphers as well, like the Vigenere cipher which we will now look at.
Vigenere
Now we will move on to the Vigenere cipher, which is very similar to the Caesar cipher, except that the consecutive shifts of the plaintext characters are determined by a keyword. If the key has length $N$, then each $N^{th}$ character of the plaintext will be shifted by the same shift, determined by the corresponding character in the key. This relation of Vigenere being N interleaved Caesar ciphers is also visible from the code:
End of explanation
"""
ptext = '"I HEAR AND I FORGET. I SEE AND I REMEMBER. I DO AND I UNDERSTAND." --CONFUSIUS'
ctext = rot(ptext, 'N')
ctext
"""
Explanation: With Caesar we had:
End of explanation
"""
ptext = '"I HEAR AND I FORGET. I SEE AND I REMEMBER. I DO AND I UNDERSTAND." --CONFUSIUS'
ctext = vigenere(ptext, 'NO')
ctext
"""
Explanation: which can be seen as:
"I HEAR AND I FORGET. I SEE AND I REMEMBER. I DO AND I UNDERSTAND." --CONFUSIUS
N NNNN NNN N NNNNNN N NNN NNN N NNNNNNNN N NN NNN N NNNNNNNNNN NNNNNNNNN
------------------------------------------------------------------------------
"V URNE NAQ V SBETRG. V FRR NAQ V ERZRZORE. V QB NAQ V HAQREFGNAQ." --PBASHFVHF
Whereas, for Vigenere we have:
End of explanation
"""
from __future__ import division
import numpy as np
# computes the IC for a given string
def IC(s, alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
s_filtered = filter(s, alphabet) # we only care about the letters in our alphabet
ic = 0
for c in alphabet:
c_count = s_filtered.count(c)
N = len(s_filtered)
if c_count > 1:
ic += (c_count * (c_count - 1)) / (N * (N - 1) / len(alphabet))
return ic
# helper function to filter out non-alphabet characters
def filter(s, alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
return "".join([c for c in s if c in alphabet])
# computes the avg IC of subsequences of each n'th character
def mean_IC(s, n, alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
s_filtered = filter(s, alphabet) # we only care about the letters in our alphabet
s_filtered_subseq = [s_filtered[i::n] for i in range(n)]
ic = []
for i in range(n):
ic.append(IC(s_filtered_subseq[i]))
return np.mean(ic)
ctext = "VYCHVUYPESJMZCJZTXNOOEFSXMBQJTTNQAXWKBWTPDDKTUODCOPGJFNTMOPRLACBZGTWEAEEEOKWAKCXCHVOATLGFMVYZRWBNGHPQUCDRKSGXSGWZRZWMURLSTLCHLZWBAECCIMEESTLLPWVNCCMYLELPKAAPTCZKYCTZQOIKGICRMEQTSPWMGHKFTYOHRDCUBAQEQWTVRBPCFKGPWGGFEQTZFSDWDVCCLOYVPBFWGPVFPLYRTMCWVSDCCIHSBLGCLPWTVKPBNVNRLAVWVPQTOEACSYJIUVVPBXSFARC"
from matplotlib import pyplot as plt
mean_ic_n = []
for n in range(1,32):
mean_ic = mean_IC(ctext.upper(), n)
mean_ic_n.append(mean_ic)
print "%2d %02f" % (n, mean_ic)
plt.bar(range(1,len(mean_ic_n)+1), mean_ic_n, align = 'center')
plt.xlabel("key length")
plt.ylabel("average IC")
"""
Explanation: which can be seen as:
"I HEAR AND I FORGET. I SEE AND I REMEMBER. I DO AND I UNDERSTAND." --CONFUSIUS
N ONON ONO N ONONON O NON ONO N ONONONON O NO NON O NONONONONO NONONONON
------------------------------------------------------------------------------
"V VROE OAR V TBFTSG. W FSR OAR V FRARAOSE. W QC NBQ W HBQSEGGOAR." --PCATHGVIF
Vigenere Solution #0: Manually Cracking a Vigenere Cipher
Since Vigenere is so similar to the Caesar cipher, we can re-use many of the principles we already saw for solving the Caesar cipher. However, before we can attack the Vigenere cipher in the same way, we need to guess the length of the key. Again, we can use frequency distributions for this, but now we look at the frequency distribution of every $N^{th}$ character, since each $N^{th}$ character is shifted by the same key character. In summary, we are going to try different key lengths and see for which key length the $N$ different frequency distributions look most like shifted English distributions. The technical way to measure this is through the Index of Coincidence (IC), which characterizes a piece of text by the likelihood of drawing a pair of identical letters at random from that text.
<div class="alert-warning">
<h3>CHALLENGE: Manually Cracking a Vigenere Cipher</h3>
<br/>
Before moving on to the IC, an alternative more visual approach for determining key length is used on http://www.simonsingh.net/The_Black_Chamber/vigenere_cracking_tool.html, which finds common letter combinations and how far they are apart in the text. Since these **distances between common letter combinations are likely a multiple of the key length**, investigating the common factors of these distances will give an indication of the likely key length.
<br/><br/>
Use the vigenere cracking tool at the above site to crack the following vigenere ciphertext:
<pre>
YFMRFDYCYRBEEXEBKRTTKMBRYJKEQNEHXVLXHDMRIRBEEGLNWVKXDUGRSVZIVCUNWVWVLSEAJIMXDNLRYTWRVUSGXFNXKQAYUYIFHFWENKBIQAUGYNMRWKSVCKQQHEIAIZNJHDEAYIWAVQAPMRTTKMBRYJPMIFEQHPKPLOAYQPBSWTEYJWBGRYPNWVLXRFHRUIMZLAUFFCXLDNEGHFZVHEPBSUQRJFOGMVBAHZTLXZFTRESVGCMEHEAEHZXLHDSGIZNJHDEAYGWMQFSVSKPIHZCEDGBMRZPETTMWVFHRHZXLHDUFJJIHLRFRWVVXDXPUFSMXIDOZTEMSIFHRWFEWKQAYUYIFHFUFJUIXHMCUUFQRWPECJELWRZAEJGMEWUNTPVGARDD
</pre>
Note how similar this approach is to the principles used to crack the Caesar cipher.
</div>
Index of Coincidence (IC)
As mentioned, the technical way to measure how much a text looks like a (possibly shifted) English letter frequency distribution is through the Index of Coincidence (IC), which characterizes a piece of text by the likelihood of drawing (without replacement) a pair of identical letters at random from that text:
$$
IC = \sum_{i=1}^c \big( \frac{n_i}{N} \times \frac{n_i-1}{N-1} \big)
$$
where $n_i$ is the count of the $i^{th}$ character in the alphabet, $c$ is the number of characters in the alphabet and $N$ is the length of the text.
Each language has its own typical IC, and for N large enough you would get:
$$
IC = \sum_{i=1}^c f_i^2
$$
where $f_i$ is the relative frequency of the $i^{th}$ character in the alphabet of that language. As you can see from the formula, a nice property of the IC is that it is not affected by a substitution cipher. Therefore, it can also be used to determine the language of the plaintext if you are only given the ciphertext of a substitution cipher, or whether you are even dealing with a substitution cipher at all.
For more details, see:
- http://practicalcryptography.com/cryptanalysis/text-characterisation/index-coincidence/
- https://en.wikipedia.org/wiki/Index_of_coincidence
- http://www.cs.mtu.edu/~shene/NSF-4/Tutorial/VIG/Vig-IOC.html
Determining Vigenere Key Length using IC
The IC can be used to determine the most likely key length, using the fact that it is higher for plaintext (or plaintext under a particular substitution) than for random text. To illustrate how this can be used for determining the key length, consider this as a representation of a ciphertext encrypted with a key of length 5:
ABCDEABCDEABCDEABCDEABCDEABCDEABCDE
where A represents a ciphertext character encrypted with the first character of the key, B the second, etc.
Not knowing the key length, we could try a key length of 3 or 4 and obtain:
supposed key length=3
ABC
DEA
BCD
EAB
CDE
ABC
DEA
BCD
EAB
CDE
ABC
DE
and
supposed key length=4
ABCD
EABC
DEAB
CDEA
BCDE
ABCD
EABC
DEAB
CDE
It can be seen that in each column the characters are encrypted using different letters of the key. Therefore the ciphertext characters in that column will be quite random and the IC will be low. However, when trying key length 5, we get
supposed key length=5
ABCDE
ABCDE
ABCDE
ABCDE
ABCDE
ABCDE
ABCDE
Each column will now correspond to a single substitution / Caesar cipher determined by that key character, and the IC will be higher. Looking at the average IC of all columns will indicate the key length.
For more details, see:
- http://practicalcryptography.com/cryptanalysis/stochastic-searching/cryptanalysis-vigenere-cipher/
Now, suppose we have the following ciphertext:
VYCHVUYPESJMZCJZTXNOOEFSXMBQJTTNQAXWKBWTPDDKTUODCOPGJFNTMOPRLACBZGTWEAEEEOKWAKCXCHVOATLGFMVYZRWBNGHPQUCDRKSGXSGWZRZWMURLSTLCHLZWBAECCIMEESTLLPWVNCCMYLELPKAAPTCZKYCTZQOIKGICRMEQTSPWMGHKFTYOHRDCUBAQEQWTVRBPCFKGPWGGFEQTZFSDWDVCCLOYVPBFWGPVFPLYRTMCWVSDCCIHSBLGCLPWTVKPBNVNRLAVWVPQTOEACSYJIUVVPBXSFARC
Let's try different key lengths and determine the average IC for each of them.
End of explanation
"""
key_length = 13
for i in range(key_length):
ctext_i = ctext[i::key_length]
alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
chi2_k = []
for key in alphabet:
decrypted_ctext_i_k = rot(ctext_i, key, direction = -1)
f = np.array([decrypted_ctext_i_k.count(c) for c in alphabet])
N = np.sum(f) # note: not counting the non-alphabet characters here
f = f / N
chi2_decrypted_ctext_i_k = chi_square(f, g_english, N)
chi2_k.append((chi2_decrypted_ctext_i_k, key, decrypted_ctext_i_k))
print "%s:\t%f\t%s" % (key, chi2_decrypted_ctext_i_k, decrypted_ctext_i_k)
print "the best key and plaintext:\n %s\n" % repr(min(chi2_k))
"""
Explanation: Given the peaks at key lengths of 13 and 26, it can be concluded that the key is likely 13 characters long.
Vigenere Solution #1: Chi-square Criterion for Automatically Determining the Best Key
Now we know that the key is likely 13 characters long, we can determine the most likely shifts for each of the 13 Caesar ciphers using the Chi-square criterion.
End of explanation
"""
vigenere(ctext, 'OCYPTANALYSIS', direction = -1)
"""
Explanation: According to the Chi-square analysis the key is OCYPTANALYSIS.
End of explanation
"""
vigenere(ctext, 'CRYPTANALYSIS', direction = -1)
"""
Explanation: Pretty close, but no cigar. Let's try CRYPTANALYSIS.
End of explanation
"""
# keep a list of the N best things we have seen, discard anything else
# this class can also be imported using 'import nbest' from a separate file nbest.py
class nbest(object):
def __init__(self,N=1000):
self.store = []
self.N = N
def add(self,item):
self.store.append(item)
self.store.sort(reverse=True)
self.store = self.store[:self.N]
def __getitem__(self,k):
return self.store[k]
def __len__(self):
return len(self.store)
ctext = "VYCHVUYPESJMZCJZTXNOOEFSXMBQJTTNQAXWKBWTPDDKTUODCOPGJFNTMOPRLACBZGTWEAEEEOKWAKCXCHVOATLGFMVYZRWBNGHPQUCDRKSGXSGWZRZWMURLSTLCHLZWBAECCIMEESTLLPWVNCCMYLELPKAAPTCZKYCTZQOIKGICRMEQTSPWMGHKFTYOHRDCUBAQEQWTVRBPCFKGPWGGFEQTZFSDWDVCCLOYVPBFWGPVFPLYRTMCWVSDCCIHSBLGCLPWTVKPBNVNRLAVWVPQTOEACSYJIUVVPBXSFARC"
"""
Explanation: <div class="alert-warning">
<h3>CHALLENGE: Using Chi-square criterion to decrypt Vigenere</h3>
<br/>
<b>Using the techniques described above, try deciphering these two messages:</b>
<pre>
TZKHT BTWJN BKGDP GVFVO HBTWA ZFFZP JVMJT VTHAV VAMFB BSBKU
JBUKL BDOKR MWZLH RHFFH WLEGF GISTR WBDSL VOHVY UQSWN PFOXR
WKKAN YBBXY NGHAR TJBZN BQZUE VUGUR MQOST VPBFA WWHWM OFFKU
AMQFI AFHVR WVUFE GZHYR AMTSS OFSEZ DKTKP RDICN CQAFA OPIKG
QMYWA AJBXB OBTWF BVFVA LZKHT REAVF BISWS VUPVN AAAXT UFTFH
AUQKS NHSJG QMRAR FUHYE NMTSV RCSVA BWXNE QXVZY NBTWF BVFKU
VMEKA TFFVZ JQZKA FPBVB OBTWM BTHWN VWGKU ATCCI NLOGD RTWEG
QMIGR YEHYR BKGDP GVFVP XVFAN HFGKB KMAXI AUSIR BBFGC EZDKN
WIXQS GTPFG QIYST RVFRA MXDGF RTGZB WIXOH BBFVN CBQEP GJBXG
XLQUI CISIG QMRGU EUVJR LBUGN GISRE CQELH NTGFS JZSAV ROHNB
LTGWS GPHYV BAQUT VPB
</pre>
<pre>
more difficult to break (due to the text not being very typical?):
IUOCA LGTAM DAQFN ISJTN MZGWS UZCBH GSSJT NM?LZ EYVKG LLZEE
BJVPK EAGOW VQUXI EMVZB ZWING GTUSL IOOOC AYSTH FJGLS FDTSS
PAEAT TFVWV VWRGS MWVVL OAOMP SFGWN MGEIL AONYV QMKDA NHDGG
CFOWB TQCLL HIT?L JMQKH OVDFQ LKBUS AGLGM TTIWT MKGME XZGZW
PWHPC PWOKT HFWZI ULLOD SVQGF ?ONMQ YELZI SXSUP AKLAT LOMKK
AGFPV PAJTY FAIPL VEGSW GAXAF TZKGD WFMIO MVMKK IXQGK VLXIV
FKGKG FDSOG TBZKE VFFVG KWVEO VGOJW ESFAI PLEIN VLGAX GRTZX
QCJKE CPFFA OWSTJ VDGJG WS
</pre>
<b>Points to consider:</b>
<ol>
<li> Without solving it, do you think this is a transposition cipher of English text? (hint: investigate letter frequencies)
<li> Without solving it, do you think this is a substitution cipher of English text? (hint: investigate the IC of the full text)
<li> In case it is a Vigenere cipher (spoiler: it is):
<ol>
<li> what would be the most likely key length? (hint: investigate the IC for different key lengths)
<li> which positions would be shifted by the same key/amount?
<li> what can you say about the plaintext language? (hint: investigate IC of subsequences)
<li> solve the key by hand using frequency distributions of the subsequences.
<li> solve the key automatically using the Chi-squared criterion.
</ol>
</ol>
</div>
Vigenere Solution #2: Maximum Likelihood, Combined with a Search Procedure
For solving Vigenere, we could also use Maximum Likelihood to evaluate how much like plaintext a decrypted ciphertext is. Combined with a way to search through possible keys, we can search for the best key. A further advantage of maximum likelihood is that we can easily consider longer subsequences, rather than just looking at the message as a bag of unordered letters, each with its own frequency. By considering statistics of longer subsequences, we can better recognize when some text is written in a particular language.
This approach is adapted from:
- http://practicalcryptography.com/cryptanalysis/stochastic-searching/cryptanalysis-vigenere-cipher-part-2/
Note that this is just one possible way to search through the possible keys. It does not use any information like IC for determining the likely key length N, or Chi-square for determining the likely values for the shifts of the N Caesar ciphers. Instead, for a range of key lengths, it exhaustively searches the possibilities for the first 3 characters of the key and keeps track of the best candidates. Then, it extends these subkeys one character at a time, while keeping track of the best candidates.
An alternative way would be to determine the key length and likely key using the IC and Chi-square criteria, and then gradually improve the key using a stochastic search algorithm like simulated annealing.
During the search, we need a way to keep track of the best results so far. This can be easily achieved using the $\texttt{nbest}$ class:
End of explanation
"""
from itertools import permutations
from ngram_score import ngram_score
import re
import pprint as pp
qgram = ngram_score('ngrams/en_sherlock_4grams') # load our 4gram statistics
trigram = ngram_score('ngrams/en_sherlock_3grams') # load our 3gram statistics
# keep track of the 100 best keys
N=100
# test keys up to a length 15
for KLEN in range(3,16):
print "="*80
rec = nbest(N)
# exhaustively test all possible letters for first 3 entries of the key and keep track of the N best ones
# if KLEN=7, this will test e.g. FOOAAAA and BARAAAA
for i in permutations('ABCDEFGHIJKLMNOPQRSTUVWXYZ',3):
i = "".join(i)
key = ''.join(i) + 'A'*(KLEN-len(i))
decrypted_ctext = vigenere(ctext, key, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += trigram.score(decrypted_ctext[j:j+3])
rec.add((score,''.join(i), decrypted_ctext))
next_rec = nbest(N)
# for the remaining KLEN-3 characters of the key,
for i in range(0,KLEN-3):
# go over the N best keys found so far...
for k in xrange(N):
# ...and determine the best next character of the key, while keeping best N keys so far
for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
key = rec[k][1] + c
fullkey = key + 'A'*(KLEN-len(key))
decrypted_ctext = vigenere(ctext, fullkey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += qgram.score(decrypted_ctext[j:j+len(key)])
next_rec.add((score,key, decrypted_ctext))
rec = next_rec
next_rec = nbest(N)
# show the results
bestscore = rec[0][0]
bestkey = rec[0][1]
#decrypted_ctext = rec[0][2]
# always show entire decrypted ctext, even if the above analysis is done only on part of the ctext, e.g. ctext[0:100]
decrypted_ctext = vigenere(ctext, bestkey, direction = -1)
print bestscore, 'klen', KLEN, ':"'+bestkey+'",', decrypted_ctext
# uncomment the following lines to see top-10 results
#pp.pprint(rec.store[0:10])
#print '\n'
"""
Explanation: The search for the best key, as described above, is performed using the code below. Rather than solving and evaluating each shift individually, the criterion used to evaluate the quality of a keyword is the likelihood of the 4-grams in the decrypted ciphertext being English.
End of explanation
"""
ctext = "VYCHVUYPESJMZCJZTXNOOEFSXMBQJTTNQAXWKBWTPDDKTUODCOPGJFNTMOPRLACBZGTWEAEEEOKWAKCXCHVOATLGFMVYZRWBNGHPQUCDRKSGXSGWZRZWMURLSTLCHLZWBAECCIMEESTLLPWVNCCMYLELPKAAPTCZKYCTZQOIKGICRMEQTSPWMGHKFTYOHRDCUBAQEQWTVRBPCFKGPWGGFEQTZFSDWDVCCLOYVPBFWGPVFPLYRTMCWVSDCCIHSBLGCLPWTVKPBNVNRLAVWVPQTOEACSYJIUVVPBXSFARC"
from ngram_score import ngram_score
import re
import pprint as pp
qgram = ngram_score('ngrams/en_sherlock_4grams') # load our 4gram statistics
trigram = ngram_score('ngrams/en_sherlock_3grams') # load our 3gram statistics
# keep track of the 100 best keys
N=100
rec = nbest(N)
f = open('wordlists/websters-dictionary','r')
i = 1
for key in f:
key = re.sub(r'[^A-Z]','',key.upper())
decrypted_ctext = vigenere(ctext, key, direction = -1)
score = qgram.score(decrypted_ctext)
rec.add((score,key, decrypted_ctext))
i += 1
if i % 10000 == 0:
bestscore = rec[0][0]
bestkey = rec[0][1]
decrypted_ctext = vigenere(ctext, bestkey, direction = -1)
print "%20s\t%5.2f,\t%s,\t%s,\t%s" % (key, bestscore, len(bestkey), bestkey, decrypted_ctext)
"""
Explanation: Here we can see that, using this different criterion and search algorithm, the correct key and plaintext have been found successfully.
Vigenere Solution #3: Maximum Likelihood, Combined with Brute Force
Instead of trying to determine the key character by character, we can also brute-force it over a wordlist and keep track of the best key as before.
The wordlist used here was downloaded from:
https://packetstormsecurity.com/Crackers/wordlists/dictionaries/
End of explanation
"""
import re
def compute_keyed_alphabet(keyword, alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    # remove duplicate keyword letters, keeping the first occurrence
keyword_unique = ""
for c in keyword:
if c not in keyword_unique:
keyword_unique += c
# compute cipher alphabet
keyed_alphabet = keyword_unique + re.sub("[%s]" % keyword, "", alphabet)
return keyed_alphabet
def keyed_caesar(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="A", direction=1):
# compute cipher alphabet
keyed_alphabet = compute_keyed_alphabet(alpha_key, alphabet)
# set source and destination alphabet depending on direction
if direction == 1:
src_alphabet = alphabet
dst_alphabet = keyed_alphabet
else:
src_alphabet = keyed_alphabet
dst_alphabet = alphabet
# encrypt / decrypt
keyval = alphabet.find(key)
t = ""
for sc in s:
i = src_alphabet.find(sc)
t += dst_alphabet[(i + keyval * direction) % len(dst_alphabet)] if i > -1 else sc
return t
"""
Explanation: Again, the key was found successfully. Interestingly, before finding CRYPTANALYSIS, very similar words of the same length were identified as candidate keys:
- ACROPARALYSIS
- CHROMATOLYSIS
- CRYPTANALYSIS
Keyed Caesar Variants
The Caesar Cipher is a substitution cipher, which substitutes each letter by the letter at the same index in a shifted version of the alphabet, e.g. in the case of rot13, a Caesar cipher with key 'N':
plain alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ
cipher alphabet NOPQRSTUVWXYZABCDEFGHIJKLM (and other shifted versions of it, shown alphabet corresponds to key 'N')
A variant of the Caesar cipher is the Keyed Caesar Cipher, which uses an additional alphabet keyword to create a keyed cipher alphabet, i.e. in case the alphabet keyword is KRYPTOS:
plain alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ
cipher alphabet KRYPTOSABCDEFGHIJLMNQUVWXZ (and other shifted versions of it, shown alphabet corresponds to key 'A')
RYPTOSABCDEFGHIJLMNQUVWXZK (and other shifted versions of it, shown alphabet corresponds to key 'B')
Finally, we can also use a keyed alphabet for both the plain alphabet and the cipher alphabet, to obtain what we will call a Caesar Cipher with a keyed alphabet here (note the difference with the Keyed Caesar):
plain alphabet KRYPTOSABCDEFGHIJLMNQUVWXZ
cipher alphabet KRYPTOSABCDEFGHIJLMNQUVWXZ (and other shifted versions of it, shown alphabet corresponds to key 'K')
RYPTOSABCDEFGHIJLMNQUVWXZK (and other shifted versions of it, shown alphabet corresponds to key 'R')
First, let's define a function for computing a keyed alphabet. It takes an alphabet and a keyword consisting of symbols from that alphabet. It removes all but the first occurrence of each character in the keyword (i.e. CRYPTANALYSIS becomes CRYPTANLSI), and then appends the remaining characters of the alphabet, in alphabetical order, to obtain the keyed alphabet (i.e. CRYPTANLSIBDEFGHJKMOQUVWXZ).
End of explanation
"""
ptext = "THESCULPTUREHASBEENBOTHAPUZZLEANDAMYSTERYFORTHOSEWHOHOPETOCRACKTHECYPHEREDMESSAGESCONTAINEDWITHINTHESCULPTURESTWOTHOUSANDALPHABETICLETTERSINTHETWENTYYEARSSINCEKRYPTOSWASERECTEDTHREEOFTHEFOURSECTIONSHAVEBEENCONFIRMEDTOHAVEBEENSOLVEDNOONEHASYETBEENABLETOSOLVETHEREMAININGNINETYSEVENCHARACTERMESSAGE"
"""
Explanation: Cryptanalysis - Keyed Caesar
Effect of Keyed Caesar on Frequency Distributions
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,9]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
keyed_alphabet = compute_keyed_alphabet("KRYPTOS", alphabet) # keyed alphabet, used to order the ciphertext frequencies in the second subplot
def rotate(l, n):
return l[n:] + l[:n]
@interact(key_i = (0,25,1))
def plot_graphs(key_i = 0):
# encrypt ptext using this key
key = alphabet[key_i]
ctext = keyed_caesar(ptext, key, alpha_key="KRYPTOS", direction=1)
# determine letter frequency in plaintext, and plot
ptext_freq = np.array([ptext.count(c)/len(ptext) for c in alphabet])
pfig = plt.subplot(3,1,1)
plt.bar(range(len(alphabet)), ptext_freq, tick_label = list(alphabet), align = 'center', color = 'b')
plt.title('blue = plaintext letter frequency, red = ciphertext letter frequency')
# determine letter frequency in ciphertext, and plot, ordered by keyed alphabet
ctext_freq = np.array([ctext.count(c)/len(ctext) for c in keyed_alphabet])
cfig = plt.subplot(3,1,2)
plt.bar(range(len(keyed_alphabet)), ctext_freq, tick_label = rotate(list(keyed_alphabet), key_i), align = 'center', color = 'r')
plt.xlabel("key = %s" % key)
# determine letter frequency in ciphertext, and plot, ordered by alphabet
ctext_freq = np.array([ctext.count(c)/len(ctext) for c in alphabet])
cfig = plt.subplot(3,1,3)
plt.bar(range(len(alphabet)), ctext_freq, tick_label = rotate(list(alphabet), key_i), align = 'center', color = 'r')
plt.xlabel("key = %s" % key)
plt.show()
print ctext
"""
Explanation: Now, as we did before with the normal Caesar cipher, let's look at the effect of the cipher on the frequency distribution:
End of explanation
"""
ctext = keyed_caesar(ptext, "N", alpha_key="KRYPTOS", direction=1)
ctext
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,6]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def rotate(l, n):
return l[n:] + l[:n]
@interact(key_i = (0,25,1))
def plot_graphs(key_i = 0):
# decrypt ciphertext using this key
key = alphabet[key_i]
# NOTE THAT WE DO NOT KNOW THE KEY TO THE ALPHABET, SO WE CANNOT USE THAT!!!
decrypted_ctext = keyed_caesar(ctext, key, alpha_key = "A", direction=-1)
# determine letter frequency in plaintext, and plot # FIXME: base on separate text
ptext_freq = np.array([ptext.count(c)/len(ptext) for c in alphabet])
pfig = plt.subplot(2,1,1)
plt.bar(range(len(alphabet)), ptext_freq, tick_label = list(alphabet), align = 'center', color = 'b')
plt.title('blue = plaintext letter frequency, green = decrypted ciphertext letter frequency')
# determine letter frequency in ciphertext, and plot
ctext_freq = np.array([decrypted_ctext.count(c)/len(decrypted_ctext) for c in alphabet])
cfig = plt.subplot(2,1,2)
plt.bar(range(len(alphabet)), ctext_freq, tick_label = rotate(list(alphabet), key_i), align = 'center', color = 'g')
plt.xlabel("key = %s" % key)
plt.show()
print decrypted_ctext
"""
Explanation: We can see that due to the keyed alphabet, the effect of Keyed Caesar is not just a shift of the frequency distributions (unless we look at the frequencies in order of the keyed alphabet).
Determine Key by Aligning Frequency Distributions?
Now, we'll start with a ciphertext resulting from Keyed Caesar encryption, and look at decrypting it by aligning frequency distributions. Note that when just given the ciphertext, we have no idea what the key to create the keyed alphabet is.
End of explanation
"""
ctext = keyed_caesar(ptext, "N", alpha_key="KRYPTOS", direction=1)
ctext
def substitution(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", direction=1):
if direction == 1:
src_alphabet = alphabet
dst_alphabet = key
else:
src_alphabet = key
dst_alphabet = alphabet
t = ""
for c in s:
if c in src_alphabet:
t += dst_alphabet[src_alphabet.find(c)]
else:
t += c
return t
import random
import re
from ngram_score import ngram_score
fitness = ngram_score('ngrams/en_sherlock_4grams') # load our quadgram statistics
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
maxkey = list(alphabet)
maxscore = -99e9
parentscore,parentkey = maxscore,maxkey[:]
print "Substitution Cipher solver, you may have to wait several iterations for the correct result. Halt execution by halting the kernel, using the stop button in the toolbar."
# run up to MAX_ITER random restarts (increase MAX_ITER to keep searching longer)
i = 0
rec = nbest(1000)
MAX_ITER = 10
while i < MAX_ITER:
i = i+1
random.shuffle(parentkey)
# KEYED CAESAR WITH KEY "A" IS JUST A SUBSTITUTION CIPHER
deciphered = substitution(ctext, "".join(parentkey), alphabet, direction=-1)
parentscore = fitness.score(deciphered)
count = 0
while count < 1000:
a = random.randint(0,len(alphabet)-1)
b = random.randint(0,len(alphabet)-1)
child = parentkey[:]
# swap two characters in the child
child[a],child[b] = child[b],child[a]
deciphered = substitution(ctext, "".join(child), alphabet, direction=-1)
score = fitness.score(deciphered)
# if the child was better, replace the parent with it
if score > parentscore:
parentscore = score
parentkey = child[:]
count = 0
count = count+1
rec.add((score, child, deciphered))
# keep track of best score printed so far, and print if improved
if parentscore > maxscore:
maxscore = rec[0][0]
maxkey = rec[0][1]
#deciphered = rec[0][2]
deciphered = substitution(ctext, "".join(maxkey), alphabet, direction=-1)
print '\nbest score so far:',maxscore,'on iteration',i
print ' best key: '+ "".join(maxkey)
print ' plaintext: '+ deciphered
"""
Explanation: For easy reference, this is the substitution when keying the alphabet with 'KRYPTOS' and using key 'N':
plain alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ
cipher alphabet KRYPTOSABCDEFGHIJLMNQUVWXZ (and other shifted versions of it, shown alphabet corresponds to key 'A')
GHIJLMNQUVWXZKRYPTOSABCDEF (and other shifted versions of it, shown alphabet corresponds to key 'N')
From this plot, it can be seen that when we do not know what the keyword for creating the keyed alphabet was, the cipher cannot be broken by simply aligning the frequency distributions, and a more general approach for breaking the substitution is needed.
Keyed Caesar Solution #1: Maximum Likelihood, Combined with Search Procedure
However, since it is a substitution cipher, we could find the substitution and entire cipher alphabet as follows:
- guessing the individual substitutions based on frequency: assume the most frequent ciphertext character is 'E', etc.
- given this cipher alphabet, gradually try to improve the substitution by swapping characters in the cipher alphabet
- measure the likelihood of plaintext being English as before, and keep track of best keys found so far
More on this approach can be found here.
End of explanation
"""
def caesar_keyed_alphabet(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="", direction=1):
# compute keyed alphabet
keyed_alphabet = compute_keyed_alphabet(alpha_key, alphabet)
return rot(s, key, keyed_alphabet, direction)
ptext = "THESCULPTUREHASBEENBOTHAPUZZLEANDAMYSTERYFORTHOSEWHOHOPETOCRACKTHECYPHEREDMESSAGESCONTAINEDWITHINTHESCULPTURESTWOTHOUSANDALPHABETICLETTERSINTHETWENTYYEARSSINCEKRYPTOSWASERECTEDTHREEOFTHEFOURSECTIONSHAVEBEENCONFIRMEDTOHAVEBEENSOLVEDNOONEHASYETBEENABLETOSOLVETHEREMAININGNINETYSEVENCHARACTERMESSAGE"
alpha_key_ = "KRYPTOS"
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,9]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
keyed_alphabet = compute_keyed_alphabet(alpha_key_, alphabet)
def rotate(l, n):
return l[n:] + l[:n]
@interact(key_i = (0,25,1))
def plot_graphs(key_i = 0):
# encrypt plaintext using this key
key = keyed_alphabet[key_i]
ctext = caesar_keyed_alphabet(ptext, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key=alpha_key_, direction=1)
# determine letter frequency in plaintext, and plot
ptext_freq = np.array([ptext.count(c)/len(ptext) for c in keyed_alphabet])
pfig = plt.subplot(3,1,1)
plt.bar(range(len(keyed_alphabet)), ptext_freq, tick_label = list(keyed_alphabet), align = 'center', color = 'b')
plt.title('blue = plaintext letter frequency, red = ciphertext letter frequency')
# determine letter frequency in ciphertext, and plot, in order keyed alphabet
ctext_freq = np.array([ctext.count(c)/len(ctext) for c in keyed_alphabet])
cfig = plt.subplot(3,1,2)
plt.bar(range(len(keyed_alphabet)), ctext_freq, tick_label = rotate(list(keyed_alphabet), key_i), align = 'center', color = 'r')
plt.xlabel("key = %s" % key)
# determine letter frequency in ciphertext, and plot, in order of alphabet
ctext_freq = np.array([ctext.count(c)/len(ctext) for c in alphabet])
cfig = plt.subplot(3,1,3)
plt.bar(range(len(alphabet)), ctext_freq, tick_label = rotate(list(alphabet), key_i), align = 'center', color = 'r')
plt.xlabel("key = %s" % key)
plt.show()
print ctext
"""
Explanation: Cryptanalysis - Caesar with Keyed Alphabet
Effect of Caesar with Keyed Alphabet on Frequency Distributions
End of explanation
"""
ptext = "THESCULPTUREHASBEENBOTHAPUZZLEANDAMYSTERYFORTHOSEWHOHOPETOCRACKTHECYPHEREDMESSAGESCONTAINEDWITHINTHESCULPTURESTWOTHOUSANDALPHABETICLETTERSINTHETWENTYYEARSSINCEKRYPTOSWASERECTEDTHREEOFTHEFOURSECTIONSHAVEBEENCONFIRMEDTOHAVEBEENSOLVEDNOONEHASYETBEENABLETOSOLVETHEREMAININGNINETYSEVENCHARACTERMESSAGE"
ctext = caesar_keyed_alphabet(ptext, key="N", alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="KRYPTOS", direction=1)
ctext
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,6]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
def rotate(l, n):
return l[n:] + l[:n]
@interact(key_i = (0,25,1))
def plot_graphs(key_i = 0):
# decrypt ciphertext using this key
key = alphabet[key_i]
# NOTE THAT WE DO NOT KNOW THE KEY TO THE ALPHABET, SO WE CANNOT USE THAT!!!
decrypted_ctext = caesar_keyed_alphabet(ctext, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="A", direction=-1)
# determine letter frequency in plaintext, and plot
ptext_freq = np.array([ptext.count(c)/len(ptext) for c in alphabet])
pfig = plt.subplot(2,1,1)
plt.bar(range(len(alphabet)), ptext_freq, tick_label = list(alphabet), align = 'center', color = 'b')
plt.title('blue = plaintext letter frequency, green = decrypted ciphertext letter frequency')
# determine letter frequency in ciphertext, and plot
ctext_freq = np.array([decrypted_ctext.count(c)/len(decrypted_ctext) for c in alphabet])
cfig = plt.subplot(2,1,2)
plt.bar(range(len(alphabet)), ctext_freq, tick_label = rotate(list(alphabet), key_i), align = 'center', color = 'g')
plt.xlabel("key = %s" % key)
plt.show()
print decrypted_ctext
"""
Explanation: If we look at the frequencies in order of the keyed alphabet, then again the effect of Caesar with Keyed Alphabet can be seen as shifting the frequency distributions. However, this is not the case when looking at the frequencies in order of the alphabet.
Determine Key by Aligning Frequency Distributions?
End of explanation
"""
ptext = "THESCULPTUREHASBEENBOTHAPUZZLEANDAMYSTERYFORTHOSEWHOHOPETOCRACKTHECYPHEREDMESSAGESCONTAINEDWITHINTHESCULPTURESTWOTHOUSANDALPHABETICLETTERSINTHETWENTYYEARSSINCEKRYPTOSWASERECTEDTHREEOFTHEFOURSECTIONSHAVEBEENCONFIRMEDTOHAVEBEENSOLVEDNOONEHASYETBEENABLETOSOLVETHEREMAININGNINETYSEVENCHARACTERMESSAGE"
ctext = caesar_keyed_alphabet(ptext, key="N", alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="KRYPTOS", direction=1)
ctext
import random
import re
from ngram_score import ngram_score
fitness = ngram_score('ngrams/en_sherlock_4grams') # load our quadgram statistics
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
maxkey = list(alphabet)
maxscore = -99e9
parentscore,parentkey = maxscore,maxkey[:]
print "Substitution Cipher solver, you may have to wait several iterations for the correct result."
# run up to MAX_ITER random restarts (increase MAX_ITER to keep searching longer)
i = 0
rec = nbest(1000)
MAX_ITER = 10 # increase if needed
while i < MAX_ITER:
i = i+1
random.shuffle(parentkey)
# KEYED CAESAR WITH KEY "A" IS JUST A SUBSTITUTION CIPHER
deciphered = substitution(ctext, "".join(parentkey), alphabet, direction=-1)
parentscore = fitness.score(deciphered)
count = 0
while count < 1000:
a = random.randint(0,len(alphabet)-1)
b = random.randint(0,len(alphabet)-1)
child = parentkey[:]
# swap two characters in the child
child[a],child[b] = child[b],child[a]
deciphered = substitution(ctext, "".join(child), alphabet, direction=-1)
score = fitness.score(deciphered)
# if the child was better, replace the parent with it
if score > parentscore:
parentscore = score
parentkey = child[:]
count = 0
count = count+1
rec.add((score, child, deciphered))
# keep track of best score printed so far, and print if improved
if parentscore > maxscore:
maxscore = rec[0][0]
maxkey = rec[0][1]
#deciphered = rec[0][2]
deciphered = substitution(ctext, "".join(maxkey), alphabet, direction=-1)
print '\nbest score so far:',maxscore,'on iteration',i
print ' best key: '+ "".join(maxkey)
print ' plaintext: '+ deciphered
"""
Explanation: Caesar with Keyed Alphabet Solution #1: Maximum Likelihood, Combined with Search Procedure
End of explanation
"""
def keyed_vigenere(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="", direction=1):
# compute keyed alphabet
keyed_alphabet = compute_keyed_alphabet(alpha_key, alphabet)
t = []
key_i = 0
for i in range(len(s)):
if s[i] in alphabet:
t_i = rot(s[i], key[key_i % len(key)],keyed_alphabet, direction)
t.append(t_i)
key_i += 1
else:
t.append(s[i])
return "".join(t)
"""
Explanation: Keyed Vigenere
Now we will move on to the Keyed Vigenere cipher, which is very similar to the Caesar cipher with a keyed alphabet, except that the consecutive shifts of the plaintext characters are determined by a keyword. If the key has length $N$, then each $N^{th}$ character of the plaintext will be shifted by the same shift, determined by the corresponding character in the key. This relation of Vigenere being N interleaved Caesar ciphers with keyed alphabet is also visible from the code:
End of explanation
"""
ctext = "BYDMXVPQAMCWFJGOTCCNXTKHOGSEIPAJTWGNBYDNBHCJWLBNCBHPNRNXNHSUOXHLTQQKYAUEMBOUUBWGNQVWUFKKBRVIITHVWNOWCSEXWOICJBPANTCNRPTNHSJVVLICIRSEIAPMQEEHIIVRETZGSEIQPLAXLXKOTWPCSOJGOSEEOKETKBDOOBHSJVPMOWHLTSHHPOVGVATKBSEVTFDVHONNQALNUKOXHPCDQKYAUZSPQPSKAGFPOQWBCWUHLPLMSAAZPBIECBXOTFMIKTITTICPOFTVHONKFIZDNCAJIVWDNXUWUHLCSEWTIHTXOECKBGUUMOHSUUCNLEQJETEOMABYDRUURUHLSWSJDAPMPPDNJIOBTCADIHYNBRQOGEFKQDAEBEHXDOFCNGXEFJVSELPZEHTSCLPPJAHEAPLEIZACVODENXXACTANMLAJITWXSORTPOWDDUEMGZITKOSHSNRONPYGCXVWNHVBUSNXDNZHKQTLWNEPKCJEUYLRXECNRATZEXBNHOCPQWUKYGNBYDTOTPUYAQALSKCNRAIRTITNBTPPLLCGTCEBTVOTWHGTXMIEHKQPNRTZKHYPFAMOPUJCDUOWCSEZCGQEMGVAJTBUHLSLITJVVLICIRSEIITSQFSNCDSSLGLRXKONNPRKAJVVAEABLGSJCJRLOWFOGQDHXTGEYTOIDLVHPITAND"
"""
Explanation: Cryptanalysis - Keyed Vigenere
First, let's load some Keyed Vigenere ciphertext, with unknown key and unknown alphabet key.
End of explanation
"""
IC(ctext)
"""
Explanation: Determine if Plaintext
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
mean_ic_n = []
for n in range(1,32):
mean_ic = mean_IC(ctext.upper(), n)
mean_ic_n.append(mean_ic)
print "%2d %02f" % (n, mean_ic)
plt.rcParams["figure.figsize"] = [12,4]
plt.bar(range(1,len(mean_ic_n)+1), mean_ic_n, align = 'center')
plt.xlabel("key length")
plt.ylabel("average IC")
"""
Explanation: The IC tells us that it is likely not a transposition of English plaintext.
Determine Key Length
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,16]
def rotate(l, n):
return l[n:] + l[:n]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
key_length = 9
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
pfig = plt.subplot(key_length+1,1,1)
plt.bar(range(len(alphabet)), g_english, tick_label = rotate(list(alphabet), 0), align = 'center', color = 'b')
ctext_freq = []
for i in range(key_length):
ctext_i = ctext[i::key_length]
ctext_freq.append(np.array([ctext_i.count(c)/len(ctext_i) for c in alphabet]))
cfig = plt.subplot(key_length+1,1,i+2)
plt.bar(range(len(alphabet)), ctext_freq[i], tick_label = rotate(list(alphabet), 0), align = 'center', color = 'r')
"""
Explanation: The average IC for different key lengths tells us that the text was likely encrypted with a key of length 9. We therefore have to crack 9 Caesar ciphers with a keyed alphabet, each with its own shift as defined by the Vigenere key.
Frequency analysis of subsequences
Now, let's look at the frequency distributions for the 9 subsequences:
End of explanation
"""
from ngram_score import ngram_score
import re
import pprint as pp
qgram = ngram_score('ngrams/en_sherlock_4grams') # load our 4gram statistics
trigram = ngram_score('ngrams/en_sherlock_3grams') # load our 3gram statistics
# keep track of the 100 best keys
N=100
rec = nbest(N)
f = open('wordlists/websters-dictionary','r')
L = 42 # for speed, only consider first L characters in evaluation
i = 1
for alphakey_ in f:
alphakey_ = re.sub(r'[^A-Z]','',alphakey_.upper())
decrypted_ctext = keyed_vigenere(ctext[:L], key = "LODESTONE", alpha_key = alphakey_, direction = -1)
score = qgram.score(decrypted_ctext)
rec.add((score,alphakey_, decrypted_ctext))
i += 1
if i % 10000 == 0:
bestscore = rec[0][0]
bestalphakey_ = rec[0][1]
decrypted_ctext = keyed_vigenere(ctext, key = "LODESTONE", alpha_key = bestalphakey_, direction = -1)
print "%20s\t%5.2f,\t%s,\t%s,\t%s" % (alphakey_, bestscore, len(bestalphakey_), bestalphakey_, decrypted_ctext)
"""
Explanation: Each distribution looks quite different as a result of using Caesar ciphers with a keyed alphabet. Each of the 9 Caesar ciphers with keyed alphabet uses the same unknown keyed alphabet, and its own shift, as defined by the 9-letter Vigenere keyword. Therefore, we need to determine both the keyed alphabet and the Vigenere keyword.
<div class="alert-warning">
<h3>CHALLENGE: Determine Vigenere Key in Special Case of Keyed Alphabet Starting with Letter E</h3>
<br/>
As luck would have it, the keyword used to create the keyed alphabet starts with the letter E. As a result, it becomes fairly easy to determine the Vigenere key. Can you guess the Vigenere key from the frequency distribution plots above?
</div>
Case 1: Vigenere Key is Known, Alphabet Key Unknown
When the Vigenere key is known, the alphabet key can often be determined as before, e.g. through brute force:
End of explanation
"""
ctext = "BYDMXVPQAMCWFJGOTCCNXTKHOGSEIPAJTWGNBYDNBHCJWLBNCBHPNRNXNHSUOXHLTQQKYAUEMBOUUBWGNQVWUFKKBRVIITHVWNOWCSEXWOICJBPANTCNRPTNHSJVVLICIRSEIAPMQEEHIIVRETZGSEIQPLAXLXKOTWPCSOJGOSEEOKETKBDOOBHSJVPMOWHLTSHHPOVGVATKBSEVTFDVHONNQALNUKOXHPCDQKYAUZSPQPSKAGFPOQWBCWUHLPLMSAAZPBIECBXOTFMIKTITTICPOFTVHONKFIZDNCAJIVWDNXUWUHLCSEWTIHTXOECKBGUUMOHSUUCNLEQJETEOMABYDRUURUHLSWSJDAPMPPDNJIOBTCADIHYNBRQOGEFKQDAEBEHXDOFCNGXEFJVSELPZEHTSCLPPJAHEAPLEIZACVODENXXACTANMLAJITWXSORTPOWDDUEMGZITKOSHSNRONPYGCXVWNHVBUSNXDNZHKQTLWNEPKCJEUYLRXECNRATZEXBNHOCPQWUKYGNBYDTOTPUYAQALSKCNRAIRTITNBTPPLLCGTCEBTVOTWHGTXMIEHKQPNRTZKHYPFAMOPUJCDUOWCSEZCGQEMGVAJTBUHLSLITJVVLICIRSEIITSQFSNCDSSLGLRXKONNPRKAJVVAEABLGSJCJRLOWFOGQDHXTGEYTOIDLVHPITAND"
from itertools import permutations
from ngram_score import ngram_score
import re
import pprint as pp
qgram = ngram_score('ngrams/en_sherlock_4grams') # load our 4gram statistics
trigram = ngram_score('ngrams/en_sherlock_3grams') # load our 3gram statistics
# keep track of the 100 best keys
N=100
L=42 # only process L characters for speed
alphakey = "ENIGMA"
for KLEN in [9]: #range(3,10):
print "="*80
rec = nbest(N)
# exhaustively test all possible letters for first 3 entries of the key and keep track of the N best ones
# if KLEN=7, this will test e.g. FOOAAAA and BARAAAA
for i in permutations('ABCDEFGHIJKLMNOPQRSTUVWXYZ',3):
i = "".join(i)
key = ''.join(i) + 'A'*(KLEN-len(i))
decrypted_ctext = keyed_vigenere(ctext[:L], key, alpha_key = alphakey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += trigram.score(decrypted_ctext[j:j+3])
rec.add((score,''.join(i), decrypted_ctext))
next_rec = nbest(N)
# for the remaining KLEN-3 characters of the key,
for i in range(0,KLEN-3):
# go over the N best keys found so far...
for k in xrange(N):
# ...and determine the best next character of the key, while keeping best N keys so far
for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
key = rec[k][1] + c
fullkey = key + 'A'*(KLEN-len(key))
decrypted_ctext = keyed_vigenere(ctext[:L], fullkey, alpha_key = alphakey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += qgram.score(decrypted_ctext[j:j+len(key)])
next_rec.add((score, key, decrypted_ctext))
rec = next_rec
next_rec = nbest(N)
# show the results
bestscore = rec[0][0]
bestkey = rec[0][1]
#decrypted_ctext = rec[0][2]
# always show entire decrypted ctext, even if the above analysis is done only on part of the ctext, e.g. ctext[0:100]
decrypted_ctext = keyed_vigenere(ctext, bestkey, alpha_key = alphakey, direction = -1)
print bestscore, 'klen', KLEN, ':"'+bestkey+'",', decrypted_ctext
# uncomment the following lines to see top-10 results
#pp.pprint(rec.store[0:10])
#print '\n'
"""
Explanation: <div class="alert-warning">
<h3>CHALLENGE: A Special Case</h3>
<br/>
Break the following keyed vigenere ciphertext:
<pre>
KWRGHGPEOKDBXGAWVMJDFVCVDHJZAMQGRNFHKWRLAVKWKWVFSCHLIVGRZCIHDUAWHPUFOKDNTLLEFCWHVBKBJOZRHVWCYVVGWAWADSDKNWGMBLUUPNKAHOHFZAJICCLGYKIMGZFKFODLIYCVHKHBIMGEFWEZHKZWVLMGAYDSDORZIYDJZMKBPLEMDGPSWAOJKAHPABWEXUGFIORNHHWSHWIHYOOLFFDUAODNUFOKDIQJTHIFTHIOIPNCWNDSDJOGIXTIFVAOSCDWVQTCDVRNQJUOIHKSHWIXNCHNPBTWENMNPKVNDSDGAWXNGRZCIOSFXHDDTAEMFHKAXZFRDJUWQJKWRKFHYGAWXBIGFKAYMJRLDQDKZEENMLOZHVYASWEFMPOZVOZKFWMMGEFWEOSORWGVDLRGJCMJNUVMTCXZAVQEEWKNGRFUYNTADWERMJNAEBOKUYXZJGRKYVMJZWESQDIYPLZHUCKBPLEMDOGRRLKVBEZWMFDTZVRNSWOKHMKAHMHVDKXZPBJJTPFFZHVVRNKURLDTIHMZIFKAHMAQKWRZHFMJOZYSQMRVHCGJNPFFZVYWVFMCVGHVVLOLMJTAUEDBJGWADSDPWHYNTEXUDNIGAWXJMJJICCLGYKIMGJZUFHIRWEEODEOKHFDAVOQYQGEIXNILOBIOKWHWIBXUAOKSZKSWJNDJTWKGLWRKIP
</pre>
<br/>
</div>
Case 2: Vigenere Key is Unknown, Alphabet Key is Known
Now, let's take the same ciphertext and suppose we only know the alphabet key.
End of explanation
"""
kryptos_L1 = """
EMUFPHZLRFAXYUSDJKZLDKRNSHGNFIVJ
YQTQUXQBQVYUVLLTREVJYQTMKYRDMFD
VFPJUDEEHZWETZYVGWHKKQETGFQJNCE
GGWHKK?DQMCPFQZDQMMIAGPFXHQRLG
TIMVMZJANQLVKQEDAGDVFRPJUNGEUNA
QZGZLECGYUXUEENJTBJLBQCRTBJDFHRR
YIZETKZEMVDUFKSJHKFWHKUWQLSZFTI
HHDDDUVH?DWKBFUFPWNTDFIYCUQZERE
EVLDKFEZMOQQJLTTUGSYQPFEUNLAVIDX
FLGGTEZ?FKZBSFDQVGOGIPUFXHHDRKF
FHQNTGPUAECNUVPDJMQCLQUMUNEDFQ
ELZZVRRGKFFVOEEXBDMVPNFQXEZLGRE
DNQFMPNZGLFLPMRJQYALMGNUVPDXVKP
DQUMEBEDMHDAFMJGZNUPLGEWJLLAETG"""
kryptos_L2 = """
ENDYAHROHNLSRHEOCPTEOIBIDYSHNAIA
CHTNREYULDSLLSLLNOHSNOSMRWXMNE
TPRNGATIHNRARPESLNNELEBLPIIACAE
WMTWNDITEENRAHCTENEUDRETNHAEOE
TFOLSEDTIWENHAEIOYTEYQHEENCTAYCR
EIFTBRSPAMHHEWENATAMATEGYEERLB
TEEFOASFIOTUETUAEOTOARMAEERTNRTI
BSEDDNIAAHTTMSTEWPIEROAGRIEWFEB
AECTDDHILCEIHSITEGOEAOSDDRYDLORIT
RKLMLEHAGTDHARDPNEOHMGFMFEUHE
ECDMRIPFEIMEHNLSSTTRTVDOHW?OBKR
UOXOGHULBSOLIFBBWFLRVQQPRNGKSSO
TWTQSJQSSEKZZWATJKLUDIAWINFBNYP
VTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR"""
ctext_kryptos = (kryptos_L1+kryptos_L2).replace(" ", "").strip().split("\n")
"""
Explanation: Kryptos
End of explanation
"""
def rot(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", direction=1):
keyval = alphabet.find(key)
t = ""
for sc in s:
i = alphabet.find(sc)
t += alphabet[(i + keyval * direction) % len(alphabet)] if i > -1 else sc
return t
import re
def compute_keyed_alphabet(keyword, alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    # remove duplicate keyword letters, keeping the first occurrence
keyword_unique = ""
for c in keyword:
if c not in keyword_unique:
keyword_unique += c
# compute cipher alphabet
keyed_alphabet = keyword_unique + re.sub("[%s]" % keyword, "", alphabet)
return keyed_alphabet
def keyed_vigenere(s, key, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ", alpha_key="", direction=1):
# compute keyed alphabet
keyed_alphabet = compute_keyed_alphabet(alpha_key, alphabet)
t = []
key_i = 0
for i in range(len(s)):
if s[i] in alphabet:
t_i = rot(s[i], key[key_i % len(key)],keyed_alphabet, direction)
t.append(t_i)
key_i += 1
else:
t.append(s[i])
return "".join(t)
from __future__ import division
import numpy as np
# computes the IC for a given string
def IC(s, alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
s_filtered = filter(s, alphabet) # we only care about the letters in our alphabet
ic = 0
for c in alphabet:
c_count = s_filtered.count(c)
N = len(s_filtered)
if c_count > 1:
ic += (c_count * (c_count - 1)) / (N * (N - 1) / len(alphabet))
return ic
# helper function to filter out non-alphabet characters
def filter(s, alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
return "".join([c for c in s if c in alphabet])
# computes the avg IC of subsequences of each n'th character
def mean_IC(s, n, alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
s_filtered = filter(s, alphabet) # we only care about the letters in our alphabet
s_filtered_subseq = [s_filtered[i::n] for i in range(n)]
ic = []
for i in range(n):
ic.append(IC(s_filtered_subseq[i]))
return np.mean(ic)
# keep a list of the N best things we have seen, discard anything else
# this class can also be imported using 'import nbest' from a separate file nbest.py
class nbest(object):
def __init__(self,N=1000):
self.store = []
self.N = N
def add(self,item):
self.store.append(item)
self.store.sort(reverse=True)
self.store = self.store[:self.N]
def __getitem__(self,k):
return self.store[k]
def __len__(self):
return len(self.store)
"""
Explanation: Prerequisites
These are the functions used for deciphering Kryptos K1 and K2 (copied from above for easy initialization of the notebook).
End of explanation
"""
ctext = "".join(ctext_kryptos[:2])
ctext
"""
Explanation: Kryptos - K1
End of explanation
"""
IC(ctext)
"""
Explanation: K1 - Determine if plaintext
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
mean_ic_n = []
for n in range(1,32):
mean_ic = mean_IC(ctext.upper(), n)
mean_ic_n.append(mean_ic)
print "%2d %02f" % (n, mean_ic)
plt.rcParams["figure.figsize"] = [12,4]
plt.bar(range(1,len(mean_ic_n)+1), mean_ic_n, align = 'center')
plt.xlabel("key length")
plt.ylabel("average IC")
"""
Explanation: Based on the IC, it looks like this is neither plaintext nor a transposition of plaintext (a transposition would leave the IC unchanged).
K1 - Determine Key Length
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,16]
def rotate(l, n):
return l[n:] + l[:n]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
key_length = 10
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
pfig = plt.subplot(key_length+1,1,1)
plt.bar(range(len(alphabet)), g_english, tick_label = rotate(list(alphabet), 0), align = 'center', color = 'b')
ctext_freq = []
for i in range(key_length):
ctext_i = ctext[i::key_length]
ctext_freq.append(np.array([ctext_i.count(c)/len(ctext_i) for c in alphabet]))
cfig = plt.subplot(key_length+1,1,i+2)
plt.bar(range(len(alphabet)), ctext_freq[i], tick_label = rotate(list(alphabet), 0), align = 'center', color = 'r')
"""
Explanation: From the mean IC and the peaks at key lengths 10 and 20, it looks like the key length might be 10.
K1 - Frequency Analysis of Subsequences #1
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,16]
def rotate(l, n):
return l[n:] + l[:n]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
keyed_alphabet = compute_keyed_alphabet("KRYPTOS")
key_length = 10
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
g_english_keyed = [g_english[alphabet.find(c)] for c in keyed_alphabet]
pfig = plt.subplot(key_length+1,1,1)
plt.bar(range(len(alphabet)), g_english_keyed, tick_label = rotate(list(keyed_alphabet), 0), align = 'center', color = 'b')
ctext_freq = []
for i in range(key_length):
ctext_i = ctext[i::key_length]
ctext_freq.append(np.array([ctext_i.count(c)/len(ctext_i) for c in keyed_alphabet]))
cfig = plt.subplot(key_length+1,1,i+2)
plt.bar(range(len(keyed_alphabet)), ctext_freq[i], tick_label = rotate(list(keyed_alphabet), 0), align = 'center', color = 'r')
"""
Explanation: K1 - Frequency Analysis of Subsequences #2
Now, let's try that again, but use the cipher alphabet order.
End of explanation
"""
from itertools import permutations
from ngram_score import ngram_score
import re
import pprint as pp
qgram = ngram_score('ngrams/en_sherlock_4grams') # load our 4gram statistics
trigram = ngram_score('ngrams/en_sherlock_3grams') # load our 3gram statistics
# keep track of the 100 best keys
N=100
L=42 # only process L characters for speed
alphakey = "KRYPTOS"
for KLEN in range(3,16):
print "="*80
rec = nbest(N)
# exhaustively test all possible letters for first 3 entries of the key and keep track of the N best ones
# if KLEN=7, this will test e.g. FOOAAAA and BARAAAA
for i in permutations('ABCDEFGHIJKLMNOPQRSTUVWXYZ',3):
i = "".join(i)
key = ''.join(i) + 'A'*(KLEN-len(i))
decrypted_ctext = keyed_vigenere(ctext[:L], key, alpha_key = alphakey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += trigram.score(decrypted_ctext[j:j+3])
rec.add((score,''.join(i), decrypted_ctext))
next_rec = nbest(N)
# for the remaining KLEN-3 characters of the key,
for i in range(0,KLEN-3):
# go over the N best keys found so far...
for k in xrange(N):
# ...and determine the best next character of the key, while keeping best N keys so far
for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
key = rec[k][1] + c
fullkey = key + 'A'*(KLEN-len(key))
decrypted_ctext = keyed_vigenere(ctext[:L], fullkey, alpha_key = alphakey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += qgram.score(decrypted_ctext[j:j+len(key)])
next_rec.add((score, key, decrypted_ctext))
rec = next_rec
next_rec = nbest(N)
# show the results
bestscore = rec[0][0]
bestkey = rec[0][1]
#decrypted_ctext = rec[0][2]
# always show entire decrypted ctext, even if the above analysis is done only on part of the ctext, e.g. ctext[0:100]
decrypted_ctext = keyed_vigenere(ctext, bestkey, alpha_key = alphakey, direction = -1)
print bestscore, 'klen', KLEN, ':"'+bestkey+'",', decrypted_ctext
# uncomment the following lines to see top-10 results
#pp.pprint(rec.store[0:10])
#print '\n'
"""
Explanation: K1 - Vigenere Key is Unknown, Alphabet Key is Known
End of explanation
"""
ctext = "".join(ctext_kryptos[:3])
key = "PALIMPSEST"
alphakey = "KRYPTOS"
print ctext
print keyed_vigenere(ctext, key, alpha_key = alphakey, direction = -1)
"""
Explanation: K1 - Determining Message Boundaries
End of explanation
"""
ctext_k1 = "".join(ctext_kryptos[:2])
key = "PALIMPSEST"
alphakey = "KRYPTOS"
ptext_k1 = keyed_vigenere(ctext_k1, key, alpha_key = alphakey, direction = -1)
print ctext_k1
print ptext_k1
"""
Explanation: From this, it can be seen that only the first 2 lines make up K1.
End of explanation
"""
from word_score import word_score
fitness = word_score()
print fitness.score(ptext_k1)
"""
Explanation: K1 - Determining Word Boundaries
http://practicalcryptography.com/cryptanalysis/text-characterisation/word-statistics-fitness-measure/
End of explanation
"""
ctext = "".join(ctext_kryptos[2:14])
ctext
"""
Explanation: BETWEEN SUBTLE SHADING AND THE ABSENCE OF LIGHT LIES THE NUANCE OF IQLUSION
Kryptos - K2
Let's start with the rest of the lines of the first panel for K2:
End of explanation
"""
IC(ctext)
"""
Explanation: K2 - Determine if Plaintext
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
mean_ic_n = []
for n in range(1,32):
mean_ic = mean_IC(ctext.upper(), n)
mean_ic_n.append(mean_ic)
print "%2d %02f" % (n, mean_ic)
plt.rcParams["figure.figsize"] = [12,4]
plt.bar(range(1,len(mean_ic_n)+1), mean_ic_n, align = 'center')
plt.xlabel("key length")
plt.ylabel("average IC")
"""
Explanation: Based on the IC value, it looks like we are dealing with neither plaintext nor a transposition of plaintext.
K2 - Determine Key Length
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,16]
def rotate(l, n):
return l[n:] + l[:n]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
key_length = 8
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
pfig = plt.subplot(key_length+1,1,1)
plt.bar(range(len(alphabet)), g_english, tick_label = rotate(list(alphabet), 0), align = 'center', color = 'b')
ctext_freq = []
for i in range(key_length):
ctext_i = ctext[i::key_length]
ctext_freq.append(np.array([ctext_i.count(c)/len(ctext_i) for c in alphabet]))
cfig = plt.subplot(key_length+1,1,i+2)
plt.bar(range(len(alphabet)), ctext_freq[i], tick_label = rotate(list(alphabet), 0), align = 'center', color = 'r')
"""
Explanation: This suggests a key length of 8.
### K2 - Frequency Analysis of Subsequences #1
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [12,16]
def rotate(l, n):
return l[n:] + l[:n]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
keyed_alphabet = compute_keyed_alphabet("KRYPTOS")
key_length = 8
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
g_english_keyed = [g_english[alphabet.find(c)] for c in keyed_alphabet]
pfig = plt.subplot(key_length+1,1,1)
plt.bar(range(len(alphabet)), g_english_keyed, tick_label = rotate(list(keyed_alphabet), 0), align = 'center', color = 'b')
ctext_freq = []
for i in range(key_length):
ctext_i = ctext[i::key_length]
ctext_freq.append(np.array([ctext_i.count(c)/len(ctext_i) for c in keyed_alphabet]))
cfig = plt.subplot(key_length+1,1,i+2)
plt.bar(range(len(keyed_alphabet)), ctext_freq[i], tick_label = rotate(list(keyed_alphabet), 0), align = 'center', color = 'r')
"""
Explanation: K2 - Frequency Analysis of Subsequences #2
Now, let's try that again, but use the cipher alphabet order instead.
End of explanation
"""
# https://blog.dominodatalab.com/interactive-dashboards-in-jupyter/
# http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
#import ipywidgets as widgets
plt.rcParams["figure.figsize"] = [18,16]
def rotate(l, n):
return l[n:] + l[:n]
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
keyed_alphabet = compute_keyed_alphabet("KRYPTOS")
ncopies = 2
key_length = 8
g_english = [0.0736, 0.0148, 0.0445, 0.0302, 0.102, 0.0227, 0.0122, 0.0277, 0.0855, 0.000557, 0.00237, 0.0342, 0.0206, 0.0717, 0.103, 0.0246, 0.00181, 0.0735, 0.0608, 0.0889, 0.0392, 0.0153, 0.0173, 0.000557, 0.032, 0.000278]
g_english_keyed = [g_english[alphabet.find(c)] for c in keyed_alphabet]
pfig = plt.subplot(key_length+1,1,1)
plt.bar(range(len(alphabet)*ncopies), g_english_keyed + [0]*26, tick_label = rotate(list(keyed_alphabet)*ncopies, 0), align = 'center', color = 'b')
ctext_freq = []
for i in range(key_length):
ctext_i = ctext[i::key_length]
ctext_freq.append(np.array([ctext_i.count(c)/len(ctext_i) for c in keyed_alphabet]))
cfig = plt.subplot(key_length+1,1,i+2)
plt.bar(range(len(keyed_alphabet)*ncopies), list(ctext_freq[i])*ncopies, tick_label = rotate(list(keyed_alphabet)*ncopies, 0), align = 'center', color = 'r')
"""
Explanation: K2 - Frequency Analysis of Subsequences #3
For K2, it looks like it might be possible to align the frequency plots by hand to break the key. To make this easier, let's create plots with two concatenated alphabets. That way, we can print and align them (or, of course, do this yourself on graph paper).
End of explanation
"""
from itertools import permutations
from ngram_score import ngram_score
import re
import pprint as pp
qgram = ngram_score('ngrams/en_sherlock_4grams') # load our 4gram statistics
trigram = ngram_score('ngrams/en_sherlock_3grams') # load our 3gram statistics
# keep track of the 100 best keys
N=100
L=42 # only process first L characters for speed, increase if needed for accuracy
alphakey = "KRYPTOS"
for KLEN in [8]: #range(3,16):
print "="*80
rec = nbest(N)
# exhaustively test all possible letters for first 3 entries of the key and keep track of the N best ones
# if KLEN=7, this will test e.g. FOOAAAA and BARAAAA
for i in permutations('ABCDEFGHIJKLMNOPQRSTUVWXYZ',3):
i = "".join(i)
key = ''.join(i) + 'A'*(KLEN-len(i))
decrypted_ctext = keyed_vigenere(ctext[:L], key, alpha_key = alphakey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += trigram.score(decrypted_ctext[j:j+3])
rec.add((score,''.join(i), decrypted_ctext))
next_rec = nbest(N)
# for the remaining KLEN-3 characters of the key,
for i in range(0,KLEN-3):
# go over the N best keys found so far...
for k in xrange(N):
# ...and determine the best next character of the key, while keeping best N keys so far
for c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
key = rec[k][1] + c
fullkey = key + 'A'*(KLEN-len(key))
decrypted_ctext = keyed_vigenere(ctext[:L], fullkey, alpha_key = alphakey, direction = -1)
score = 0
for j in range(0,len(ctext),KLEN):
score += qgram.score(decrypted_ctext[j:j+len(key)])
next_rec.add((score, key, decrypted_ctext))
rec = next_rec
next_rec = nbest(N)
# show the results
bestscore = rec[0][0]
bestkey = rec[0][1]
#decrypted_ctext = rec[0][2]
# always show entire decrypted ctext, even if the above analysis is done only on part of the ctext, e.g. ctext[0:100]
decrypted_ctext = keyed_vigenere(ctext, bestkey, alpha_key = alphakey, direction = -1)
print bestscore, 'klen', KLEN, ':"'+bestkey+'",', decrypted_ctext
# uncomment the following lines to see top-10 results
#pp.pprint(rec.store[0:10])
#print '\n'
"""
Explanation: Some of these could be guessed based on the alignment: at least the A and the first S seem reasonable to guess, giving A????S?? for the key, after which the possible candidate keys can be guessed or looked up, e.g. on onelook.com.
K2 - Vigenere Key is Unknown, Alphabet Key is Known
End of explanation
"""
ctext = "".join(ctext_kryptos[2:])
key = "ABSCISSA"
alphakey = "KRYPTOS"
print ctext
print keyed_vigenere(ctext, key, alpha_key = alphakey, direction = -1)
"""
Explanation: K2 - Determining Message Boundaries
End of explanation
"""
ctext_k2 = "".join(ctext_kryptos[2:14])
key = "ABSCISSA"
alphakey = "KRYPTOS"
ptext_k2 = keyed_vigenere(ctext_k2, key, alpha_key = alphakey, direction = -1)
print ctext_k2
print ptext_k2
"""
Explanation: So K2 indeed seems to be just the remainder of the first panel.
End of explanation
"""
from word_score import word_score
fitness = word_score()
print fitness.score(ptext_k2)
"""
Explanation: K2 - Determining Word Boundaries
End of explanation
"""
ctext_k2_corrected = ctext_k2.replace("WJLLAETG", "SWJLLAETG") # insert missing character near the end
key = "ABSCISSA"
alphakey = "KRYPTOS"
ptext_k2_corrected = keyed_vigenere(ctext_k2_corrected, key, alpha_key = alphakey, direction = -1)
print ctext_k2_corrected
print ptext_k2_corrected
from word_score import word_score
fitness = word_score()
print fitness.score(ptext_k2_corrected)
"""
Explanation: K2 - Correction to Ciphertext
http://www.elonka.com/kryptos/CorrectedK2Announcement.html
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/2d3a2ce4cdcb2dad9804801c80816516/parcellation.ipynb | bsd-3-clause | # Author: Eric Larson <larson.eric.d@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause
import mne
Brain = mne.viz.get_brain_class()
subjects_dir = mne.datasets.sample.data_path() / 'subjects'
mne.datasets.fetch_hcp_mmp_parcellation(subjects_dir=subjects_dir,
verbose=True)
mne.datasets.fetch_aparc_sub_parcellation(subjects_dir=subjects_dir,
verbose=True)
labels = mne.read_labels_from_annot(
'fsaverage', 'HCPMMP1', 'lh', subjects_dir=subjects_dir)
brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
cortex='low_contrast', background='white', size=(800, 600))
brain.add_annotation('HCPMMP1')
aud_label = [label for label in labels if label.name == 'L_A1_ROI-lh'][0]
brain.add_label(aud_label, borders=False)
"""
Explanation: Plot a cortical parcellation
In this example, we download the HCP-MMP1.0 parcellation
:footcite:GlasserEtAl2016 and show it on fsaverage.
We will also download the customized 448-label aparc
parcellation from :footcite:KhanEtAl2018.
<div class="alert alert-info"><h4>Note</h4><p>The HCP-MMP dataset has license terms restricting its use.
Of particular relevance:
"I will acknowledge the use of WU-Minn HCP data and data
derived from WU-Minn HCP data when publicly presenting any
results or algorithms that benefitted from their use."</p></div>
End of explanation
"""
brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
cortex='low_contrast', background='white', size=(800, 600))
brain.add_annotation('HCPMMP1_combined')
"""
Explanation: We can also plot a combined set of labels (23 per hemisphere).
End of explanation
"""
brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
cortex='low_contrast', background='white', size=(800, 600))
brain.add_annotation('aparc_sub')
"""
Explanation: We can add another custom parcellation
End of explanation
"""
|
andmax/gpufilter | python/alg5pe.ipynb | mit | import math
import cmath
import numpy as np
from scipy import ndimage, linalg
from skimage.color import rgb2gray
from skimage.measure import structural_similarity as ssim
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
%matplotlib inline
plt.gray() # to plot gray images using gray scale
%run 'all_functions.ipynb'
"""
Explanation: Parallel Recursive Filtering of Infinite Input Extensions
This notebook tests alg5pe
Algorithm 5 Periodic Extension
End of explanation
"""
%%time
X1 = plt.imread('input.png')
X1 = rgb2gray(X1)
s = 16. # sigma for testing filtering
X2 = np.copy(X1).astype(np.float64)
# Gaussian filter runs with periodic extension
X2 = ndimage.filters.gaussian_filter(X1, sigma=s, mode='wrap')
"""
Explanation: First: load the test image and run Gaussian filter on it
End of explanation
"""
%%time
b = 32 # squared block size (b,b)
w = [ weights1(s), weights2(s) ] # weights of the recursive filter
width, height = X1.shape[1], X1.shape[0]
m_size, n_size = get_mn(X1, b)
blocks = break_blocks(X1, b, m_size, n_size)
# Pre-computation of matrices and pre-allocation of carries
alg5m1 = build_alg5_matrices(b, 1, w[0], width, height)
alg5m2 = build_alg5_matrices(b, 2, w[1], width, height)
alg5c1 = build_alg5_carries(m_size, n_size, b, 1)
alg5c2 = build_alg5_carries(m_size, n_size, b, 2)
alg5pem1 = build_pe_matrices(1, w[0], alg5m1)
alg5pem2 = build_pe_matrices(2, w[1], alg5m2)
"""
Explanation: Second: setup basic parameters from the input image
End of explanation
"""
%%time
# Running alg5pe with filter order r = 1
alg5_stage1(m_size, n_size, 1, w[0], alg5m1, alg5c1, blocks)
alg5_pe_stage23(m_size, n_size, alg5m1, alg5pem1, alg5c1)
alg5_pe_stage45(m_size, n_size, 1, alg5m1, alg5pem1, alg5c1)
alg5_stage6(m_size, n_size, w[0], alg5c1, blocks)
# Running alg5pe with filter order r = 2
alg5_stage1(m_size, n_size, 2, w[1], alg5m2, alg5c2, blocks)
alg5_pe_stage23(m_size, n_size, alg5m2, alg5pem2, alg5c2)
alg5_pe_stage45(m_size, n_size, 2, alg5m2, alg5pem2, alg5c2)
alg5_stage6(m_size, n_size, w[1], alg5c2, blocks)
# Join blocks back together
X3 = join_blocks(blocks, b, m_size, n_size, X1.shape)
"""
Explanation: Third: run alg5pe with filter order 1 then 2
End of explanation
"""
fig, (ax2, ax3) = plt.subplots(1, 2)
fig.set_figheight(9)
fig.set_figwidth(14)
ax2.imshow(X2)
ax3.imshow(X3)
print '[ Mean Squared Error:', mean_squared_error(X2, X3), ' ]',
print '[ Structural similarity:', ssim(X2, X3), ' ]'
"""
Explanation: Fourth: show both results and error measurements
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-2/cmip6/models/sandbox-1/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: TEST-INSTITUTE-2
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
legacysurvey/pipeline | doc/nb/qa-dr8c-maskbits.ipynb | gpl-2.0 | import os, time
import numpy as np
import fitsio
from glob import glob
import matplotlib.pyplot as plt
from astropy.table import vstack, Table, hstack
"""
Explanation: Maskbits QA in dr8c
End of explanation
"""
MASKBITS = dict(
NPRIMARY = 0x1, # not PRIMARY
BRIGHT = 0x2,
SATUR_G = 0x4,
SATUR_R = 0x8,
SATUR_Z = 0x10,
ALLMASK_G = 0x20,
ALLMASK_R = 0x40,
ALLMASK_Z = 0x80,
WISEM1 = 0x100, # WISE masked
WISEM2 = 0x200,
BAILOUT = 0x400, # bailed out of processing
MEDIUM = 0x800, # medium-bright star
GALAXY = 0x1000, # LSLGA large galaxy
CLUSTER = 0x2000, # Cluster catalog source
)
# Bits in the "brightblob" bitmask
IN_BLOB = dict(
BRIGHT = 0x1,
MEDIUM = 0x2,
CLUSTER = 0x4,
GALAXY = 0x8,
)
def gather_gaia(camera='decam'):
#dr8dir = '/global/project/projectdirs/cosmo/work/legacysurvey/dr8b'
dr8dir = '/Users/ioannis/work/legacysurvey/dr8c'
#outdir = os.getenv('HOME')
outdir = dr8dir
for cam in np.atleast_1d(camera):
outfile = os.path.join(outdir, 'check-gaia-{}.fits'.format(cam))
if os.path.isfile(outfile):
gaia = Table.read(outfile)
else:
out = []
catfile = glob(os.path.join(dr8dir, cam, 'tractor', '???', 'tractor*.fits'))
            for ii, ff in enumerate(catfile):
if ii % 100 == 0:
print('{} / {}'.format(ii, len(catfile)))
cc = Table(fitsio.read(ff, upper=True, columns=['BRICK_PRIMARY', 'BRICKNAME', 'BX', 'BY',
'REF_CAT', 'REF_ID', 'RA', 'DEC', 'TYPE',
'FLUX_G', 'FLUX_R', 'FLUX_Z',
'FLUX_IVAR_G', 'FLUX_IVAR_R', 'FLUX_IVAR_Z',
'BRIGHTBLOB', 'MASKBITS', 'GAIA_PHOT_G_MEAN_MAG']))
cc = cc[cc['BRICK_PRIMARY']]
out.append(cc)
            gaia = vstack(out)
            gaia.write(outfile, overwrite=True)
return gaia
%time gaia = gather_gaia(camera='decam')
"""
Explanation: Check the masking
End of explanation
"""
idup = gaia['TYPE'] == 'DUP'
assert(np.all(gaia[idup]['MASKBITS'] & MASKBITS['GALAXY'] != 0))
assert(np.all(gaia[idup]['FLUX_G'] == 0))
for band in ('G', 'R', 'Z'):
assert(np.all(gaia[idup]['FLUX_{}'.format(band)] == 0))
assert(np.all(gaia[idup]['FLUX_IVAR_{}'.format(band)] == 0))
gaia[idup]
"""
Explanation: All DUPs should be in an LSLGA blob.
End of explanation
"""
ibright = np.where(((gaia['MASKBITS'] & MASKBITS['BRIGHT']) != 0) * (gaia['REF_CAT'] == 'G2') * (gaia['TYPE'] != 'DUP'))[0]
#bb = (gaia['BRIGHTBLOB'][ibright] & IN_BLOB['BRIGHT'] != 0) == False
#gaia[ibright][bb]
#gaia[ibright]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))
_ = ax1.hist(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'], bins=100)
ax1.set_xlabel('Gaia G')
ax1.set_title('MASKBITS & BRIGHT, REF_CAT==G2, TYPE!=DUP', fontsize=14)
isb = np.where(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'] < 13.0)[0]
isf = np.where(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'] >= 13.0)[0]
print(len(isb), len(isf))
ax2.scatter(gaia['RA'][ibright][isb], gaia['DEC'][ibright][isb], s=10, color='green', label='G<13')
ax2.scatter(gaia['RA'][ibright][isf], gaia['DEC'][ibright][isf], s=10, color='red', alpha=0.5, label='G>=13')
ax2.legend(fontsize=14, frameon=True)
ax2.set_title('MASKBITS & BRIGHT, REF_CAT==G2, TYPE!=DUP', fontsize=14)
#ax.set_xlim(136.8, 137.2)
#ax.set_ylim(32.4, 32.8)
print(np.sum(gaia['BRIGHTBLOB'][ibright][isf] & IN_BLOB['BRIGHT'] != 0))
check = np.where(gaia['BRIGHTBLOB'][ibright][isf] & IN_BLOB['BRIGHT'] == 0)[0] # no bright targeting bit set
for key in MASKBITS.keys():
print(key, np.sum(gaia['MASKBITS'][ibright][isf][check] & MASKBITS[key] != 0))
gaia[ibright][isf][check]
"""
Explanation: 1) Find all bright Gaia stars.
2) Make sure the magnitude limits are correct.
3) Make sure the masking behavior around them is correct.
End of explanation
"""
mask = fitsio.read('decam/coadd/132/1325p325/legacysurvey-1325p325-maskbits.fits.fz')
#print(mask.max())
c = plt.imshow(mask > 0, origin='lower')
#plt.colorbar(c)
ww = gaia['BRICKNAME'] == '1325p325'
eq = []
for obj in gaia[ww]:
eq.append(mask[int(obj['BY']), int(obj['BX'])] == obj['MASKBITS'])
assert(np.all(eq))
"""
Explanation: Make sure the MASKBITS values are set correctly.
End of explanation
"""
yunfeiz/py_learnt | quant/sample_code/tushare.ipynb | apache-2.0 | import tushare as ts
import pandas as pd
stock_selected='600699'
df1, data1 = ts.top10_holders(code=stock_selected, gdtype='1')
df1 = df1.sort_values('quarter', ascending=True)
df1.tail(10)
#qts = list(df1['quarter'])
#data = list(df1['props'])
#name = ts.get_realtime_quotes(stock_selected)['name'][0]
"""
Explanation: 1. Top 10 tradable shareholders
End of explanation
"""
import tushare as ts
import pandas as pd
from IPython.display import HTML
# Top 10 tradable shareholders for 2016 Q3
df2, data2 = ts.top10_holders(code=stock_selected, year=2016, quarter=3, gdtype='1')
# Get the names of the top 10 tradable shareholders
top10name = str(list(data2['name']))
top10name
"""
Explanation: 2. Top 10 tradable shareholders for a given quarter
End of explanation
"""
import tushare as ts
df=ts.get_stock_basics()
# Look up a single stock by code (the basics table is indexed by code);
# note df.loc uses square brackets, and the older df.ix is deprecated:
#df.loc['002281']
#df.info()
df[df.name == u'四维图新']
df_out=df[(df.profit>20) &
(df.gpr > 25) &
(df.pe <120) &
(df.pe >0) &
(df.rev >0)][['name','industry','pe','profit','esp','rev','holders','gpr','npr']]
df_out.sort_values(by='npr',ascending=False, inplace = True)
df_out.rename(columns={'name': 'Stock', 'industry': 'Industry', 'pe': 'P/E',
                       'profit': 'Profit YoY', 'esp': 'EPS', 'rev': 'Revenue YoY',
                       'holders': 'Holders', 'gpr': 'Gross margin %', 'npr': 'Net margin %'})[:50]
"""
Explanation: Get basic information for companies listed on the Shanghai and Shenzhen exchanges. Attributes include:
code, stock code
name, company name
industry, industry
area, region
pe, price-to-earnings ratio
outstanding, tradable shares (100 million)
totals, total shares (100 million)
totalAssets, total assets (10 thousand)
liquidAssets, current assets
fixedAssets, fixed assets
reserved, capital reserve
reservedPerShare, capital reserve per share
esp, earnings per share
bvps, net assets per share
pb, price-to-book ratio
timeToMarket, listing date
undp, undistributed profit
perundp, undistributed profit per share
rev, revenue growth year-on-year (%)
profit, profit growth year-on-year (%)
gpr, gross profit margin (%)
npr, net profit margin (%)
holders, number of shareholders
Usage:
End of explanation
"""
import tushare as ts
df=ts.get_report_data(2016,4)
#df[df.code=='002405']
df
"""
Explanation: Performance reports (main table)
Get performance report data by year and quarter. Fetching the data takes a while and depends on your network speed; please be patient. The returned data attributes are:
code, stock code
name, company name
esp, earnings per share
eps_yoy, earnings per share year-on-year growth (%)
bvps, net assets per share
roe, return on equity (%)
epcf, cash flow per share (yuan)
net_profits, net profit (10 thousand yuan)
profits_yoy, net profit year-on-year growth (%)
distrib, distribution plan
report_date, release date
Usage:
Get performance report data for Q3 2014:
ts.get_report_data(2014,3)
Returns:
结果返回:
code name esp eps_yoy bvps roe epcf net_profits
End of explanation
"""
import tushare as ts
df_profit = ts.get_profit_data(2017,1)
#df_profit.info()
#df_profit[df_profit.code == '002405']
df_out=df_profit[(df_profit.roe>10) & (df_profit.gross_profit_rate > 25) & (df_profit.net_profits >0)]
df_out.sort_values(by='roe',ascending=False, inplace = True)
df_out[:50]
"""
Explanation: Profitability
Get profitability data by year and quarter. The returned data attributes are:
code, stock code
name, company name
roe, return on equity (%)
net_profit_ratio, net profit margin (%)
gross_profit_rate, gross profit margin (%)
net_profits, net profit (10 thousand yuan)
esp, earnings per share
business_income, operating revenue (million yuan)
bips, main business revenue per share (yuan)
Usage:
Get profitability data for Q3 2014:
ts.get_profit_data(2014,3)
Returns:
End of explanation
"""
import tushare as ts
df_operation = ts.get_operation_data(2017,1)
df_out=df_operation[df_operation.currentasset_days<120]
df_out.sort_values(by='currentasset_days',ascending=False, inplace = True)
df_out[:50]
"""
Explanation: Operating capability
Get operating-capability data by year and quarter. The returned data attributes are:
code, stock code
name, company name
arturnover, accounts receivable turnover (times)
arturndays, accounts receivable turnover days (days)
inventory_turnover, inventory turnover (times)
inventory_days, inventory turnover days (days)
currentasset_turnover, current asset turnover (times)
currentasset_days, current asset turnover days (days)
Usage:
Get operating-capability data for Q3 2014:
ts.get_operation_data(2014,3)
Returns:
code name arturnover arturndays inventory_turnover inventory_days \
End of explanation
"""
# -*- coding: UTF-8 -*-
import tushare as ts
import numpy as np
import pandas as pd
df_growth = ts.get_growth_data(2017,1)
df_out = df_growth[(df_growth.nprg >20) &
(df_growth.mbrg >20)]
df_out.sort_values(by= 'nprg', ascending = True, inplace=True)
writer = pd.ExcelWriter('growth.xlsx')
df_out.to_excel(writer,'growth')
writer.save()
#df_out.to_csv("growth.csv", encoding="utf_8_sig")  # to_csv has no dtype argument
df_out[:50]
"""
Explanation: Growth capability
Get growth-capability data by year and quarter. The returned data attributes are:
code, stock code
name, company name
mbrg, main business revenue growth rate (%)
nprg, net profit growth rate (%)
nav, net asset growth rate
targ, total asset growth rate
epsg, earnings per share growth rate
seg, shareholders' equity growth rate
Usage:
Get growth-capability data for Q3 2014:
ts.get_growth_data(2014,3)
Returns:
End of explanation
"""
import tushare as ts
df_cash = ts.get_cashflow_data(2016,4)
df_out = df_cash[(df_cash.cf_sales > 0)]
df_out.sort_values(by = 'cf_sales', ascending = True, inplace = True)
df_out[:50]
"""
Explanation: Debt-paying capability
Get debt-paying-capability data by year and quarter. The returned data attributes are:
code, stock code
name, company name
currentratio, current ratio
quickratio, quick ratio
cashratio, cash ratio
icratio, interest coverage ratio
sheqratio, shareholders' equity ratio
adratio, shareholders' equity growth rate
Usage:
Get debt-paying-capability data for Q3 2014:
ts.get_debtpaying_data(2014,3)
Returns:
code name currentratio quickratio cashratio icratio \
Cash flow
Get cash flow data by year and quarter. The returned data attributes are:
code, stock code
name, company name
cf_sales, ratio of net operating cash flow to sales revenue
rateofreturn, operating cash flow return on assets
cf_nm, ratio of net operating cash flow to net profit
cf_liabilities, ratio of net operating cash flow to liabilities
cashflowratio, cash flow ratio
Usage:
Get cash flow data for Q3 2014:
ts.get_cashflow_data(2014,3)
Returns:
code name cf_sales rateofreturn cf_nm cf_liabilities \
End of explanation
"""
import tushare as ts
import pandas as pd
from IPython.display import HTML
# Forward-adjusted daily bars for the selected stock
#df = ts.get_k_data(stock_selected, start='2016-01-01', end='2016-12-02')
df = ts.get_k_data(stock_selected, start='2016-01-01')
datastr = ''
for idx in df.index:
rowstr = '[\'%s\',%s,%s,%s,%s]' % (df.ix[idx]['date'], df.ix[idx]['open'],
df.ix[idx]['close'], df.ix[idx]['low'],
df.ix[idx]['high'])
datastr += rowstr + ','
datastr = datastr[:-1]
# Get the stock name
name = ts.get_realtime_quotes(stock_selected)['name'][0]
datahead = """
<div id="chart" style="width:800px; height:600px;"></div>
<script>
require.config({ paths:{ echarts: '//cdn.bootcss.com/echarts/3.2.3/echarts.min', } });
require(['echarts'],function(ec){
var myChart = ec.init(document.getElementById('chart'));
"""
datavar = 'var data0 = splitData([%s]);' % datastr
funcstr = """
function splitData(rawData) {
var categoryData = [];
var values = []
for (var i = 0; i < rawData.length; i++) {
categoryData.push(rawData[i].splice(0, 1)[0]);
values.push(rawData[i])
}
return {
categoryData: categoryData,
values: values
};
}
function calculateMA(dayCount) {
var result = [];
for (var i = 0, len = data0.values.length; i < len; i++) {
if (i < dayCount) {
result.push('-');
continue;
}
var sum = 0;
for (var j = 0; j < dayCount; j++) {
sum += data0.values[i - j][1];
}
result.push((sum / dayCount).toFixed(2));
}
return result;
}
option = {
title: {
"""
namestr = 'text: \'%s\',' %name
functail = """
left: 0
},
tooltip: {
trigger: 'axis',
axisPointer: {
type: 'line'
}
},
legend: {
data: ['日K', 'MA5', 'MA10', 'MA20', 'MA30']
},
grid: {
left: '10%',
right: '10%',
bottom: '15%'
},
xAxis: {
type: 'category',
data: data0.categoryData,
scale: true,
boundaryGap : false,
axisLine: {onZero: false},
splitLine: {show: false},
splitNumber: 20,
min: 'dataMin',
max: 'dataMax'
},
yAxis: {
scale: true,
splitArea: {
show: true
}
},
dataZoom: [
{
type: 'inside',
start: 50,
end: 100
},
{
show: true,
type: 'slider',
y: '90%',
start: 50,
end: 100
}
],
series: [
{
name: '日K',
type: 'candlestick',
data: data0.values,
markPoint: {
label: {
normal: {
formatter: function (param) {
return param != null ? Math.round(param.value) : '';
}
}
},
data: [
{
name: '标点',
coord: ['2013/5/31', 2300],
value: 2300,
itemStyle: {
normal: {color: 'rgb(41,60,85)'}
}
},
{
name: 'highest value',
type: 'max',
valueDim: 'highest'
},
{
name: 'lowest value',
type: 'min',
valueDim: 'lowest'
},
{
name: 'average value on close',
type: 'average',
valueDim: 'close'
}
],
tooltip: {
formatter: function (param) {
return param.name + '<br>' + (param.data.coord || '');
}
}
},
markLine: {
symbol: ['none', 'none'],
data: [
[
{
name: 'from lowest to highest',
type: 'min',
valueDim: 'lowest',
symbol: 'circle',
symbolSize: 10,
label: {
normal: {show: false},
emphasis: {show: false}
}
},
{
type: 'max',
valueDim: 'highest',
symbol: 'circle',
symbolSize: 10,
label: {
normal: {show: false},
emphasis: {show: false}
}
}
],
{
name: 'min line on close',
type: 'min',
valueDim: 'close'
},
{
name: 'max line on close',
type: 'max',
valueDim: 'close'
}
]
}
},
{
name: 'MA5',
type: 'line',
data: calculateMA(5),
smooth: true,
lineStyle: {
normal: {opacity: 0.5}
}
},
{
name: 'MA10',
type: 'line',
data: calculateMA(10),
smooth: true,
lineStyle: {
normal: {opacity: 0.5}
}
},
{
name: 'MA20',
type: 'line',
data: calculateMA(20),
smooth: true,
lineStyle: {
normal: {opacity: 0.5}
}
},
{
name: 'MA30',
type: 'line',
data: calculateMA(30),
smooth: true,
lineStyle: {
normal: {opacity: 0.5}
}
},
]
};
myChart.setOption(option);
});
</script>
"""
HTML(datahead + datavar + funcstr + namestr + functail)
import tushare as ts
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
stock_selected='002281'
df = ts.get_k_data(stock_selected, start='2016-01-01')
df.info()
#df['close'].plot(grid=True)
# Rolling means via the modern Series.rolling API (pd.rolling_mean is deprecated)
df['42d'] = np.round(df['close'].rolling(window=42).mean(), 2)
df['252d'] = np.round(df['close'].rolling(window=252).mean(), 2)
#df[['close','42d','252d']].tail(10)
df[['close','42d','252d']].plot(grid=True)
df['42-252']=df['42d']-df['252d']
#df['42-252'].tail(10)
SD=1
df['regime'] = np.where(df['42-252']>SD,1,0)
df['regime'] = np.where(df['42-252'] < -SD,-1,df['regime'])
#df['regime'].head(10)
df['regime'].tail(10)
#df['regime'].plot(lw=1.5)
#plt.ylim(-1.1, 1.1)
plt.show()
"""
Explanation: 3. Candlestick chart
End of explanation
"""
keras-team/keras-io | examples/vision/ipynb/keypoint_detection.ipynb | apache-2.0 | !pip install -q -U imgaug
"""
Explanation: Keypoint Detection with Transfer Learning
Author: Sayak Paul<br>
Date created: 2021/05/02<br>
Last modified: 2021/05/02<br>
Description: Training a keypoint detector with data augmentation and transfer learning.
Keypoint detection consists of locating key object parts. For example, the key parts
of our faces include nose tips, eyebrows, eye corners, and so on. These parts help to
represent the underlying object in a feature-rich manner. Keypoint detection has
applications that include pose estimation, face detection, etc.
In this example, we will build a keypoint detector using the
StanfordExtra dataset,
using transfer learning. This example requires TensorFlow 2.4 or higher,
as well as imgaug library,
which can be installed using the following command:
End of explanation
"""
!wget -q http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar
"""
Explanation: Data collection
The StanfordExtra dataset contains 12,000 images of dogs together with keypoints and
segmentation maps. It is developed from the Stanford dogs dataset.
It can be downloaded with the command below:
End of explanation
"""
!tar xf images.tar
!unzip -qq ~/stanfordextra_v12.zip
"""
Explanation: Annotations are provided as a single JSON file in the StanfordExtra dataset and one needs
to fill this form to get access to it. The
authors explicitly instruct users not to share the JSON file, and this example respects this wish:
you should obtain the JSON file yourself.
The JSON file is expected to be locally available as stanfordextra_v12.zip.
After the files are downloaded, we can extract the archives.
End of explanation
"""
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow as tf
from imgaug.augmentables.kps import KeypointsOnImage
from imgaug.augmentables.kps import Keypoint
import imgaug.augmenters as iaa
from PIL import Image
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import json
import os
"""
Explanation: Imports
End of explanation
"""
IMG_SIZE = 224
BATCH_SIZE = 64
EPOCHS = 5
NUM_KEYPOINTS = 24 * 2 # 24 pairs each having x and y coordinates
"""
Explanation: Define hyperparameters
End of explanation
"""
IMG_DIR = "Images"
JSON = "StanfordExtra_V12/StanfordExtra_v12.json"
KEYPOINT_DEF = (
"https://github.com/benjiebob/StanfordExtra/raw/master/keypoint_definitions.csv"
)
# Load the ground-truth annotations.
with open(JSON) as infile:
json_data = json.load(infile)
# Set up a dictionary, mapping all the ground-truth information
# with respect to the path of the image.
json_dict = {i["img_path"]: i for i in json_data}
"""
Explanation: Load data
The authors also provide a metadata file that specifies additional information about the
keypoints, like color information, animal pose name, etc. We will load this file in a pandas
dataframe to extract information for visualization purposes.
End of explanation
"""
# Load the metadata definition file and preview it.
keypoint_def = pd.read_csv(KEYPOINT_DEF)
keypoint_def.head()
# Extract the colours and labels.
colours = keypoint_def["Hex colour"].values.tolist()
colours = ["#" + colour for colour in colours]
labels = keypoint_def["Name"].values.tolist()
# Utility for reading an image and for getting its annotations.
def get_dog(name):
data = json_dict[name]
img_data = plt.imread(os.path.join(IMG_DIR, data["img_path"]))
# If the image is RGBA convert it to RGB.
if img_data.shape[-1] == 4:
img_data = img_data.astype(np.uint8)
img_data = Image.fromarray(img_data)
img_data = np.array(img_data.convert("RGB"))
data["img_data"] = img_data
return data
"""
Explanation: A single entry of json_dict looks like the following:
'n02085782-Japanese_spaniel/n02085782_2886.jpg':
{'img_bbox': [205, 20, 116, 201],
'img_height': 272,
'img_path': 'n02085782-Japanese_spaniel/n02085782_2886.jpg',
'img_width': 350,
'is_multiple_dogs': False,
'joints': [[108.66666666666667, 252.0, 1],
[147.66666666666666, 229.0, 1],
[163.5, 208.5, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[54.0, 244.0, 1],
[77.33333333333333, 225.33333333333334, 1],
[79.0, 196.5, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[150.66666666666666, 86.66666666666667, 1],
[88.66666666666667, 73.0, 1],
[116.0, 106.33333333333333, 1],
[109.0, 123.33333333333333, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
'seg': ...}
In this example, the keys we are interested in are:
img_path
joints
There are a total of 24 entries present inside joints. Each entry has 3 values:
x-coordinate
y-coordinate
visibility flag of the keypoints (1 indicates visibility and 0 indicates non-visibility)
As we can see joints contain multiple [0, 0, 0] entries which denote that those
keypoints were not labeled. In this example, we will consider both non-visible and
unlabeled keypoints in order to allow mini-batch learning.
End of explanation
"""
# Parts of this code come from here:
# https://github.com/benjiebob/StanfordExtra/blob/master/demo.ipynb
def visualize_keypoints(images, keypoints):
fig, axes = plt.subplots(nrows=len(images), ncols=2, figsize=(16, 12))
[ax.axis("off") for ax in np.ravel(axes)]
for (ax_orig, ax_all), image, current_keypoint in zip(axes, images, keypoints):
ax_orig.imshow(image)
ax_all.imshow(image)
# If the keypoints were formed by `imgaug` then the coordinates need
# to be iterated differently.
if isinstance(current_keypoint, KeypointsOnImage):
for idx, kp in enumerate(current_keypoint.keypoints):
ax_all.scatter(
[kp.x], [kp.y], c=colours[idx], marker="x", s=50, linewidths=5
)
else:
current_keypoint = np.array(current_keypoint)
# Since the last entry is the visibility flag, we discard it.
current_keypoint = current_keypoint[:, :2]
for idx, (x, y) in enumerate(current_keypoint):
ax_all.scatter([x], [y], c=colours[idx], marker="x", s=50, linewidths=5)
plt.tight_layout(pad=2.0)
plt.show()
# Select four samples randomly for visualization.
samples = list(json_dict.keys())
num_samples = 4
selected_samples = np.random.choice(samples, num_samples, replace=False)
images, keypoints = [], []
for sample in selected_samples:
data = get_dog(sample)
image = data["img_data"]
keypoint = data["joints"]
images.append(image)
keypoints.append(keypoint)
visualize_keypoints(images, keypoints)
"""
Explanation: Visualize data
Now, we write a utility function to visualize the images and their keypoints.
End of explanation
"""
class KeyPointsDataset(keras.utils.Sequence):
def __init__(self, image_keys, aug, batch_size=BATCH_SIZE, train=True):
self.image_keys = image_keys
self.aug = aug
self.batch_size = batch_size
self.train = train
self.on_epoch_end()
def __len__(self):
return len(self.image_keys) // self.batch_size
def on_epoch_end(self):
self.indexes = np.arange(len(self.image_keys))
if self.train:
np.random.shuffle(self.indexes)
def __getitem__(self, index):
indexes = self.indexes[index * self.batch_size : (index + 1) * self.batch_size]
image_keys_temp = [self.image_keys[k] for k in indexes]
(images, keypoints) = self.__data_generation(image_keys_temp)
return (images, keypoints)
def __data_generation(self, image_keys_temp):
batch_images = np.empty((self.batch_size, IMG_SIZE, IMG_SIZE, 3), dtype="int")
batch_keypoints = np.empty(
(self.batch_size, 1, 1, NUM_KEYPOINTS), dtype="float32"
)
for i, key in enumerate(image_keys_temp):
data = get_dog(key)
current_keypoint = np.array(data["joints"])[:, :2]
kps = []
# To apply our data augmentation pipeline, we first need to
# form Keypoint objects with the original coordinates.
for j in range(0, len(current_keypoint)):
kps.append(Keypoint(x=current_keypoint[j][0], y=current_keypoint[j][1]))
# We then project the original image and its keypoint coordinates.
current_image = data["img_data"]
kps_obj = KeypointsOnImage(kps, shape=current_image.shape)
# Apply the augmentation pipeline.
(new_image, new_kps_obj) = self.aug(image=current_image, keypoints=kps_obj)
batch_images[i,] = new_image
# Parse the coordinates from the new keypoint object.
kp_temp = []
for keypoint in new_kps_obj:
kp_temp.append(np.nan_to_num(keypoint.x))
kp_temp.append(np.nan_to_num(keypoint.y))
# More on why this reshaping later.
batch_keypoints[i,] = np.array(kp_temp).reshape(1, 1, 24 * 2)
# Scale the coordinates to [0, 1] range.
batch_keypoints = batch_keypoints / IMG_SIZE
return (batch_images, batch_keypoints)
"""
Explanation: The plots show that we have images of non-uniform sizes, which is expected in most
real-world scenarios. However, if we resize these images to have a uniform shape (for
instance (224 x 224)) their ground-truth annotations will also be affected. The same
applies if we apply any geometric transformation (a horizontal flip, for example) to an image.
Fortunately, imgaug provides utilities that can handle this issue.
In the next section, we will write a data generator inheriting the
keras.utils.Sequence class
that applies data augmentation on batches of data using imgaug.
Prepare data generator
End of explanation
"""
train_aug = iaa.Sequential(
[
iaa.Resize(IMG_SIZE, interpolation="linear"),
iaa.Fliplr(0.3),
# `Sometimes()` applies a function randomly to the inputs with
# a given probability (0.3, in this case).
iaa.Sometimes(0.3, iaa.Affine(rotate=10, scale=(0.5, 0.7))),
]
)
test_aug = iaa.Sequential([iaa.Resize(IMG_SIZE, interpolation="linear")])
"""
Explanation: To know more about how to operate with keypoints in imgaug check out
this document.
Define augmentation transforms
End of explanation
"""
np.random.shuffle(samples)
train_keys, validation_keys = (
samples[int(len(samples) * 0.15) :],
samples[: int(len(samples) * 0.15)],
)
"""
Explanation: Create training and validation splits
End of explanation
"""
train_dataset = KeyPointsDataset(train_keys, train_aug)
validation_dataset = KeyPointsDataset(validation_keys, test_aug, train=False)
print(f"Total batches in training set: {len(train_dataset)}")
print(f"Total batches in validation set: {len(validation_dataset)}")
sample_images, sample_keypoints = next(iter(train_dataset))
assert sample_keypoints.max() == 1.0
assert sample_keypoints.min() == 0.0
sample_keypoints = sample_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE
visualize_keypoints(sample_images[:4], sample_keypoints)
"""
Explanation: Data generator investigation
End of explanation
"""
def get_model():
# Load the pre-trained weights of MobileNetV2 and freeze the weights
backbone = keras.applications.MobileNetV2(
weights="imagenet", include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3)
)
backbone.trainable = False
inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)
x = backbone(x)
x = layers.Dropout(0.3)(x)
x = layers.SeparableConv2D(
NUM_KEYPOINTS, kernel_size=5, strides=1, activation="relu"
)(x)
outputs = layers.SeparableConv2D(
NUM_KEYPOINTS, kernel_size=3, strides=1, activation="sigmoid"
)(x)
return keras.Model(inputs, outputs, name="keypoint_detector")
"""
Explanation: Model building
The Stanford dogs dataset (on which
the StanfordExtra dataset is based) was built using the ImageNet-1k dataset.
So, it is likely that the models pretrained on the ImageNet-1k dataset would be useful
for this task. We will use a MobileNetV2 pre-trained on this dataset as a backbone to
extract meaningful features from the images and then pass those to a custom regression
head for predicting coordinates.
End of explanation
"""
get_model().summary()
"""
Explanation: Our custom network is fully convolutional, which makes it more
parameter-friendly than the same network built with fully-connected dense layers.
End of explanation
"""
model = get_model()
model.compile(loss="mse", optimizer=keras.optimizers.Adam(1e-4))
model.fit(train_dataset, validation_data=validation_dataset, epochs=EPOCHS)
"""
Explanation: Notice the output shape of the network: (None, 1, 1, 48). This is why we have reshaped
the coordinates as: batch_keypoints[i, :] = np.array(kp_temp).reshape(1, 1, 24 * 2).
Model compilation and training
For this example, we will train the network only for five epochs.
End of explanation
"""
sample_val_images, sample_val_keypoints = next(iter(validation_dataset))
sample_val_images = sample_val_images[:4]
sample_val_keypoints = sample_val_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE
predictions = model.predict(sample_val_images).reshape(-1, 24, 2) * IMG_SIZE
# Ground-truth
visualize_keypoints(sample_val_images, sample_val_keypoints)
# Predictions
visualize_keypoints(sample_val_images, predictions)
"""
Explanation: Make predictions and visualize them
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cccma/cmip6/models/sandbox-1/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-1', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if it differs from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sulfur cycle modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen is present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen is present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled?
End of explanation
"""
|
PythonFreeCourse/Notebooks | week04/2_Dictionaries.ipynb | mit | items = ['banana', 'apple', 'carrot']
stock = [2, 3, 4]
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.">
<span style="text-align: right; direction: rtl; float: right;">מילונים</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ברשימה הבאה, כל תבליט מייצג אוסף של נתונים:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>בחנות של האדון קשטן יש 2 בננות, 3 תפוחים ו־4 גזרים.</li>
<li>מספר הזהות של ג'ני הוא 086753092, של קווין 133713370, של איינשטיין 071091797 ושל מנחם 111111118.</li>
<li>לקווין מהסעיף הקודם יש צוללות בצבע אדום וכחול. הצוללות של ג'ני מהסעיף הקודם בצבע שחור וירוק. הצוללת שלי צהובה.</li>
<li>המחיר של פאי בחנות של קשטן הוא 3.141 ש"ח. המחיר של אווז מחמד בחנות של קשטן הוא 9.0053 ש"ח.</li>
</ul>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נסו למצוא מאפיינים משותפים לאוספים שהופיעו מעלה.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אפשר לחלק כל אחד מהאוספים שכתבנו למעלה ל־2 קבוצות ערכים.<br>
הראשונה – הנושאים של האוסף. עבור החנות של קשטן, לדוגמה, הפריט שאנחנו מחזיקים בחנות.<br>
השנייה – הפריטים שהם <em>נתון כלשהו</em> בנוגע לפריט הראשון: המלאי של אותו פריט, לדוגמה.<br>
</p>
<figure>
<img src="images/dictionary_groups.svg?n=5" style="max-width:100%; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה מופיעים 4 עיגולים בימין ו־4 עיגולים בשמאל. העיגולים בימין, בעלי הכותרת 'נושא', מצביעים על העיגולים בשמאל שכותרתם 'נתון לגבי הנושא'. כל עיגול בימין מצביע לעיגול בשמאל. 'פריט בחנות' מצביע ל'מלאי של הפריט', 'מספר תעודת זהות' מצביע ל'השם שמשויך למספר', 'בן אדם' מצביע ל'צבעי הצוללות שבבעלותו' ו'פריט בחנות' (עיגול נוסף באותו שם כמו העיגול הראשון) מצביע ל'מחיר הפריט'."/>
<figcaption style="text-align: center; direction: rtl;">חלוקת האוספים ל־2 קבוצות של ערכים.</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
פריטים מהקבוצה הראשונה לעולם לא יחזרו על עצמם – אין היגיון בכך ש"תפוח ירוק" יופיע פעמיים ברשימת המלאי בחנות, ולא ייתכן מצב של שני מספרי זהות זהים.<br>
הפריטים מהקבוצה השנייה, לעומת זאת, יכולים לחזור על עצמם – הגיוני שתהיה אותה כמות של בננות ותפוחים בחנות, או שיהיו אנשים בעלי מספרי זהות שונים שנקראים "משה כהן".
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נבחן לעומק את המאפיינים המשותפים בדוגמאות שלעיל.
</p>
<table style="text-align: right; direction: rtl; clear: both; font-size: 1.3rem">
<caption style="text-align: center; direction: rtl; clear: both; font-size: 2rem; padding-bottom: 2rem;">המשותף לאוספים</caption>
<thead>
<tr>
<th>אוסף</th>
<th>הערך הקושר (קבוצה ראשונה)</th>
<th>הערך המתאים לו (קבוצה שנייה)</th>
<th>הסבר</th>
</tr>
</thead>
<tbody>
<tr>
<td>מוצרים והמלאי שלהם בחנות</td>
<td>המוצר שנמכר בחנות</td>
<td>המלאי מאותו מוצר</td>
<td>יכולים להיות בחנות 5 תפוזים ו־5 תפוחים, אבל אין משמעות לחנות שיש בה 5 תפוחים וגם 3 תפוחים.</td>
</tr>
<tr>
<td>מספרי הזהות של אזרחים</td>
<td>תעודת הזהות</td>
<td>השם של בעל מספר הזהות</td>
<td>יכולים להיות הרבה אזרחים העונים לשם משה לוי, ולכל אחד מהם יהיה מספר זהות שונה. לא ייתכן שמספר זהות מסוים ישויך ליותר מאדם אחד.</td>
</tr>
<tr>
<td>בעלות על צוללות צבעוניות</td>
<td>בעל הצוללות</td>
<td>צבע הצוללות</td>
<td>יכול להיות שגם לקווין וגם לג'ני יש צוללות בצבעים זהים. ג'ני, קווין ואני הם אנשים ספציפיים, שאין יותר מ־1 מהם בעולם (עד שנמציא דרך לשבט אנשים).</td>
</tr>
<tr>
<td>מוצרים ומחיריהם</td>
<td>שם המוצר</td>
<td>מחיר המוצר</td>
<td>לכל מוצר מחיר נקוב. עבור שני מוצרים שונים בחנות יכול להיות מחיר זהה.</td>
</tr>
</tbody>
</table>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מיפוי ערכים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כמו שראינו בדוגמאות, מצב נפוץ במיוחד הוא הצורך לאחסן <em>מיפוי בין ערכים</em>.<br>
נחשוב על המיפוי בחנות של קשטן, שבה הוא סופר את המלאי עבור כל מוצר.<br>
נוכל לייצג את מלאי המוצרים בחנות של קשטן באמצעות הידע שכבר יש לנו. נשתמש בקוד הבא:
</p>
End of explanation
"""
def get_stock(item_name, items, stock):
item_index = items.index(item_name)
how_many = stock[item_index]
return how_many
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עבור כל תא ברשימת <var>items</var>, שמרנו במקום התואם ברשימת <var>stock</var> את הכמות שנמצאת ממנו בחנות.<br>
יש 4 גזרים, 3 תפוחים ו־2 בננות על המדף בחנות של אדון קשטן.<br>
שליפה של כמות המלאי עבור מוצר כלשהו בחנות תתבצע בצורה הבאה:
</p>
End of explanation
"""
print(get_stock('apple', items, stock))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בשורה הראשונה בגוף הפונקציה מצאנו את מיקום המוצר שאנחנו מחפשים במלאי. נניח, "תפוח" מוחזק במקום 1 ברשימה.<br>
בשורה השנייה פנינו לרשימה השנייה, זו שמאחסנת את המלאי עבור כל מוצר, ומצאנו את המלאי שנמצא באותו מיקום.<br>
כמות היחידות של מוצר מאוחסנת במספר תא מסוים, התואם למספר התא ברשימה של שמות המוצרים. זו הסיבה לכך שהרעיון עובד.<br>
</p>
End of explanation
"""
items = [('banana', 2), ('apple', 3), ('carrot', 4)]
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
צורה נוספת למימוש אותו רעיון תהיה שמירה של זוגות סדורים בתוך רשימה של tuple־ים:
</p>
End of explanation
"""
def get_stock(item_name_to_find, items_with_stock):
for item_to_stock in items_with_stock:
item_name = item_to_stock[0]
stock = item_to_stock[1]
if item_name == item_name_to_find:
return stock
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ברשימה הזו הרעיון נראה מובן יותר. בואו נממש דרך לחלץ איבר מסוים מתוך הרשימה:
</p>
End of explanation
"""
get_stock('apple', items)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עבור כל tuple ברשימה, בדקנו אם שם הפריט שהוא מכיל תואם לשם הפריט שחיפשנו.<br>
אם כן, החזרנו את הכמות של אותו פריט במלאי.<br>
שימוש בפונקציה הזו נראה כך:
</p>
End of explanation
"""
ages = {'Yam': 27, 'Methuselah': 969, 'Baby Groot': 3}
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתמשו ב־unpacking שלמדנו במחברת הקודמת כדי לפשט את לולאת ה־<code>for</code> בקוד של <code>get_stock</code>.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שני קטעי הקוד שנתנו כדוגמה פישטו את המצב יתר על המידה, והם אינם מתייחסים למצב שבו הפריט חסר במלאי.<br>
הרחיבו את הפונקציות <code>get_stock</code> כך שיחזירו 0 אם הפריט חסר במלאי.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
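One possible solution to the two exercises above, combined (a sketch, not the only way): unpack each `(name, stock)` pair directly in the `for` header, and return 0 when the loop finishes without finding the item.

```python
def get_stock(item_name_to_find, items_with_stock):
    # Unpack each (name, stock) pair directly in the loop header
    for item_name, stock in items_with_stock:
        if item_name == item_name_to_find:
            return stock
    # The loop ended without a match, so the item is not in stock
    return 0


items = [('banana', 2), ('apple', 3), ('carrot', 4)]
print(get_stock('apple', items))
print(get_stock('watermelon', items))
```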
<span style="text-align: right; direction: rtl; float: right; clear: both;">הגדרה</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מה זה מילון?</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
מילון הוא סוג ערך בפייתון.<br>
תכליתו היא ליצור קשר בין סדרה של נתונים שנקראת <dfn>מפתחות</dfn>, לבין סדרה אחרת של נתונים שנקראת <dfn>ערכים</dfn>.<br>
לכל מפתח יש ערך שעליו הוא מצביע.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ישנן דוגמאות אפשריות רבות לקשרים כאלו:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>קשר בין ערים בעולם לבין מספר האנשים שחיים בהן.</li>
<li>קשר בין ברקוד של מוצרים בחנות לבין מספר הפריטים במלאי מכל מוצר.</li>
<li>קשר בין מילים לבין רשימת הפירושים שלהן במילון אבן־שושן.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לערך המצביע נקרא <dfn>מפתח</dfn> (<dfn>key</dfn>). זה האיבר מבין זוג האיברים שעל פיו נשמע הגיוני יותר לעשות חיפוש:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>העיר שאנחנו רוצים לדעת את מספר התושבים בה.</li>
<li>הברקוד שאנחנו רוצים לדעת כמה פריטים ממנו קיימים במלאי.</li>
<li>המילה שאת הפירושים שלה אנחנו רוצים למצוא.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לערך השני מבין שני הערכים בזוג, נקרא... ובכן, <dfn>ערך</dfn> (<dfn>value</dfn>). זה הנתון שנרצה למצוא לפי המפתח:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>מספר התושבים בעיר.</li>
<li>מספר הפריטים הקיימים במלאי עבור ברקוד מסוים.</li>
<li>הפירושים של המילה במילון.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אם כך, מילון הוא בסך הכול אוסף של זוגות שכאלו: מפתחות וערכים.</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הבסיס</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת מילון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ניצור מילון חדש:</p>
End of explanation
"""
age_of_my_elephants = {}
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
במילון הזה ישנם שלושה ערכים: הגיל של ים, של מתושלח ושל בייבי־גרוט.<br>
המפתחות במילון הזה הם <em>Yam</em> (הערך הקשור למפתח הזה הוא 27), <em>Methuselah</em> (עם הערך 969) ו־<em>Baby Groot</em> (אליו הוצמד הערך 3).
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יצרנו את המילון כך:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>פתחנו סוגריים מסולסלים.</li>
<li>יצרנו זוגות של מפתחות וערכים, מופרדים בפסיק:
<ol>
<li>המפתח.</li>
<li>הפרדה בנקודתיים.</li>
<li>הערך.</li>
</ol>
</li>
<li>סגרנו סוגריים מסולסלים.</li>
</ol>
<figure>
<img src="images/dictionary.svg?v=2" style="max-width:100%; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה מופיעים 3 ריבועים בימין ו־3 ריבועים בשמאל. הריבועים בימין, שמתויגים כ'מפתח', מצביעים על הריבועים בשמאל שמתויגים כ'ערך'. כל ריבוע בימין מצביע לריבוע בשמאל. בריבוע הימני העליון כתוב Yam, והוא מצביע על ריבוע בו כתוב 27. כך גם עבור ריבוע שבו כתוב Methuselah ומצביע לריבוע בו כתוב 969, וריבוע בו כתוב Baby Groot ומצביע לריבוע בו כתוב 3."/>
<figcaption style="text-align: center; direction: rtl;">המחשה למילון שבו 3 מפתחות ו־3 ערכים.</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אפשר ליצור מילון ריק בעזרת פתיחה וסגירה של סוגריים מסולסלים:</p>
End of explanation
"""
names = ['Yam', 'Methuselah', 'Baby Groot']
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
צרו מילון עבור המלאי בחנות של אדון קשטן.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
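A possible solution for the stock exercise: each product name is a key, and the amount in stock is its value.

```python
# Each product name (key) maps to the amount of it in stock (value)
items = {'banana': 2, 'apple': 3, 'carrot': 4}
print(items)
```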
<span style="text-align: right; direction: rtl; float: right; clear: both;">אחזור ערך</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ניזכר כיצד מאחזרים ערך מתוך רשימה:
</p>
End of explanation
"""
names[2]
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כדי לחלץ את הערך שנמצא <em>במקום 2</em> ברשימה <var>names</var>, נכתוב:
</p>
End of explanation
"""
items = {'banana': 2, 'apple': 3, 'carrot': 4}
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עד כאן הכול מוכר.<br>
ניקח את המילון שמייצג את המלאי בחנות של אדון קשטן:
</p>
End of explanation
"""
items['banana']
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כדי לחלץ את ערך המלאי שנמצא <em>במקום שבו המפתח הוא 'banana'</em>, נרשום את הביטוי הבא:
</p>
End of explanation
"""
items['melon'] = 1
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כיוון שבמילון המפתח הוא זה שמצביע על הערך ולא להפך, אפשר לאחזר ערך לפי מפתח, אבל אי־אפשר לאחזר מפתח לפי ערך.</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/tip.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
ביום־יום, השתמשו במילה "בְּמָקוֹם" (b'e-ma-qom) כתחליף למילים סוגריים מרובעים.<br>
לדוגמה: עבור שורת הקוד האחרונה, אימרו <em><q>items במקום banana</q></em>.
</p>
</div>
</div>
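Since a value cannot be used to reach its key directly, finding which keys map to a given value requires scanning all the pairs yourself. A sketch, using a plain `for` loop over the keys (iterating over dictionaries is covered later in this notebook):

```python
items = {'banana': 2, 'apple': 3, 'carrot': 4}


def keys_of_value(dictionary, value_to_find):
    # Collect every matching key, since values may repeat
    matching_keys = []
    for key in dictionary:
        if dictionary[key] == value_to_find:
            matching_keys.append(key)
    return matching_keys


print(keys_of_value(items, 3))
```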
<span style="text-align: right; direction: rtl; float: right; clear: both;">הוספה ועדכון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אפשר להוסיף מפתח וערך למילון, באמצעות השמת הערך אל המילון במקום של המפתח.<br>
ניקח כדוגמה מקרה שבו יש לנו במלאי מלון אחד.<br>
המפתח הוא <em>melon</em> והערך הוא <em>1</em>, ולכן נשתמש בהשמה הבאה:
</p>
End of explanation
"""
items['melon'] = items['melon'] + 4
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אם הגיעו עוד 4 מלונים לחנות של אדון קשטן, נוכל לעדכן את מלאי המלונים באמצעות השמה למקום הנכון במילון:
</p>
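The same update can also be written with Python's augmented assignment, which reads a little shorter:

```python
items = {'banana': 2, 'apple': 3, 'carrot': 4, 'melon': 1}
# Same as: items['melon'] = items['melon'] + 4
items['melon'] += 4
print(items['melon'])
```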
End of explanation
"""
favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
for something in favorite_animals:
print(something)
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">כללי המשחק</span>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>לא יכולים להיות 2 מפתחות זהים במילון.</li>
<li>המפתחות במילון חייבים להיות immutables.</li>
<li>אנחנו נתייחס למילון כאל מבנה ללא סדר מסוים (אין "איבר ראשון" או "איבר אחרון").</li>
</ul>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/deeper.svg?a=1" style="height: 50px !important;" alt="העמקה">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
בגרסאות האחרונות של פייתון הפך מילון להיות מבנה סדור, שבו סדר האיברים הוא סדר ההכנסה שלהם למילון.<br>
למרות זאת, רק במצבים נדירים נצטרך להתייחס לסדר שבו האיברים מסודרים במילון, ובשלב זה נעדיף שלא להתייחס לתכונה הזו.
</p>
</div>
</div>
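The second rule above, that keys must be immutable, can be seen directly: an immutable tuple is accepted as a key, while a mutable list raises a `TypeError`. A small sketch (the coordinates here are just an illustration):

```python
locations = {}
# A tuple is immutable, so it is a valid key
locations[(32.07, 34.78)] = 'Tel Aviv'
try:
    # A list is mutable, so it cannot be used as a key
    locations[[32.07, 34.78]] = 'Tel Aviv'
except TypeError as error:
    print(error)
```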
<span style="text-align: right; direction: rtl; float: right; clear: both;">סיכום ביניים</span>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>מילון הוא מבנה שבנוי זוגות־זוגות: יש ערכים ומפתחות, ולכל מפתח יש ערך אחד שעליו הוא מצביע.</li>
<li>נתייחס למילון כאל מבנה ללא סדר מסוים. אין "איבר ראשון" או "איבר אחרון".</li>
<li>בניגוד לרשימה, כאן ה"מקום" שאליו אנחנו פונים כדי לאחזר ערך הוא המפתח, ולא מספר שמייצג את המקום הסידורי של התא.</li>
<li>בעזרת מפתח אפשר להגיע לערך המוצמד אליו, אבל לא להפך.</li>
</ul>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/tip.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
חדי העין שמו לב שאנחנו מצליחים להוסיף ערכים למילון, ולשנות בו ערכים קיימים.<br>
מהתכונה הזו אנחנו למדים שמילון הוא mutable.
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מעבר על מילון</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">לולאת for</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כיוון שמילון הוא iterable, דרך מקובלת לעבור עליו היא באמצעות לולאת <code>for</code>.<br>
ננסה להשתמש בלולאת <code>for</code> על מילון, ונראה מה התוצאות:
</p>
End of explanation
"""
favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print('favorite_animals items:')
for key in favorite_animals:
value = favorite_animals[key]
print(f"{key:10} -----> {value}.") # תרגיל קטן: זהו את הטריק שגורם לזה להיראות טוב כל כך בהדפסה
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה שקיבלנו רק את המפתחות, בלי הערכים.<br>
נסיק מכאן שמילון הוא אמנם iterable, אך בכל איטרציה הוא מחזיר לנו רק את המפתח, בלי הערך.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אנחנו כבר יודעים איך מחלצים את הערך של מפתח מסוים.<br>
נוכל להשתמש בידע הזה כדי לקבל בכל חזרור גם את המפתח, וגם את הערך:
</p>
End of explanation
"""
print(list(favorite_animals.items()))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אבל הפתרון הזה לא נראה אלגנטי במיוחד, ונראה שנוכל למצוא אחד טוב יותר.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לעזרתנו נחלצת הפעולה <code>items</code>, השייכת לערכים מסוג מילון.<br>
הפעולה הזו מחזירה זוגות איברים, כאשר בכל זוג האיבר הראשון הוא המפתח והאיבר השני הוא הערך.
</p>
End of explanation
"""
print('favorite_animals items:')
for key, value in favorite_animals.items():
print(f"{key:10} -----> {value}.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מאחר שמדובר באיברים שבאים בזוגות, נוכל להשתמש בפירוק איברים כפי שלמדנו בשיעור על לולאות <code>for</code>:
</p>
End of explanation
"""
print('favorite_animals items:')
for character, animal in favorite_animals.items():
print(f"{character:10} -----> {animal}.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בלולאה שמופיעה למעלה ניצלנו את העובדה שהפעולה <code>items</code> מחזירה לנו איברים בזוגות: מפתח וערך.<br>
בכל חזרור, אנחנו מכניסים למשתנה <var>key</var> את האיבר הראשון בזוג, ולמשתנה <var>value</var> את האיבר השני בזוג.<br>
נוכל להיות אפילו אלגנטיים יותר ולתת למשתנים הללו שמות ראויים:
</p>
End of explanation
"""
empty_dict = {}
empty_dict['DannyDin']
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה שמקבלת מילון ומדפיסה עבור כל מפתח את האורך של הערך המוצמד אליו.
</p>
</div>
</div>
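One way to solve the exercise above (a sketch): iterate over the pairs with `items()` and print the length of each value.

```python
def print_value_lengths(dictionary):
    # For every key, print how long its value is
    for key, value in dictionary.items():
        print(f"{key}: {len(value)}")


favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print_value_lengths(favorite_animals)
```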
<span style="text-align: right; direction: rtl; float: right; clear: both;">מפתחות שלא קיימים</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הבעיה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
מילונים הם טיפוסים קצת רגישים. הם לא אוהבים כשמזכירים להם מה אין בהם.<br>
אם ננסה לפנות למילון ולבקש ממנו מפתח שאין לו, נקבל הודעת שגיאה.<br>
בפעמים הראשונות שתתעסקו עם מילונים, יש סיכוי לא מבוטל שתקבלו <code>KeyError</code> שנראה כך:<br>
</p>
End of explanation
"""
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print('Achiles' in loved_animals)
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;"><code>in</code> במילונים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יש כמה דרכים לפתור בעיה זו.<br>
דרך אפשרית אחת היא לבדוק שהמפתח קיים לפני שאנחנו ניגשים אליו:</p>
End of explanation
"""
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
if 'Achiles' in loved_animals:
value = loved_animals['Achiles']
else:
value = 'Pony'
print(value)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כאן השתמשנו באופרטור <code>in</code> כדי לבדוק אם מפתח מסוים נמצא במילון.<br>
נוכל גם לבקש את הערך לאחר שבדקנו שהוא קיים:
</p>
End of explanation
"""
def get_value(dictionary, key, default_value):
if key in dictionary:
return dictionary[key]
else:
return default_value
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד שלמעלה, השתמשנו באופרטור ההשוואה <code>in</code> כדי לבדוק אם מפתח מסוים ("אכילס") קיים בתוך המילון שיצרנו בשורה הראשונה.<br>
אם הוא נמצא שם, חילצנו את הערך שמוצמד לאותו מפתח (ל"אכילס"). אם לא, המצאנו ערך משלנו – "פוני".<br>
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
מעבר על מילון יחזיר בכל חזרור מפתח מהמילון, ללא הערך הקשור אליו.<br>
מסיבה זו, אופרטור ההשוואה <code>in</code> יבדוק רק אם קיים <em>מפתח</em> מסוים במילון, ולא יבדוק אם ערך שכזה קיים.
</p>
</div>
</div>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה שמקבלת שלושה פרמטרים: מילון, מפתח וערך ברירת מחדל.<br>
הפונקציה תחפש את המפתח במילון, ואם הוא קיים תחזיר את הערך שלו.<br>
אם המפתח לא קיים במילון, הפונקציה תחזיר את ערך ברירת המחדל.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve this before you continue!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's try to write the idea from the code above as a general function:
</p>
End of explanation
"""
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print("Mad hatter: " + get_value(loved_animals, 'Mad hatter', 'Pony'))
print("Queen of hearts: " + get_value(loved_animals, 'Queen of hearts', 'Pony'))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
The function above takes a dictionary, a key, and a default value.<br>
If it finds the key in the dictionary, it returns that key's value.<br>
If it does not find the key in the dictionary, it returns the given default value.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's check that the function works:
</p>
End of explanation
"""
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print("Mad hatter: " + loved_animals.get('Mad hatter', 'Pony'))
print("Queen of hearts: " + loved_animals.get('Queen of hearts', 'Pony'))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Well, that is a handy function. How convenient it would be if it were a dictionary method.</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">The <code>get</code> method of dictionaries</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Who would have believed it – dictionaries have exactly such a method! Let's try calling it on our dictionary.<br>
Note the form of the call, which differs from the call to the function we wrote above – the dictionary's variable name comes before the method name. This is a method, not a standalone function:
</p>
End of explanation
"""
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print(loved_animals.get('Mad hatter'))
print(loved_animals.get('Queen of hearts'))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
One last magic trick: the <code>get</code> method is quite forgiving, and works even when it is not given a default value.<br>
If you supply only the key whose value you want to retrieve, it will look the key up and return its value, if it exists.<br>
If the key does not exist and no default value was supplied, it returns the value <code>None</code>:
</p>
End of explanation
"""
decryption_key = {
'O': 'A', 'D': 'B', 'F': 'C', 'I': 'D', 'H': 'E',
'G': 'F', 'L': 'G', 'C': 'H', 'K': 'I', 'Q': 'J',
'B': 'K', 'J': 'L', 'Z': 'M', 'V': 'N', 'S': 'O',
'R': 'P', 'M': 'Q', 'X': 'R', 'E': 'S', 'P': 'T',
'A': 'U', 'Y': 'V', 'W': 'W', 'T': 'X', 'U': 'Y',
'N': 'Z',
}
SONG = """
sc, kg pchxh'e svh pckvl k covl svps
pcop lhpe zh pcxsalc pch vklcp
k okv'p lsvvo is wcop k isv'p wovp ps
k'z lsvvo jkyh zu jkgh
eckvkvl jkbh o ikozsvi, xsjjkvl wkpc pch ikfh
epovikvl sv pch jhilh, k ecsw pch wkvi csw ps gju
wchv pch wsxji lhpe kv zu gofh
k eou, coyh o vkfh iou
coyh o vkfh iou
"""
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/recall.svg" style="height: 50px !important;" alt="Reminder">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
The special value <code>None</code> is a Pythonic way of saying "nothing".<br>
You can picture it as a void (a vacuum). Not the numeric value zero, not <code>False</code>. Simply nothing.
</p>
</div>
</div>
<span style="align: right; direction: rtl; float: right; clear: both;">Terms</span>
<dl style="text-align: right; direction: rtl; float: right; clear: both;">
<dt>Dictionary</dt><dd>A Python type that lets us store ordered pairs of keys and values, in which each key points to a value.</dd>
<dt>Key</dt><dd>The datum by which we look up the desired value in the dictionary; it appears as the first element of the key–value pair.</dd>
<dt>Value</dt><dd>The datum the key points to in the dictionary; it is what we get back when we look the key up. It appears as the second element of the key–value pair.</dd>
<dt>Ordered pair</dt><dd>A pair of two related elements. In the case of a dictionary, a key and a value.</dd>
</dl>
<span style="align: right; direction: rtl; float: right; clear: both;">Exercises</span>
<span style="align: right; direction: rtl; float: right; clear: both;">A message from Mr. Yom Tov</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Yogev Nevo received an encrypted message from Mr. Yom Tov, and managed to get his hands on a method for deciphering it.<br>
To decipher the message, replace each letter of the secret message with its matching letter, according to the dictionary shown below.<br>
For example, make sure that every occurrence of the letter O in the message <var>SONG</var> is replaced by the letter A.
End of explanation
"""
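One way to apply the substitution described in the exercise above uses the `get` method we just learned, with the character itself as the default so that spaces and newlines pass through unchanged. A minimal sketch with a toy mapping (the exercise itself uses the full `decryption_key` and `SONG`):

```python
# Toy substitution mapping; a stand-in for the full decryption_key above.
toy_key = {'o': 'a', 'd': 'b'}
message = 'od do'

# Characters missing from the mapping fall back to themselves via get's default.
decrypted = ''.join(toy_key.get(char, char) for char in message)
print(decrypted)  # ab ba
```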
encryption_key = {
'T': '1', 'F': '6', 'W': 'c', 'Y': 'h', 'B': 'k',
'P': '~', 'H': 'q', 'S': 's', 'E': 'w', 'Q': '@',
'U': '$', 'M': 'i', 'I': 'l', 'N': 'o', 'J': 'y',
'Z': 'z', 'G': '!', 'L': '#', 'A': '&', 'O': '+',
'D': ',', 'R': '-', 'C': ':', 'V': '?', 'X': '^',
'K': '|',
}
SONG = """
l1's ih #l6w
l1's o+c +- ow?w-
l &lo'1 !+oo& #l?w 6+-w?w-
l y$s1 c&o1 1+ #l?w cql#w l'i &#l?w
(l1's ih #l6w)
ih qw&-1 ls #l|w &o +~wo ql!qc&h
#l|w 6-&o|lw s&l,
l ,l, l1 ih c&h
l y$s1 c&o1 1+ #l?w cql#w l'i &#l?w
l1's ih #l6w
"""
"""
Explanation: <span style="align: right; direction: rtl; float: right; clear: both;">Mirror, mirror on the wall</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Yom Tov's friend Haim sent Yom Tov an encrypted message.<br>
Unfortunately, Yogev got his hands only on the encryption map, not on the decryption map.<br>
From the encryption dictionary, create a decryption dictionary in which:<br>
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>The values of the decryption dictionary you create are the keys of the encryption dictionary.</li>
<li>The keys of the decryption dictionary you create are the values of the encryption dictionary.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For example, the dictionary <code dir="ltr" style="direction: ltr;">{'a': '1', 'b': '2'}</code> becomes the dictionary <code dir="ltr" style="direction: ltr;">{'1': 'a', '2': 'b'}</code>.<br>
Use the decryption dictionary you created to decipher the message that was sent.
End of explanation
"""
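The key/value swap described in the exercise above can be sketched with a dict comprehension (this assumes all values are unique, as they are in the encryption map):

```python
# Invert a mapping: values become keys and keys become values.
encryption_key = {'a': '1', 'b': '2'}
decryption_key = {value: key for key, value in encryption_key.items()}
print(decryption_key)  # {'1': 'a', '2': 'b'}
```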
|
hetland/python4geosciences | examples/numpy.ipynb | mit |
import os  # this package allows us to use terminal window commands from within python
import numpy as np
"""
Explanation: Numpy example: Reading in and analyzing topography/bathymetry data
End of explanation
"""
d = np.load('../data/cascadia.npz') # data was saved in compressed numpy format
"""
Explanation: Read in file
We have a dataset saved in the repository: cascadia.npz. This contains topography and bathymetry data from Washington state.
End of explanation
"""
d.keys() # notice that d is a dictionary!
d['z'].shape # this is an array instead of a list, so it can have more than 1 dimension
"""
Explanation: What is contained in this file?
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import cmocean.cm as cmo
plt.figure(figsize=(10, 8))
plt.pcolormesh(d['lon'], d['lat'], d['z'], cmap=cmo.delta)
plt.colorbar()
plt.xlabel('Longitude [deg]')
plt.ylabel('Latitude [deg]')
plt.title('Topography and bathymetry [m]')
"""
Explanation: Investigate
Let's start with a quick look at the data. We'll keep it simple since we haven't gotten to the plotting section yet.
End of explanation
"""
z = d['z'] # we can rename the vertical data information to save a little space
z.mean()
"""
Explanation: Anyone recognize this?
Let's do a few calculations using numpy.
How about a mean:
End of explanation
"""
iabove = z > 0 # indices of the z values that are above water
ibelow = z < 0 # indices of z values that are below water
print('above water: ', z[iabove])
print('below water: ', z[ibelow])
"""
Explanation: So overall we have a mean value of about -5 meters. But how meaningful is this? Let's break it down further.
We have both positive and negative values, and they represent pretty distinct areas: above and below water. It is logical that we separate the two.
End of explanation
"""
z[iabove].mean()
z[ibelow].mean()
"""
Explanation: Looks good! Now let's do something with them.
First, how about the mean vertical level, separate for above and below water.
End of explanation
"""
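The same boolean masks can answer other questions, for example what fraction of grid cells lies above sea level. A sketch with toy elevations (in the notebook the real values come from `d['z']`):

```python
import numpy as np

# Toy elevations standing in for d['z'].
z = np.array([[-10.0, 5.0], [3.0, -2.0]])

iabove = z > 0
frac_above = iabove.mean()  # True counts as 1, so the mean is a fraction
print(frac_above)  # 0.5
```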
|
muxiaobai/CourseExercises | python/kaggle/competition/house-price/house_price.ipynb | gpl-2.0 |
import numpy as np
import pandas as pd
"""
Explanation: House price prediction case study
Step 1: Inspect the source dataset
End of explanation
"""
train_df = pd.read_csv('../input/train.csv', index_col=0)
test_df = pd.read_csv('../input/test.csv', index_col=0)
"""
Explanation: Read in the data
Generally the index column of the source data is not useful for anything else, so we can use it as the index of our pandas DataFrame. That also makes lookups easier later.
Every community has its pecking order, and Kaggle, like Zhihu, can be an intimidating place. By default, Kaggle keeps data in the input folder, so when writing a tutorial we may as well follow that convention; it makes things look more professional.
End of explanation
"""
train_df.head()
"""
Explanation: Inspect the source data
End of explanation
"""
%matplotlib inline
prices = pd.DataFrame({"price":train_df["SalePrice"], "log(price + 1)":np.log1p(train_df["SalePrice"])})
prices.hist()
"""
Explanation: At this point you should have a rough idea of which parts need some manual processing to make the source data easier to work with.
Step 2: Merge the data
We do this mainly so that preprocessing with a DataFrame is more convenient. Once all the necessary preprocessing is done, we split the sets apart again.
First, SalePrice is our training target, so it only appears in the training set, not in the test set (otherwise what would you be testing?). We therefore pull the SalePrice column out first so it does not get in the way.
Let's first see what SalePrice looks like:
End of explanation
"""
y_train = np.log1p(train_df.pop('SalePrice'))
"""
Explanation: As you can see, the label itself is not smooth. To help our model learn more accurately, we first "smooth" (normalize) the label.
Most people miss this step, which keeps their results from ever reaching a certain standard.
Here we use log1p, i.e. log(x + 1), which avoids problems with invalid values.
Remember: if we smooth the data here, then when computing final results we must transform the smoothed predictions back.
Following the "undo what you did" principle, log1p() calls for expm1(); likewise, log() calls for exp(), etc.
End of explanation
"""
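A quick sanity check of the "undo what you did" rule, with made-up prices: expm1 inverts log1p.

```python
import numpy as np

# Round-trip: smooth with log1p, then recover the original values with expm1.
prices = np.array([100000.0, 250000.0])
roundtrip = np.expm1(np.log1p(prices))
print(np.allclose(roundtrip, prices))  # True
```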
all_df = pd.concat((train_df, test_df), axis=0)
"""
Explanation: Then we concatenate the remaining parts
End of explanation
"""
all_df.shape
"""
Explanation: At this point, all_df is our combined DataFrame
End of explanation
"""
y_train.head()
"""
Explanation: And y_train is the SalePrice column
End of explanation
"""
all_df['MSSubClass'].dtypes
all_df['MSSubClass'] = all_df['MSSubClass'].astype(str)
"""
Explanation: Step 3: Variable transformation
This resembles "feature engineering": unifying data that is awkward to process or inconsistent.
Correcting variable types
First, note that the values of MSSubClass should really be a category,
but pandas has no way of knowing that. When read into a DataFrame, such numeric codes are treated as numbers by default.
That is misleading, so we need to turn them back into strings
End of explanation
"""
all_df['MSSubClass'].value_counts()
"""
Explanation: After converting to str, a value count makes things clear
End of explanation
"""
pd.get_dummies(all_df['MSSubClass'], prefix='MSSubClass').head()
"""
Explanation: Converting categorical variables to a numerical representation
When we represent categorical data numerically, beware: numbers carry an inherent ordering, so using them carelessly creates trouble for the model later. Instead, we can use one-hot encoding to represent categories.
pandas' built-in get_dummies method does one-hot encoding in a single step.
End of explanation
"""
all_dummy_df = pd.get_dummies(all_df)
all_dummy_df.head()
"""
Explanation: MSSubClass has now been split into 12 columns, each representing one category: 1 if it applies, 0 if not.
In the same way, we one-hot encode all the categorical data
End of explanation
"""
all_dummy_df.isnull().sum().sort_values(ascending=False).head(10)
"""
Explanation: Handling the numerical variables
Even numerical variables can have small problems.
For example, some values are missing:
End of explanation
"""
mean_cols = all_dummy_df.mean()
mean_cols.head(10)
all_dummy_df = all_dummy_df.fillna(mean_cols)
"""
Explanation: As you can see, the column with the most missing values is LotFrontage.
Handling missing values requires reading the problem carefully. The dataset description usually states exactly what each missing value means; failing that, you have to use your best judgment.
Here, we fill the gaps with the column means.
End of explanation
"""
all_dummy_df.isnull().sum().sum()
"""
Explanation: Check that no gaps remain:
End of explanation
"""
numeric_cols = all_df.columns[all_df.dtypes != 'object']
numeric_cols
"""
Explanation: Standardizing the numerical data
This step is not strictly necessary; it depends on which model you plan to use. Generally, regression models are picky, and it is best to bring the source data into a standard distribution so that the scales of the features do not differ too much.
Of course, we do not need to standardize the one-hot 0/1 columns; our target is the data that was numerical to begin with.
First, let's see which columns are numerical:
End of explanation
"""
numeric_col_means = all_dummy_df.loc[:, numeric_cols].mean()
numeric_col_std = all_dummy_df.loc[:, numeric_cols].std()
all_dummy_df.loc[:, numeric_cols] = (all_dummy_df.loc[:, numeric_cols] - numeric_col_means) / numeric_col_std
"""
Explanation: Compute the standard score: (X - X̄)/s
This makes our data points smoother and easier to compute with.
Note: we could also keep using the log transform here; this is just to show several ways of "smoothing" data.
End of explanation
"""
dummy_train_df = all_dummy_df.loc[train_df.index]
dummy_test_df = all_dummy_df.loc[test_df.index]
dummy_train_df.shape, dummy_test_df.shape
"""
Explanation: Step 4: Build the models
Split the dataset back into training/test sets
End of explanation
"""
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
"""
Explanation: Ridge Regression
Let's run a Ridge regression model and see. (For datasets with many features, this kind of model conveniently lets you throw in all the variables without much thought.)
End of explanation
"""
X_train = dummy_train_df.values
X_test = dummy_test_df.values
"""
Explanation: This step is not strictly necessary; it just converts the DataFrames to NumPy arrays, which play better with scikit-learn
End of explanation
"""
alphas = np.logspace(-3, 2, 50)
test_scores = []
for alpha in alphas:
clf = Ridge(alpha)
test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=10, scoring='neg_mean_squared_error'))
test_scores.append(np.mean(test_score))
"""
Explanation: Use scikit-learn's built-in cross-validation to evaluate the model
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(alphas, test_scores)
plt.title("Alpha vs CV Error");
"""
Explanation: Store all the CV scores and see which alpha value works best (i.e., hyperparameter tuning)
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
max_features = [.1, .3, .5, .7, .9, .99]
test_scores = []
for max_feat in max_features:
clf = RandomForestRegressor(n_estimators=200, max_features=max_feat)
test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=5, scoring='neg_mean_squared_error'))
test_scores.append(np.mean(test_score))
plt.plot(max_features, test_scores)
plt.title("Max Features vs CV Error");
"""
Explanation: As shown, at roughly alpha = 10 to 20 the score reaches about 0.135.
Random Forest
End of explanation
"""
ridge = Ridge(alpha=15)
rf = RandomForestRegressor(n_estimators=500, max_features=.3)
ridge.fit(X_train, y_train)
rf.fit(X_train, y_train)
"""
Explanation: The best RF setting reaches 0.137.
Step 5: Ensemble
Here we borrow the idea of stacking to combine the strengths of two or more models.
First, we take the best parameters found above and build our final models
End of explanation
"""
y_ridge = np.expm1(ridge.predict(X_test))
y_rf = np.expm1(rf.predict(X_test))
"""
Explanation: As mentioned above, since we applied log(1+x) to the label at the very start, here we need to exponentiate the predictions back and subtract that "1".
That is exactly our expm1() function.
End of explanation
"""
y_final = (y_ridge + y_rf) / 2
"""
Explanation: A proper ensemble would take these models' predictions as new input and run another round of prediction. Here we take the simple approach and just average them.
End of explanation
"""
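The "proper ensemble" idea mentioned above, feeding first-level predictions into a second-level model, can be sketched with made-up numbers (in practice `p1` and `p2` would be out-of-fold predictions from the two models):

```python
import numpy as np

# Toy first-level predictions from two models, plus the true targets.
p1 = np.array([1.0, 2.0, 3.0])
p2 = np.array([1.2, 1.8, 3.1])
y = np.array([1.1, 1.9, 3.0])

# Second-level "model": least-squares weights over the stacked predictions.
X = np.column_stack([p1, p2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_stacked = X @ w
```

Simple averaging is the special case of weights (0.5, 0.5), so on the training data the fitted weights can only do at least as well.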
submission_df = pd.DataFrame(data= {'Id' : test_df.index, 'SalePrice': y_final})
"""
Explanation: Step 6: Submit the results
End of explanation
"""
submission_df.head(10)
"""
Explanation: Our submission looks roughly like this:
End of explanation
"""
|
JakeColtman/BayesianSurvivalAnalysis | Basic Presentation.ipynb | mit |
#### Data munging here
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
import pymc as pm
from numpy import log

N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
"""
Explanation: The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time for which we observed someone not converting, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted to ints
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
End of explanation
"""
#### Fit to your data here
#### Plot the distribution of the median
"""
Explanation: Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$β(log 2)^{1/α}$$
End of explanation
"""
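A sketch of problem 2: turning posterior samples of alpha and beta into samples of the median via median = beta * (log 2)**(1/alpha). The toy arrays below stand in for `mcmc.trace("alpha")[:]` and `mcmc.trace("beta")[:]`:

```python
import numpy as np

# Made-up posterior samples of the Weibull parameters.
alpha_samples = np.array([1.8, 2.0, 2.2])
beta_samples = np.array([4.9, 5.0, 5.1])

# Push each (alpha, beta) sample through the median formula; the resulting
# array can be histogrammed to plot the distribution of the median.
median_samples = beta_samples * np.log(2) ** (1.0 / alpha_samples)
```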
#### Adjust burn and thin, both parameters of the mcmc sample function
#### Narrow and broaden prior
"""
Explanation: Problems:
4 - Try adjusting the number of samples used for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
End of explanation
"""
#### Hypothesis testing
"""
Explanation: Problems:
7 - Try testing whether the median is greater than a different values
End of explanation
"""
### Fit a Cox proportional hazards model
"""
Explanation: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit it in Python we use the lifelines module:
http://lifelines.readthedocs.io/en/latest/
End of explanation
"""
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
"""
Explanation: Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different sets of features
4 - For your results in part 3, calculate how much more likely a death event is for one than for the other over a given period of time
End of explanation
"""
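For step 4 above, under the proportional-hazards assumption the ratio of hazards for two covariate vectors is exp(coef · (x1 - x2)), and it is constant over time. A sketch with made-up coefficients (in practice they would come from the fitted model):

```python
import numpy as np

# Made-up coefficients and two covariate vectors to compare.
coef = np.array([0.5, -0.2])
x1 = np.array([1.0, 60.0])
x2 = np.array([0.0, 50.0])

# Hazard ratio between the two feature sets.
hazard_ratio = np.exp(coef @ (x1 - x2))
```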
#### BMA Coefficient values
#### Different priors
"""
Explanation: Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
End of explanation
"""
|
wcmac/sippycup | sippycup-unit-3.ipynb | gpl-2.0 |
from geo880 import geo880_train_examples, geo880_test_examples
print('train examples:', len(geo880_train_examples))
print('test examples: ', len(geo880_test_examples))
print(geo880_train_examples[0])
print(geo880_test_examples[0])
"""
Explanation: <img src="img/sippycup-small.jpg" align="left" style="padding-right: 30px"/>
<h1 style="line-height: 125%">
SippyCup<br />
Unit 3: Geography queries
</h1>
<p>
<a href="http://nlp.stanford.edu/~wcmac/">Bill MacCartney</a><br/>
Spring 2015
<!-- <a href="mailto:wcmac@cs.stanford.edu">wcmac@cs.stanford.edu</a> -->
</p>
<div style="margin: 0px 0px; padding: 10px; background-color: #ddddff; border-style: solid; border-color: #aaaacc; border-width: 1px">
This is Unit 3 of the <a href="./sippycup-unit-0.ipynb">SippyCup codelab</a>.
</div>
Our third case study will examine the domain of geography queries. In particular, we'll focus on the Geo880 corpus, which contains 880 queries about U.S. geography. Examples include:
"which states border texas?"
"how many states border the largest state?"
"what is the size of the capital of texas?"
The Geo880 queries have a quite different character from the arithmetic queries and travel queries we have examined previously. They differ from the arithmetic queries in using a large vocabulary, and in exhibiting greater degrees of both lexical and syntactic ambiguity. They differ from the travel queries in adhering to conventional rules for spelling and syntax, and in having semantics with arbitrarily complex compositional structure. For example:
"what rivers flow through states that border the state with the largest population?"
"what is the population of the capital of the largest state through which the mississippi runs?"
"what is the longest river that passes the states that border the state that borders the most states?"
Geo880 was developed in Ray Mooney's group at UT Austin. It is of particular interest because it has for many years served as a standard evaluation for semantic parsing systems. (See, for example, Zelle & Mooney 1996, Tang & Mooney 2001, Zettlemoyer & Collins 2005, and Liang et al. 2011.) It has thereby become, for many, a paradigmatic application of semantic parsing. It has also served as a bridge between an older current of research on natural language interfaces to databases (NLIDBs) (see Androutsopoulos et al. 1995) and the modern era of semantic parsing.
The domain of geography queries is also of interest because there are many plausible real-world applications for semantic parsing which similarly involve complex compositional queries against a richly structured knowledge base. For example, some people are passionate about baseball statistics, and might want to ask queries like:
"pitchers who have struck out four batters in one inning"
"players who have stolen at least 100 bases in a season"
"complete games with fewer than 90 pitches"
"most home runs hit in one game"
Environmental advocates and policymakers might have queries like:
"which country has the highest co2 emissions"
"what five countries have the highest per capita co2 emissions"
"what country's co2 emissions increased the most over the last five years"
"what fraction of co2 emissions was from european countries in 2010"
Techniques that work in the geography domain are likely to work in these other domains too.
The Geo880 dataset
I've been told that the Geo880 queries were collected from students in classes taught by Ray Mooney at UT Austin. I'm not sure whether I've got the story right. But this account is consistent with one of the notable limitations of the dataset: it is not a natural distribution, and not a realistic representation of the geography queries that people actually ask on, say, Google. Nobody ever asks, "what is the longest river that passes the states that border the state that borders the most states?" Nobody. Ever.
The dataset was published online by Rohit Jaivant Kate in a Prolog file containing semantic representations in Prolog style. It was later republished by Yuk Wah Wong as an XML file containing additional metadata for each example, including translations into Spanish, Japanese, and Turkish; syntactic parse trees; and semantics in two different representations: Prolog and FunQL.
In SippyCup, we're not going to use either Prolog or FunQL semantics. Instead, we'll use examples which have been annotated only with denotations (which were provided by Percy Liang — thanks!). Of course, our grammar will require a semantic representation, even if our examples are not annotated with semantics. We will introduce one below.
The Geo880 dataset is conventionally divided into 600 training examples and 280 test examples. In SippyCup, the dataset can in found in geo880.py. Let's take a peek.
End of explanation
"""
from geobase import GeobaseReader
reader = GeobaseReader()
unaries = [str(t) for t in reader.tuples if len(t) == 2]
print('\nSome unaries:\n ' + '\n '.join(unaries[:10]))
binaries = [str(t) for t in reader.tuples if len(t) == 3]
print('\nSome binaries:\n ' + '\n '.join(binaries[:10]))
"""
Explanation: The Geobase knowledge base
Geobase is a small knowledge base about the geography of the United States. It contains (almost) all the information needed to answer queries in the Geo880 dataset, including facts about:
states: capital, area, population, major cities, neighboring states, highest and lowest points and elevations
cities: containing state and population
rivers: length and states traversed
mountains: containing state and height
roads: states traversed
lakes: area, states traversed
SippyCup contains a class called GeobaseReader (in geobase.py) which facilitates working with Geobase in Python. It reads and parses the Geobase Prolog file, and creates a set of tuples representing its content. Let's take a look.
End of explanation
"""
from graph_kb import GraphKB
simpsons_tuples = [
# unaries
('male', 'homer'),
('female', 'marge'),
('male', 'bart'),
('female', 'lisa'),
('female', 'maggie'),
('adult', 'homer'),
('adult', 'marge'),
('child', 'bart'),
('child', 'lisa'),
('child', 'maggie'),
# binaries
('has_age', 'homer', 36),
('has_age', 'marge', 34),
('has_age', 'bart', 10),
('has_age', 'lisa', 8),
('has_age', 'maggie', 1),
('has_brother', 'lisa', 'bart'),
('has_brother', 'maggie', 'bart'),
('has_sister', 'bart', 'maggie'),
('has_sister', 'bart', 'lisa'),
('has_sister', 'lisa', 'maggie'),
('has_sister', 'maggie', 'lisa'),
('has_father', 'bart', 'homer'),
('has_father', 'lisa', 'homer'),
('has_father', 'maggie', 'homer'),
('has_mother', 'bart', 'marge'),
('has_mother', 'lisa', 'marge'),
('has_mother', 'maggie', 'marge'),
]
simpsons_kb = GraphKB(simpsons_tuples)
"""
Explanation: Some observations here:
Unaries are pairs consisting of a unary predicate (a type) and an entity.
Binaries are triples consisting of binary predicate (a relation) and two entities (or an entity and a numeric or string value).
Entities are named by unique identifiers of the form /type/name. This is a GeobaseReader convention; these identifiers are not used in the original Prolog file.
Some entities have the generic type place because they occur in the Prolog file only as the highest or lowest point in a state, and it's hard to reliably assign such points to one of the more specific types.
The original Prolog file is inconsistent about units. For example, the area of states is expressed in square miles, but the area of lakes is expressed in square kilometers. GeobaseReader converts everything to SI units: meters and square meters.
Semantic representation <a id="geoquery-semantic-representation"></a>
GeobaseReader merely reads the data in Geobase into a set of tuples. It doesn't provide any facility for querying that data. That's where GraphKB and GraphKBExecutor come in. GraphKB is a graph-structured knowledge base, with indexing for fast lookups. GraphKBExecutor defines a representation for formal queries against that knowledge base, and supports query execution. The formal query language defined by GraphKBExecutor will serve as our semantic representation for the geography domain.
The GraphKB class
A GraphKB is a generic graph-structured knowledge base, or equivalently, a set of relational pairs and triples, with indexing for fast lookups. It represents a knowledge base as set of tuples, each either:
a pair, consisting of a unary relation and an element which belongs to it,
or
a triple consisting of a binary relation and a pair of elements which
belong to it.
For example, we can construct a GraphKB representing facts about The Simpsons:
End of explanation
"""
simpsons_kb.unaries['child']
simpsons_kb.binaries_fwd['has_sister']['lisa']
simpsons_kb.binaries_rev['has_sister']['lisa']
"""
Explanation: The GraphKB object now contains three indexes:
unaries[U]: all entities belonging to unary relation U
binaries_fwd[B][E]: all entities X such that (E, X) belongs to binary relation B
binaries_rev[B][E]: all entities X such that (X, E) belongs to binary relation B
For example:
End of explanation
"""
queries = [
'bart',
'male',
('has_sister', 'lisa'), # who has sister lisa?
('lisa', 'has_sister'), # lisa has sister who, i.e., who is a sister of lisa?
('lisa', 'has_brother'), # lisa has brother who, i.e., who is a brother of lisa?
('.and', 'male', 'child'),
('.or', 'male', 'adult'),
('.not', 'child'),
('.any',), # anything
('.any', 'has_sister'), # anything has sister who, i.e., who is a sister of anything?
('.and', 'child', ('.not', ('.any', 'has_sister'))),
('.count', ('bart', 'has_sister')),
('has_age', ('.gt', 21)),
('has_age', ('.lt', 2)),
('has_age', ('.eq', 10)),
('.max', 'has_age', 'female'),
('.min', 'has_age', ('bart', 'has_sister')),
('.max', 'has_age', '.any'),
('.argmax', 'has_age', 'female'),
('.argmin', 'has_age', ('bart', 'has_sister')),
('.argmax', 'has_age', '.any'),
]
executor = simpsons_kb.executor()
for query in queries:
print()
print('Q ', query)
print('D ', executor.execute(query))
"""
Explanation: The GraphKBExecutor class
A GraphKBExecutor executes formal queries against a GraphKB and returns their denotations.
Queries are represented by Python tuples, and can be nested.
Denotations are also represented by Python tuples, but are conceptually sets (possibly empty). The elements of these tuples are always sorted in canonical order, so that they can be reliably compared for set equality.
The query language defined by GraphKBExecutor is perhaps most easily explained by example:
End of explanation
"""
geobase = GraphKB(reader.tuples)
executor = geobase.executor()
queries = [
('/state/texas', 'capital'), # capital of texas
('.and', 'river', ('traverses', '/state/utah')), # rivers that traverse utah
('.argmax', 'height', 'mountain'), # tallest mountain
]
for query in queries:
print()
print(query)
print(executor.execute(query))
"""
Explanation: Note that the query (R E) denotes entities having relation R to entity E,
whereas the query (E R) denotes entities to which entity E has relation R.
For a more detailed understanding of the style of semantic representation defined by GraphKBExecutor, take a look at the source code.
Using GraphKBExecutor with Geobase
End of explanation
"""
from collections import defaultdict
from operator import itemgetter
from geo880 import geo880_train_examples
words = [word for example in geo880_train_examples for word in example.input.split()]
counts = defaultdict(int)
for word in words:
counts[word] += 1
counts = sorted([(count, word) for word, count in counts.items()], reverse=True)
print('There were %d tokens of %d types:\n' % (len(words), len(counts)))
print(', '.join(['%s (%d)' % (word, count) for count, word in counts[:50]] + ['...']))
"""
Explanation: Grammar engineering
It's time to start developing a grammar for the geography domain. As in Unit 2,
the performance metric we'll focus on during grammar engineering is oracle accuracy (the proportion of examples for which any parse is correct), not accuracy (the proportion of examples for which the first parse is correct). Remember that oracle accuracy is an upper bound on accuracy, and is a measure of the expressive power of the grammar: does it have the rules it needs to generate the correct parse? The gap between oracle accuracy and accuracy, on the other hand, reflects the ability of the scoring model to bring the correct parse to the top of the candidate list. <!-- (TODO: rewrite.) -->
As always, we're going to take a data-driven approach to grammar engineering. We want to introduce rules which will enable us to handle the lexical items and syntactic structures that we actually observe in the Geo880 training data. To that end, let's count the words that appear among the 600 training examples. (We do not examine the test data!)
End of explanation
"""
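The accuracy vs. oracle-accuracy distinction above can be sketched with toy candidate lists (all names made up):

```python
# Each example pairs a best-first candidate list with the gold denotation.
examples = [
    (['a', 'b'], 'a'),  # first parse correct
    (['x', 'y'], 'y'),  # a correct parse exists, but is not ranked first
    (['p'], 'q'),       # no correct parse at all
]
accuracy = sum(cands[0] == gold for cands, gold in examples) / len(examples)
oracle = sum(gold in cands for cands, gold in examples) / len(examples)
# accuracy is 1/3, oracle accuracy is 2/3; oracle is always an upper bound.
```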
from parsing import Grammar, Rule
optional_words = [
'the', '?', 'what', 'is', 'in', 'of', 'how', 'many', 'are', 'which', 'that',
'with', 'has', 'major', 'does', 'have', 'where', 'me', 'there', 'give',
'name', 'all', 'a', 'by', 'you', 'to', 'tell', 'other', 'it', 'do', 'whose',
'show', 'one', 'on', 'for', 'can', 'whats', 'urban', 'them', 'list',
'exist', 'each', 'could', 'about'
]
rules_optionals = [
Rule('$ROOT', '?$Optionals $Query ?$Optionals', lambda sems: sems[1]),
Rule('$Optionals', '$Optional ?$Optionals'),
] + [Rule('$Optional', word) for word in optional_words]
"""
Explanation: There are at least four major categories of words here:
- Words that refer to entities, such as "texas", "mississippi", "usa", and "austin".
- Words that refer to types, such as "state", "river", and "cities".
- Words that refer to relations, such as "in", "borders", "capital", and "long".
- Other function words, such as "the", "what", "how", and "are".
One might make finer distinctions, but this seems like a reasonable starting point. Note that these categories do not always correspond to traditional syntactic categories. While the entities are typically proper nouns, and the types are typically common nouns, the relations include prepositions, verbs, nouns, and adjectives.
The design of our grammar will roughly follow this schema. The major categories will include $Entity, $Type, $Collection, $Relation, and $Optional.
Optionals
In Unit 2, our grammar engineering process didn't really start cooking until we introduced optionals. This time around, let's begin with the optionals. We'll define as $Optional every word in the Geo880 training data which does not plainly refer to an entity, type, or relation. And we'll let any query be preceded or followed by a sequence of one or more $Optionals.
End of explanation
"""
from annotator import Annotator, NumberAnnotator
class GeobaseAnnotator(Annotator):
def __init__(self, geobase):
self.geobase = geobase
def annotate(self, tokens):
phrase = ' '.join(tokens)
places = self.geobase.binaries_rev['name'][phrase]
return [('$Entity', place) for place in places]
"""
Explanation: Because $Query has not yet been defined, we won't be able to parse anything yet.
Entities and collections
Our grammar will need to be able to recognize names of entities, such as "utah". There are hundreds of entities in Geobase, and we don't want to have to introduce a grammar rule for each entity. Instead, we'll define a new annotator, GeobaseAnnotator, which simply annotates phrases which exactly match names in Geobase.
End of explanation
"""
rules_collection_entity = [
Rule('$Query', '$Collection', lambda sems: sems[0]),
Rule('$Collection', '$Entity', lambda sems: sems[0]),
]
rules = rules_optionals + rules_collection_entity
"""
Explanation: Now a couple of rules that will enable us to parse inputs that simply name locations, such as "utah".
(TODO: explain rationale for $Collection and $Query.)
End of explanation
"""
annotators = [NumberAnnotator(), GeobaseAnnotator(geobase)]
grammar = Grammar(rules=rules, annotators=annotators)
"""
Explanation: Now let's make a grammar.
End of explanation
"""
parses = grammar.parse_input('what is utah')
for parse in parses[:1]:
print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
"""
Explanation: Let's try to parse some inputs which just name locations.
End of explanation
"""
from experiment import sample_wins_and_losses
from geoquery import GeoQueryDomain
from metrics import DenotationOracleAccuracyMetric
from scoring import Model
domain = GeoQueryDomain()
model = Model(grammar=grammar, executor=executor.execute)
metric = DenotationOracleAccuracyMetric()
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
"""
Explanation: Great, it worked. Now let's run an evaluation on the Geo880 training examples.
End of explanation
"""
rules_types = [
Rule('$Collection', '$Type', lambda sems: sems[0]),
Rule('$Type', 'state', 'state'),
Rule('$Type', 'states', 'state'),
Rule('$Type', 'city', 'city'),
Rule('$Type', 'cities', 'city'),
Rule('$Type', 'big cities', 'city'),
Rule('$Type', 'towns', 'city'),
Rule('$Type', 'river', 'river'),
Rule('$Type', 'rivers', 'river'),
Rule('$Type', 'mountain', 'mountain'),
Rule('$Type', 'mountains', 'mountain'),
Rule('$Type', 'mount', 'mountain'),
Rule('$Type', 'peak', 'mountain'),
Rule('$Type', 'road', 'road'),
Rule('$Type', 'roads', 'road'),
Rule('$Type', 'lake', 'lake'),
Rule('$Type', 'lakes', 'lake'),
Rule('$Type', 'country', 'country'),
Rule('$Type', 'countries', 'country'),
]
"""
Explanation: We don't yet have a single win: denotation oracle accuracy remains stuck at zero. However, the average number of parses is slightly greater than zero, meaning that there are a few examples which our grammar can parse (though not correctly). It would be interesting to know which examples those are. There's a utility function in experiment.py which will give you the visibility you need. See if you can figure out what to do.
<!-- 'where is san diego ?' is parsed as '/city/san_diego_ca' -->
Types
(TODO: the words in the training data include lots of words for types. Let's write down some lexical rules defining the category $Type, guided as usual by the words we actually see in the training data. We'll also make $Type a kind of $Collection.)
End of explanation
"""
rules = rules_optionals + rules_collection_entity + rules_types
grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('name the lakes')
for parse in parses[:1]:
print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
"""
Explanation: We should now be able to parse inputs denoting types, such as "name the lakes":
End of explanation
"""
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
"""
Explanation: It worked. Let's evaluate on the Geo880 training data again.
End of explanation
"""
rules_relations = [
Rule('$Collection', '$Relation ?$Optionals $Collection', lambda sems: sems[0](sems[2])),
Rule('$Relation', '$FwdRelation', lambda sems: (lambda arg: (sems[0], arg))),
Rule('$Relation', '$RevRelation', lambda sems: (lambda arg: (arg, sems[0]))),
Rule('$FwdRelation', '$FwdBordersRelation', 'borders'),
Rule('$FwdBordersRelation', 'border'),
Rule('$FwdBordersRelation', 'bordering'),
Rule('$FwdBordersRelation', 'borders'),
Rule('$FwdBordersRelation', 'neighbor'),
Rule('$FwdBordersRelation', 'neighboring'),
Rule('$FwdBordersRelation', 'surrounding'),
Rule('$FwdBordersRelation', 'next to'),
Rule('$FwdRelation', '$FwdTraversesRelation', 'traverses'),
Rule('$FwdTraversesRelation', 'cross ?over'),
Rule('$FwdTraversesRelation', 'flow through'),
Rule('$FwdTraversesRelation', 'flowing through'),
Rule('$FwdTraversesRelation', 'flows through'),
Rule('$FwdTraversesRelation', 'go through'),
Rule('$FwdTraversesRelation', 'goes through'),
Rule('$FwdTraversesRelation', 'in'),
Rule('$FwdTraversesRelation', 'pass through'),
Rule('$FwdTraversesRelation', 'passes through'),
Rule('$FwdTraversesRelation', 'run through'),
Rule('$FwdTraversesRelation', 'running through'),
Rule('$FwdTraversesRelation', 'runs through'),
Rule('$FwdTraversesRelation', 'traverse'),
Rule('$FwdTraversesRelation', 'traverses'),
Rule('$RevRelation', '$RevTraversesRelation', 'traverses'),
Rule('$RevTraversesRelation', 'has'),
Rule('$RevTraversesRelation', 'have'), # 'how many states have major rivers'
Rule('$RevTraversesRelation', 'lie on'),
Rule('$RevTraversesRelation', 'next to'),
Rule('$RevTraversesRelation', 'traversed by'),
Rule('$RevTraversesRelation', 'washed by'),
Rule('$FwdRelation', '$FwdContainsRelation', 'contains'),
# 'how many states have a city named springfield'
Rule('$FwdContainsRelation', 'has'),
Rule('$FwdContainsRelation', 'have'),
Rule('$RevRelation', '$RevContainsRelation', 'contains'),
Rule('$RevContainsRelation', 'contained by'),
Rule('$RevContainsRelation', 'in'),
Rule('$RevContainsRelation', 'found in'),
Rule('$RevContainsRelation', 'located in'),
Rule('$RevContainsRelation', 'of'),
Rule('$RevRelation', '$RevCapitalRelation', 'capital'),
Rule('$RevCapitalRelation', 'capital'),
Rule('$RevCapitalRelation', 'capitals'),
Rule('$RevRelation', '$RevHighestPointRelation', 'highest_point'),
Rule('$RevHighestPointRelation', 'high point'),
Rule('$RevHighestPointRelation', 'high points'),
Rule('$RevHighestPointRelation', 'highest point'),
Rule('$RevHighestPointRelation', 'highest points'),
Rule('$RevRelation', '$RevLowestPointRelation', 'lowest_point'),
Rule('$RevLowestPointRelation', 'low point'),
Rule('$RevLowestPointRelation', 'low points'),
Rule('$RevLowestPointRelation', 'lowest point'),
Rule('$RevLowestPointRelation', 'lowest points'),
Rule('$RevLowestPointRelation', 'lowest spot'),
Rule('$RevRelation', '$RevHighestElevationRelation', 'highest_elevation'),
Rule('$RevHighestElevationRelation', '?highest elevation'),
Rule('$RevRelation', '$RevHeightRelation', 'height'),
Rule('$RevHeightRelation', 'elevation'),
Rule('$RevHeightRelation', 'height'),
Rule('$RevHeightRelation', 'high'),
Rule('$RevHeightRelation', 'tall'),
Rule('$RevRelation', '$RevAreaRelation', 'area'),
Rule('$RevAreaRelation', 'area'),
Rule('$RevAreaRelation', 'big'),
Rule('$RevAreaRelation', 'large'),
Rule('$RevAreaRelation', 'size'),
Rule('$RevRelation', '$RevPopulationRelation', 'population'),
Rule('$RevPopulationRelation', 'big'),
Rule('$RevPopulationRelation', 'large'),
Rule('$RevPopulationRelation', 'populated'),
Rule('$RevPopulationRelation', 'population'),
Rule('$RevPopulationRelation', 'populations'),
Rule('$RevPopulationRelation', 'populous'),
Rule('$RevPopulationRelation', 'size'),
Rule('$RevRelation', '$RevLengthRelation', 'length'),
Rule('$RevLengthRelation', 'length'),
Rule('$RevLengthRelation', 'long'),
]
"""
Explanation: Liftoff! We have two wins, and denotation oracle accuracy is greater than zero! Just barely.
Relations and joins
In order to really make this bird fly, we're going to have to handle relations. In particular, we'd like to be able to parse queries which combine a relation with an entity or collection, such as "what is the capital of vermont".
As usual, we'll adopt a data-driven approach. The training examples include lots of words and phrases which refer to relations, both "forward" relations (like "traverses") and "reverse" relations (like "traversed by"). Guided by the training data, we'll write lexical rules which define the categories $FwdRelation and $RevRelation. Then we'll add rules that allow either a $FwdRelation or a $RevRelation to be promoted to a generic $Relation, with semantic functions which ensure that the semantics are constructed with the proper orientation. Finally, we'll define a rule for joining a $Relation (such as "capital of") with a $Collection (such as "vermont") to yield another $Collection (such as "capital of vermont").
<!-- (TODO: Give a fuller explanation of what's going on with the semantics.) -->
End of explanation
"""
rules = rules_optionals + rules_collection_entity + rules_types + rules_relations
grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('what is the capital of vermont ?')
for parse in parses[:1]:
print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
"""
Explanation: We should now be able to parse "what is the capital of vermont". Let's see:
End of explanation
"""
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
"""
Explanation: Montpelier! I always forget that one.
OK, let's evaluate our progress on the Geo880 training data.
End of explanation
"""
rules_intersection = [
Rule('$Collection', '$Collection $Collection',
lambda sems: ('.and', sems[0], sems[1])),
Rule('$Collection', '$Collection $Optional $Collection',
lambda sems: ('.and', sems[0], sems[2])),
Rule('$Collection', '$Collection $Optional $Optional $Collection',
lambda sems: ('.and', sems[0], sems[3])),
]
rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection
grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('states bordering california')
for parse in parses[:1]:
print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
"""
Explanation: Hot diggity, it's working. Denotation oracle accuracy is over 12%, double digits. We have 75 wins, and they're what we expect: queries that simply combine a relation and an entity (or collection).
Intersections
End of explanation
"""
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
"""
Explanation: Let's evaluate the impact on the Geo880 training examples.
End of explanation
"""
rules_superlatives = [
Rule('$Collection', '$Superlative ?$Optionals $Collection', lambda sems: sems[0] + (sems[2],)),
Rule('$Collection', '$Collection ?$Optionals $Superlative', lambda sems: sems[2] + (sems[0],)),
Rule('$Superlative', 'largest', ('.argmax', 'area')),
Rule('$Superlative', 'largest', ('.argmax', 'population')),
Rule('$Superlative', 'biggest', ('.argmax', 'area')),
Rule('$Superlative', 'biggest', ('.argmax', 'population')),
Rule('$Superlative', 'smallest', ('.argmin', 'area')),
Rule('$Superlative', 'smallest', ('.argmin', 'population')),
Rule('$Superlative', 'longest', ('.argmax', 'length')),
Rule('$Superlative', 'shortest', ('.argmin', 'length')),
Rule('$Superlative', 'tallest', ('.argmax', 'height')),
Rule('$Superlative', 'highest', ('.argmax', 'height')),
Rule('$Superlative', '$MostLeast $RevRelation', lambda sems: (sems[0], sems[1])),
Rule('$MostLeast', 'most', '.argmax'),
Rule('$MostLeast', 'least', '.argmin'),
Rule('$MostLeast', 'lowest', '.argmin'),
Rule('$MostLeast', 'greatest', '.argmax'),
Rule('$MostLeast', 'highest', '.argmax'),
]
"""
Explanation: Great, denotation oracle accuracy has more than doubled, from 12% to 28%. And the wins now include intersections like "which states border new york". The losses, however, are clearly dominated by one category of error.
Superlatives
Many of the losses involve superlatives, such as "biggest" or "shortest". Let's remedy that. As usual, we let the training examples guide us in adding lexical rules.
End of explanation
"""
rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection + rules_superlatives
grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('tallest mountain')
for parse in parses[:1]:
print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
"""
Explanation: Now we should be able to parse "tallest mountain":
End of explanation
"""
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
"""
Explanation: Let's evaluate the impact on the Geo880 training examples.
End of explanation
"""
def reverse(relation_sem):
"""TODO"""
# relation_sem is a lambda function which takes an arg and forms a pair,
# either (rel, arg) or (arg, rel). We want to swap the order of the pair.
def apply_and_swap(arg):
pair = relation_sem(arg)
return (pair[1], pair[0])
return apply_and_swap
rules_reverse_joins = [
Rule('$Collection', '$Collection ?$Optionals $Relation',
lambda sems: reverse(sems[2])(sems[0])),
]
rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection + rules_superlatives + rules_reverse_joins
grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('which states does the rio grande cross')
for parse in parses[:1]:
print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
"""
Explanation: Wow, superlatives make a big difference. Denotation oracle accuracy has surged from 28% to 42%.
Reverse joins
End of explanation
"""
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
"""
Explanation: Let's evaluate the impact on the Geo880 training examples.
End of explanation
"""
from experiment import evaluate_model
from metrics import denotation_match_metrics
evaluate_model(model=model,
examples=geo880_train_examples[:10],
metrics=denotation_match_metrics(),
print_examples=True)
"""
Explanation: This time the gain in denotation oracle accuracy was more modest, from 42% to 47%. Still, we are making good progress. However, note that a substantial gap has opened between accuracy and oracle accuracy. This indicates that we could benefit from adding a scoring model.
Feature engineering
Through an iterative process of grammar engineering, we've managed to increase denotation oracle accuracy to 47%. But we've been ignoring denotation accuracy, which now lags far behind, at 25%. This represents an opportunity.
In order to figure out how best to fix the problem, we need to do some error analysis. Let's look for some specific examples where denotation accuracy is 0, even though denotation oracle accuracy is 1. In other words, let's look for some examples where we have a correct parse, but it's not ranked at the top. We should be able to find some cases like that among the first ten examples of the Geo880 training data.
End of explanation
"""
def empty_denotation_feature(parse):
features = defaultdict(float)
if parse.denotation == ():
features['empty_denotation'] += 1.0
return features
weights = {'empty_denotation': -1.0}
model = Model(grammar=grammar,
feature_fn=empty_denotation_feature,
weights=weights,
executor=executor.execute)
"""
Explanation: Take a look through that output. Over the ten examples, we achieved denotation oracle accuracy of 60%, but denotation accuracy of just 40%. In other words, there were two examples where we generated a correct parse, but failed to rank it at the top. Take a closer look at those two cases.
The first case is "what state has the shortest river ?". The top parse has semantics ('.and', 'state', ('.argmin', 'length', 'river')), which means something like "states that are the shortest river". That's not right. In fact, there's no such thing: the denotation is empty.
The second case is "what is the highest mountain in alaska ?". The top parse has semantics ('.argmax', 'height', ('.and', 'mountain', '/state/alaska')), which means "the highest mountain which is alaska". Again, there's no such thing: the denotation is empty.
So in both of the cases where we put the wrong parse at the top, the top parse had nonsensical semantics with an empty denotation. In fact, if you scroll through the output above, you will see that there are a lot of candidate parses with empty denotations. Seems like we could make a big improvement just by downweighting parses with empty denotations. This is easy to do.
End of explanation
"""
from experiment import evaluate_model
from metrics import denotation_match_metrics
evaluate_model(model=model,
examples=geo880_train_examples,
metrics=denotation_match_metrics(),
print_examples=False)
"""
Explanation: Let's evaluate the impact of using our new empty_denotation feature on the Geo880 training examples.
End of explanation
"""
%lsmagic
"""
Explanation: IPython magic commands - Magics
openthings@163.com
The latest Jupyter Notebook can mix and execute Shell, Python, Ruby, R, and other code!
This capability pushes the strengths of interpreted languages to the limit, breaking down the traditional boundary of a single language "runtime".
IPython is a very pleasant Python console that greatly extends Python's capabilities,
because it is not only a language runtime environment but also a highly efficient analysis tool.
* Previously, each language and IDE was isolated from the others, forcing you to switch between systems and copy/paste data as you worked.
* Magic operators let you enter shell scripts, Ruby, and other languages in the notebook page and execute them together, greatly boosting the productivity of the traditional "console".
* Magics form a single-line, tag-style "command line" system that indicates how, and by which interpreter, the following code should be processed.
* Magics differ very little from traditional shell scripts, but they allow multiple kinds of instructions to be mixed together.
Magics come in two main syntactic forms:
Line magics: begin with a % character; the rest of the line is the command, with arguments separated by spaces and no quoting required.
Cell magics: begin with two percent signs (%%); the entire rest of the cell is the command's input.
Note that a %% magic operator may only appear on the first line of a cell and cannot be nested or repeated (one per cell). In a few special cases magics can be stacked, but only in those rare situations.
Type %lsmagic to get the list of available magic operators,
as shown below (in a Jupyter Notebook, press Shift+Enter to run):
End of explanation
"""
time print("hi")
%time
"""
Explanation: By default the automagic switch is on, so line magics are recognized automatically even without the % prefix.
Note that this can conflict with other names, so take care to avoid collisions; whenever there is ambiguity, just add the % prefix.
The following shows how to measure the time a piece of code takes to run.
End of explanation
"""
ls -l -h
!ls -l -h
files = !ls -l -h
files
"""
Explanation: Running shell commands.
End of explanation
"""
%%!
ls -l
pwd
who
"""
Explanation: Running a multi-line shell script.
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Now let's experience the power of the magic operators.
Load matplotlib and numpy, which will be used for the numerical computation and plotting below.
End of explanation
"""
%timeit np.linalg.eigvals(np.random.rand(100,100))
%%timeit a = np.random.rand(100, 100)
np.linalg.eigvals(a)
"""
Explanation: <!--====--> Simple examples of cell magics
The %timeit magic measures code execution time, and works in both line and cell form:
End of explanation
"""
%%capture capt
from __future__ import print_function
import sys
print('Hello stdout')
print('and stderr', file=sys.stderr)
capt.stdout, capt.stderr
capt.show()
"""
Explanation: The %%capture magic captures stdout/stderr; the output can be displayed directly or stored in a variable for later use:
End of explanation
"""
%%writefile foo.py
print('Hello world')
%run foo
"""
Explanation: The %%writefile magic writes the rest of the cell to a file:
End of explanation
"""
%%script python
import sys
print('hello from Python %s' % sys.version)
%%script python3
import sys
print('hello from Python: %s' % sys.version)
"""
Explanation: <!--====--> Magics for running other interpreters.
IPython has a %%script magic operator that can run interpreters for other languages in a subprocess, including bash, ruby, perl, zsh, R, and so on.
The cell contents are fed to the interpreter on stdin, just as if you were typing them yourself.
Simply pass the command on the %%script line to use it.
The rest of the cell is run by the named interpreter, and the subprocess's stdout/stderr is captured and displayed.
End of explanation
"""
%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"
%%bash
echo "hello from $BASH"
"""
Explanation: IPython defines aliases for common interpreters that can be used directly, e.g. bash, ruby, perl, etc.
Each is equivalent to the operator %%script <name>.
End of explanation
"""
%%writefile ./lnum.py
print('my first line.')
print("my second line.")
print("Finished.")
%%script python ./lnum.py
#
"""
Explanation: Advanced exercise: write your own script file.
Write a script file named lnum.py, then execute it:
End of explanation
"""
%%bash
echo "hi, stdout"
echo "hello, stderr" >&2
%%bash --out output --err error
echo "hi, stdout"
echo "hello, stderr" >&2
"""
Explanation: Capturing output.
You can capture a subprocess's stdout/stderr directly into Python variables, instead of letting it go straight to the console.
End of explanation
"""
print(error)
print(output)
"""
Explanation: The variables can now be accessed directly.
End of explanation
"""
%%ruby --bg --out ruby_lines
for n in 1...10
sleep 1
puts "line #{n}"
STDOUT.flush
end
"""
Explanation: Running scripts in the background
Just add --bg to run a script in the background.
In that case its output is discarded unless you capture it with --out/--err.
End of explanation
"""
ruby_lines
print(ruby_lines.read())
"""
Explanation: When a background job saves its output, you get stdout/stderr pipes rather than the text of the output, so the pipes must be read.
End of explanation
"""
%load_ext cythonmagic
"""
Explanation: The Cython magic extension
Loading the extension
IPython includes a cythonmagic extension that provides several magic functions for working with Cython code. Load it with %load_ext, as follows:
End of explanation
"""
%%cython_pyximport foo
def f(x):
return 4.0*x
f(10)
"""
Explanation: The %%cython_pyximport magic function lets you write arbitrary Cython code in a cell. The Cython code is written to a .pyx file saved in the current working directory, and then imported via pyximport. You must specify a module name, and all of its symbols are imported automatically.
End of explanation
"""
%%cython
cimport cython
from libc.math cimport exp, sqrt, pow, log, erf
@cython.cdivision(True)
cdef double std_norm_cdf(double x) nogil:
return 0.5*(1+erf(x/sqrt(2.0)))
@cython.cdivision(True)
def black_scholes(double s, double k, double t, double v,
double rf, double div, double cp):
"""Price an option using the Black-Scholes model.
s : initial stock price
k : strike price
t : expiration time
v : volatility
rf : risk-free rate
div : dividend
cp : +1/-1 for call/put
"""
cdef double d1, d2, optprice
with nogil:
d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))
d2 = d1 - v*sqrt(t)
optprice = cp*s*exp(-div*t)*std_norm_cdf(cp*d1) - \
cp*k*exp(-rf*t)*std_norm_cdf(cp*d2)
return optprice
black_scholes(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)
"""
Explanation: The %cython magic
The %%cython magic is similar to %%cython_pyximport, but does not require a module name. It manages its modules using temporary files in the ~/.cython/magic directory, and all symbols are imported automatically.
Here is an example of using Cython, the Black-Scholes options pricing algorithm:
End of explanation
"""
#%timeit black_scholes(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)
"""
Explanation: Let's measure the running time.
End of explanation
"""
%%cython -lm
from libc.math cimport sin
print 'sin(1)=', sin(1)
"""
Explanation: Cython allows additional libraries to be linked against your extension using the -l option (or --lib). Note that this option can be given multiple times to link several libraries, such as -lm -llib2 --lib lib3. Here is an example using the system math library:
End of explanation
"""
%reload_ext rmagic
"""
Explanation: Similarly, you can use -I/--include to specify directories for include files, and -c/--compile-args to pass compile options, corresponding to extra_compile_args of the distutils Extension class. See the Cython docs on C library usage for more detailed instructions.
The Rmagic extension
IPython can call into R through the rmagic extension, which is implemented on top of rpy2 (install with: conda install rpy2).
rpy2 documentation: http://rpy2.readthedocs.io/en/version_2.7.x/
First load the extension with %load_ext:
Note: newer versions of rpy2 have changed the interface, so this may not run. See: http://rpy2.readthedocs.io/en/version_2.7.x/interactive.html?highlight=rmagic
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
plt.scatter(X, Y)
"""
Explanation: A typical use is computing statistics over numpy arrays with R. Let's try a simple linear model and produce a scatterplot.
End of explanation
"""
%Rpush X Y
%R lm(Y~X)$coef
"""
Explanation: First push the variables to R, fit the model, and return the result. %Rpush copies variables into rpy2; %R evaluates a string of R code in rpy2 and returns the result, in this case the coefficients of the linear model.
End of explanation
"""
%R resid(lm(Y~X)); coef(lm(X~Y))
"""
Explanation: %R can return multiple values.
End of explanation
"""
b = %R a=resid(lm(Y~X))
%Rpull a
print(a)
assert id(b.data) == id(a.data)
%R -o a
"""
Explanation: %R results can be passed back as Python objects. The return value of a ";"-separated multi-expression line is that of the last expression, coef(lm(X~Y)).
To pull other variables out of R, use %Rpull and %Rget, for variables that exist in the rpy2 namespace after the R code has run.
The main difference is:
%Rget returns the value, while %Rpull pulls it into self.shell.user_ns. Imagine we compute a variable "a" in rpy2's namespace. Using the %R magic, we get the result and store it in b; we can also fetch it into user_ns with %Rpull. Both refer to the same underlying data.
End of explanation
"""
from __future__ import print_function
v1 = %R plot(X,Y); print(summary(lm(Y~X))); vv=mean(X)*mean(Y)
print('v1 is:', v1)
v2 = %R mean(X)*mean(Y)
print('v2 is:', v2)
"""
Explanation: Plotting and capturing output
R's console output via stdout() is captured by IPython.
End of explanation
"""
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
"""
Explanation: Cell-level magic
We want to use R at the cell level, ideally without converting the numpy arrays; see the rnumpy API for R ( http://bitbucket.org/njs/rnumpy/wiki/API ).
End of explanation
"""
x = %octave [1 2; 3 4];
x
a = [1, 2, 3]
%octave_push a
%octave a = a * 2;
%octave_pull a
a
"""
Explanation: octavemagic: Octave inside IPython
octavemagic provides the ability to interact with Octave. It depends on the oct2py and h5py packages.
Load the extension:
Overview
Loading this extension enables three magic functions: %octave, %octave_push, and %octave_pull.
The first executes one or more lines of Octave; the latter two exchange variables between Octave and Python.
End of explanation
"""
%%octave -i x -o y
y = x + 3;
y
"""
Explanation: %%octave: executes multiple lines of Octave. Unlike the line magic, no value is returned, so we use -i and -o to specify the input and output variables.
End of explanation
"""
%%octave -f svg
p = [12 -2.5 -8 -0.1 8];
x = 0:0.01:1;
polyout(p, 'x')
plot(x, polyval(p, x));
"""
Explanation: Plotting
Plot output is captured and displayed automatically; use the -f argument to select the output format (currently png and svg are supported).
End of explanation
"""
%%octave -s 500,500
# butterworth filter, order 2, cutoff pi/2 radians
b = [0.292893218813452 0.585786437626905 0.292893218813452];
a = [1 0 0.171572875253810];
freqz(b, a, 32);
%%octave -s 600,200 -f png
subplot(121);
[x, y] = meshgrid(0:0.1:3);
r = sin(x - 0.5).^2 + cos(y - 0.5).^2;
surf(x, y, r);
subplot(122);
sombrero()
"""
Explanation: Use the -s argument to adjust the plot size:
End of explanation
"""
import numpy as np
import math
from scipy import stats
"""
Explanation: Homework 5 Random Forests and Decision Trees.
End of explanation
"""
# Based on the standard definition of entropy: H = -sum_i p_i * log2(p_i).
def entropy(data, classes):
    if len(data) == 0:
        return 0
    entr = 0
    for cls in classes:
        # Restrict each class to the examples actually in data, so the
        # probabilities are correct when entropy is taken over a subset.
        members = [x for x in cls if any(np.array_equal(x, d) for d in data)]
        probi = len(members)/len(data)
        if probi > 0:  # treat 0*log(0) as 0
            entr += -probi*math.log2(probi)
    return entr
# Determines the information gain for an attribute for a data set.
def discrete_igain(data, classes, a, possible_a):
data_entropy = entropy(data,classes)
return discrete_igain_fast(data,classes,a,possible_a,data_entropy)
def discrete_igain_fast(data,classes, a , possible_a, data_entropy):
mutualinf = 0
for attr in possible_a:
data_attr = [x for x in data if x[a] == attr]
mutualinf += len(data_attr)/len(data)*entropy(data_attr, classes)
return data_entropy - mutualinf
#not an efficient classes algorithm, but no fucks given; only use once.
def get_classes(data, labels):
cls = []
label_vals = np.unique(labels)
for label in label_vals:
cls.append([x for x,y in zip(data,labels) if y == label])
return cls
def get_attrvals(data, a):
return np.unique(np.transpose(data)[a])
"""
Explanation: Let's implement a decision tree using Shannon entropy and expected information gain.
To do this we need to:
* Implement functions for computing information entropy
* Implement a decision tree class which acts on a "labeled" dataset
Information Theory
End of explanation
"""
#XOR!
x = np.array([[0,0],
[1,0],
[0,1],
[1,1],
[-1,1],
[-1,0],
[-1,-1]])
y = np.array([0,1,1,0,0,0,1])
cls = get_classes(x,y)
entropy(x, cls)
discrete_igain(x, cls, 1, get_attrvals(x, 1))
def submap(data,label, restriction):
indices, constraints = zip(*restriction)
a,b = zip(*[(x,y) for x,y in zip(data,label) if x[indices] == constraints])
return (np.array(list(a)), np.array(list(b)))
xp, yp =submap(x,y, [(0,-1)])
clsp = get_classes(xp,yp)
def tabstr(n):
retstr = ""
for i in range(n):
retstr += "\t"
return retstr
"""
Explanation: Tests
End of explanation
"""
# A toy example of the discrete decision tree
class ddtree:
### gets decisional
def __init__(self, data, labels, restricted=[]):
self.data = data
self.labels = labels
self.classes = get_classes(data,labels)
self.subtrees = {}
self.entropy = entropy(self.data, self.classes)
self.attr = None
self.restricted = restricted
#trains the tree:
def train(self):
#check to see that the label set is unique
if len(np.unique(self.labels)) <= 1:
return
#Get the igains for all of the attributes.
igains = []
for i in range(self.data.shape[1]): #random forest would take a random sub sample.
#make sure we don't consider all the values a previous node has considered.
if self.restricted is None or i not in self.restricted:
atr_vals = get_attrvals(self.data,i)
if len(atr_vals) >1: #We only want to consider attributes whose possible values are different
igains.append(discrete_igain(self.data, self.classes, i, atr_vals))
#Best attribute is the argmax
best_attr = np.argmax(igains)
self.attr = best_attr
# restrict the attributes which the trees can consider!
subres = [best_attr]
subres.extend(self.restricted)
#Make the sub decision trees for each choice of the attribute.
for val in get_attrvals(self.data,best_attr):
#make a subtree
dp, lp = submap(self.data, self.labels, [(best_attr, val)])
#make the subtree decisions based on if they satisfy a lambda; to abstract in dynamic trees.
tree = ddtree(dp, lp, subres)
self.subtrees[(val, lambda x, best_attr=best_attr, val=val: x[best_attr] == val)] = tree
#train all of the trees
for (val, test), tree in self.subtrees.items():
tree.train()
def classify(self, x):
if len(self.subtrees) > 0 and self.attr is not None:
for (val, test), tree in self.subtrees.items():
if test(x):
return tree.classify(x)
else:
return self.labels[0] #Return the only label in the tree.
def print_tree(self, n):
if len(self.subtrees) > 0 and self.attr is not None:
retstr = "x[" + str(self.attr) + "] subtrees(" + str(len(self.subtrees)) + ")\n" + tabstr(n)
for (val, test), tree in self.subtrees.items():
retstr += "- " + str(val) + " ->"+"\t" + tree.print_tree(n+1) +"\n" + tabstr(n)
else:
return "Y:" + str(self.labels[0])
return retstr
def __str__(self):
return self.print_tree(0)
test = ddtree(x,y)
# Let's try training this thing!
test.train()
print(test)
print("Classification error!")
for i in range(len(x)):
print(test.classify(x[i]), y[i])
"""
Explanation: Discrete Decision Trees
Since we've built our tests and information-theoretic methods, it is now probably a good idea to build a decision tree class. This should attempt to maximize information gain upon construction and thereafter be able to classify
any given training example confined to the attributional classes of the dataset.
After we do this, we'll define a decision tree which can take discrete and continuous values, specified during construction.
End of explanation
"""
#def indicator
class Indicator:
def __init__(self, func, name):
self.func = func
self.name = name
def __call__(self, *args, **kwargs):
return self.func(*args, **kwargs)
def __str__(self):
return str(self.name)
def __repr__(self):
return str(self)
# make our indicator functions.
def dindicator(a):
return Indicator((lambda x,a=a: x == a), str(a))
def ci_indicator(a,b):
return Indicator((lambda x,a=a,b=b: a <= x and x < b), "[" +str(a) + ", " + str(b) + ")")
def cray_indicator(a, inf):
if inf > 0:
return Indicator((lambda x,a=a: a <= x), "(" + str(a) +", inf)")
elif inf < 0:
return Indicator((lambda x,a=a: x < a), "(-inf," + str(a) +")")
else:
return dindicator(a)
def split_indicators(splits):
indicators = []
#make end rays
indicators.append(cray_indicator(splits[0], -1))
#make middle intervals
for i, val in enumerate(splits[:-1]):
indicators.append(ci_indicator(splits[i], splits[i+1]))
#make end rays
indicators.append(cray_indicator(splits[-1], 1))
return indicators
def singleton_indicator(possible_vals):
indicators = []
for val in possible_vals:
indicators.append(dindicator(val))
return indicators
"""
Explanation: Full decision trees with continuous values and random subset selection!
In order to implement the full version of decision trees, we don't think of attributes as elements of a set but as indicator functions of subsets. In the case of the ddtree we have $\chi_{\{a\}}$ for the singleton sets $\{a\}$. For the full version we'll just extend this notion to "axis-aligned" intervals.
End of explanation
"""
# Based on the standard definition of entropy.
def entropy(data, classes):
    if len(data) == 0:
        return 0
    entr = 0
    for cls in classes:
        # np.intersect1d flattens 2D arrays, so compare rows explicitly when
        # restricting the class to the examples present in data.
        members = [x for x in cls if any(np.array_equal(x, d) for d in data)]
        probi = len(members)/len(data)
        if probi > 0:  # treat 0*log(0) as 0
            entr += -probi*math.log2(probi)
    return entr
# Determines the information gain for an attribute for a data set.
def igain(data, classes, attribute, indicators):
data_entropy = entropy(data,classes)
return igain_fast(data, classes, attribute, indicators, data_entropy)
def igain_fast(data, classes, attribute, indicators, data_entropy):
mutualinf = 0
for indicator in indicators:
data_attr = [x for x in data if indicator(x[attribute])]
mutualinf += len(data_attr)/len(data)*entropy(data_attr, classes)
return data_entropy - mutualinf
#not an efficient class-grouping algorithm, but no fucks given; only use once.
def get_classes(data, labels):
cls = []
label_vals = np.unique(labels)
for label in label_vals:
cls.append([x for x,y in zip(data,labels) if y == label])
return cls
def get_indicators(data, classes, a, discrete=True, heuristic=None):
if discrete:
return singleton_indicator(np.unique(np.transpose(data)[a]))
else:
if heuristic is None:
return split_indicators([np.mean(np.transpose(data)[a])])
else:
return heuristic(data, classes, a, np.transpose(data)[a])
# Submaps data based on restrictions.
def submap(data,label, restriction):
def satisfies(x, restriction=restriction):
for index, constraint in restriction:
if not constraint(x[index]):
return False
return True
valueset = [(x,y) for x,y in zip(data,label) if satisfies(x)]
if not valueset:
return None, None
else:
a,b = zip(*valueset)
return (np.array(list(a)), np.array(list(b)))
"""
Explanation: We need to now make the indicator versions of our helper functions.
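The `entropy` helper above can be sanity-checked against the formula $H = -\sum_i p_i \log_2 p_i$ with a standalone version that takes class probabilities directly:

```python
import math

# Standalone entropy over a list of class probabilities.
def entropy_of(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert entropy_of([0.5, 0.5]) == 1.0     # a fair coin carries exactly one bit
assert entropy_of([1.0]) == 0.0          # a pure node has zero entropy
assert 0 < entropy_of([0.9, 0.1]) < 1.0  # skewed mixtures fall in between
```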
End of explanation
"""
import random
#Breaks an interval up into n random splits
def get_random_splits(n, M, m):
splits = []
for i in range(n):
splits.append(random.random()*(M-m) + m)
splits.sort()
return splits
def random_heuristic(n=None):
def get_splits(data, classes, a, values, n=n):
if n is None:
n = math.floor(math.sqrt(len(values)))
M,m = np.max(values), np.min(values)
splits = get_random_splits(n, M, m)
return split_indicators(splits)
return get_splits
def entropy_heuristic():
def get_splits(data, classes, a, values):
possible_indicators = []
values = np.unique(values)
for value in values:
possible_indicators.append(split_indicators([value]))
igains = list(map(lambda x: igain(data, classes, a, x), possible_indicators))
return possible_indicators[np.argmax(igains)]
return get_splits
# Gets the most advantageous information gain according to splits
# TODO TEST:
def random_entropy_heuristic(n=None,p=100):
def get_splits(data,classes, a, values, n=n, p=p):
possible_indicators = []
rando_splitter = random_heuristic(n)
for i in range(p):
possible_indicators.append(rando_splitter(data, classes,a, values))
igains = map(lambda x: igain(data, classes, a, x), possible_indicators)
return possible_indicators[np.argmax(igains)]
return get_splits
"""
Explanation: Splitpoint Heuristics
We need heuristics for data deemed continuous. How should we split it?
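The core of `random_heuristic` above, pulled out as a standalone sketch: draw `n` cut points uniformly inside the attribute's observed range and sort them so they define contiguous intervals:

```python
import random

random.seed(0)
m, M, n = 0.0, 1.0, 3   # observed min, observed max, number of cuts
splits = sorted(random.random() * (M - m) + m for _ in range(n))

assert len(splits) == n
assert all(m <= s <= M for s in splits)
assert splits == sorted(splits)   # already ordered, ready for interval indicators
```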
End of explanation
"""
# A toy example of the discrete decision tree
class dtree:
### gets decisional
def __init__(self, data, labels, restricted=[], depth=0, maxdepth=None):
self.data = data
self.labels = labels
self.classes = get_classes(data,labels)
self.subtrees = {}
self.entropy = entropy(self.data, self.classes)
self.attr = None
self.depth = depth
self.restricted = restricted
self.maxdepth = maxdepth
def bag(self, attributes, bag):
if bag > 0:
return random.sample(attributes, min(bag, len(attributes)))
else:
return attributes
#trains the tree:
def train(self, bag=0, typetable=None, heuristic=None):
#check to see that the label set is unique
if len(np.unique(self.labels)) <= 1:
return
#Get the igains for all of the attributes.
igains = []
attr_indic = {}
attributes = [ i for i in range(len(self.data[0])) if i not in self.restricted]
attributes = self.bag( attributes, bag)
if not attributes or self.depth == self.maxdepth:
return #We've reached the end of our tree.
for i in attributes: #random forest would take a random sub sample.
#make sure we don't consider all the values a previous node has considered.
if self.restricted is None or i not in self.restricted:
discrete = False
#Assume that all data is discrete :()
if typetable is False:
discrete = False
elif typetable is None or typetable[i]:
discrete = True
indicators = get_indicators(self.data, self.classes, i, discrete, heuristic)
# In case this is the best indicator
attr_indic[i] = indicators
#We only want to consider attributes whose possible values are different
igains.append(igain(self.data, self.classes, i, indicators))
#Best attribute is the argmax
best_attr = attributes[np.argmax(np.array(igains))]
self.attr = best_attr
# restrict the attributes which the trees can consider!
subres = [best_attr]
subres.extend(self.restricted)
#Make the sub decision trees for each choice of the attribute.
for val in attr_indic[best_attr]:
#make a subtree
dp, lp = submap(self.data, self.labels, [(best_attr, val)])
#Only make a subtree if there are datapoints which would even satisfy it!
if dp is not None:
#make the subtree decisions based on if they satisfy a lambda; to abstract in dynamic trees.
tree = dtree(dp, lp, subres, depth=self.depth+1, maxdepth=self.maxdepth)
self.subtrees[(val, lambda x, best_attr=best_attr, val=val: val(x[best_attr]))] = tree
#train all of the trees
for (val, test), tree in self.subtrees.items():
tree.train(bag, typetable, heuristic)
def classify(self, x):
if len(self.subtrees) > 0 and self.attr is not None:
for (val, test), tree in self.subtrees.items():
if test(x):
return tree.classify(x)
else:
            return stats.mode(self.labels)[0][0] #Return the most common label in this subtree.
def print_tree(self, n):
if len(self.subtrees) > 0 and self.attr is not None:
retstr = "x[" + str(self.attr) + "] subtrees(" + str(len(self.subtrees)) + ")\n" + tabstr(n)
for (val, test), tree in self.subtrees.items():
retstr += "- " + str(val) + " ->"+"\t" + tree.print_tree(n+1) +"\n" + tabstr(n)
else:
return "Y:" + str(self.labels[0])
return retstr
def __str__(self):
return self.print_tree(0)
"""
Explanation: Decision Tree
Now that we've resolved the problem to acting on arbitrary indicator functions for subsets, we can consider building a general decision tree!
End of explanation
"""
test = dtree(x,y) #Shit it worked!
test.train()
print(test)
print("Classification error!")
for i in range(len(x)):
print(test.classify(x[i]), y[i])
"""
Explanation: Wow, that was a lot. Okay, if that worked, we should definitely be able to handle the discrete case!
End of explanation
"""
# A 1-D threshold dataset: the label flips once the second feature reaches 0.5
x = np.array([[0,0],
[0,0.1],
[0,0.2],
[0,0.3],
[0,0.4],
[0,0.5],
[0,0.6],
[0,0.7],
[0,0.8],
[0,0.9],
[0,1.0],])
y = np.array([0,0,0,0,0,1,1,1,1,1,1])
cls = get_classes(x,y)
typetable = {}
typetable[0] = True
typetable[1] = False
test = dtree(x,y)
test.train(typetable=typetable, heuristic=entropy_heuristic())
print(test)
print("Classification error!")
for i in range(len(x)):
print(test.classify(x[i]), y[i])
"""
Explanation: Shit that worked. Let's try some continuous data.
End of explanation
"""
# Not meant for regression!
class rforest:
#initalizes all the trees and partitions the data.
def __init__(self, data, labels, numtrees, partition_size, maxdepth=None):
self.trees = []
self.data = data
self.labels = labels
self.partitions = []
#create partitions
for i in range(numtrees):
dpart, lpart = zip(*random.sample(list(zip(data,labels)), partition_size))
self.partitions.append((dpart, lpart))
self.trees.append(dtree(np.array(list(dpart)), np.array(list(lpart)), maxdepth=maxdepth))
def train(self, bag=0, typetable=None, heuristic=None):
for tree in self.trees:
tree.train(bag, typetable, heuristic)
#Ensemble voting
def ensemble(self,x):
box = {}
for tree in self.trees:
vote = tree.classify(x)
if vote is not None:
if vote in box:
box[vote] += 1
else:
box[vote] = 1
return box
def classify(self, x):
predictions = self.ensemble(x)
i, m = None, -1
for pred, votes in predictions.items():
if votes > m:
i,m = pred, votes
return i
"""
Explanation: Random Forests
Now we can aggregate our tree fun with random forests, which decorrelate decision trees and take their average!
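The ensemble step boils down to majority voting; a standalone sketch with hypothetical per-tree predictions:

```python
from collections import Counter

tree_votes = [1, 0, 1, 1, 0]   # hypothetical predictions, one per tree
prediction = Counter(tree_votes).most_common(1)[0][0]
assert prediction == 1         # three of five trees voted for class 1
```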
End of explanation
"""
test = rforest(x,y, 6, 3)
test.train(heuristic=entropy_heuristic(), bag=1)
print("Classification error!")
for i in range(len(x)):
print(test.ensemble(x[i]), y[i])
"""
Explanation: Okay now let's try this thing on that x,y dataset
End of explanation
"""
import scipy.io
from sklearn.preprocessing import normalize
spam_data = scipy.io.loadmat("../data/spam_dataset/spam_data.mat")
spam_train = spam_data['training_data']
spam_test = spam_data['test_data']
spam_train_data_raw = spam_train
spam_train_label_raw = spam_data['training_labels']
spam_tlabel = spam_train_label_raw.ravel()
div_train = [1.0/max(arr) for arr in spam_train_data_raw.T]
spam_tdata = spam_train_data_raw
spam_test_data = spam_test
#Shuffle that spam data good.
shuffle = np.random.permutation(np.arange(spam_tdata.shape[0]))
spam_tdata, spam_tlabel = spam_tdata[shuffle], spam_tlabel[shuffle]
#VALIDATION
spam_valid_data = spam_tdata[0:750]
spam_valid_label = spam_tlabel[0:750]
#TRAINING
spam_train_data =spam_tdata[750:]
spam_train_label =spam_tlabel[750:]
ents = rforest(spam_train_data, spam_train_label, 20, int(len(spam_train_data)/5), maxdepth=20)
ents.train(typetable=False, heuristic=random_entropy_heuristic(3, 100),bag=20)
print("Classification error!")
net_error = 0
for i in range(len(spam_valid_data)):
pred = ents.classify(spam_valid_data[i])
if pred is None:
print(pred)
if pred is not None:
net_error += abs(pred - spam_valid_label[i])
print(net_error/len(spam_valid_data))
spam_results = []
for i, dp in enumerate(spam_test_data):
pred = ents.classify(dp)
if pred is None:
pred = 0
spam_results.append(np.array([i+1, pred]))
np.savetxt(
'kagglespam.csv', # file name
spam_results, # array to save
fmt='%i', # formatting, 2 digits in this case
delimiter=',', # column delimiter
newline='\n')
"""
Explanation: Let's do Some fucking Spam
End of explanation
"""
import csv
raw_census_data = []
numerical = ['age', 'fnlwgt','education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
with open('../data/census_data/train_data.csv') as csvFile:
reader = csv.DictReader(csvFile)
for row in reader:
raw_census_data.append(row)
census_data = []
census_label =[]
seen = {}
for data in raw_census_data:
dp = []
for attr in data:
if attr == "label":
census_label.append(int(data[attr]))
if data[attr] == '?':
dp.append(0)
if attr in numerical:
dp.append(int(data[attr]))
else:
if attr not in seen:
seen[attr] = []
if data[attr] not in seen[attr]:
seen[attr].append(data[attr])
dp.append(seen[attr].index(data[attr]))
census_data.append(dp)
typetable = [x == 0 for x in census_data[0]]
census_train_data = census_data
census_label= np.array(census_label)
import csv
raw_census_data = []
numerical = ['age', 'fnlwgt','education-num', 'capital-gain', 'capital-loss', 'hours-per-week' ]
with open('../data/census_data/test_data.csv') as csvFile:
reader = csv.DictReader(csvFile)
for row in reader:
raw_census_data.append(row)
census_data = []
for data in raw_census_data:
dp = []
for attr in data:
if attr == "label":
census_label.extend(int(data[attr]))
if data[attr] == '?':
dp.append(0)
if attr in numerical:
dp.append(int(data[attr]))
else:
if attr not in seen:
seen[attr] = []
if data[attr] not in seen[attr]:
seen[attr].append(data[attr])
dp.append(seen[attr].index(data[attr]))
census_data.append(dp)
census_test_data = census_data
ents = rforest(census_train_data, census_label, 20, int(len(census_train_data)/5), maxdepth=20)
len(typetable)
ents.train(typetable=typetable, heuristic=entropy_heuristic())
print("Classification error!")
net_error = 0
for i in range(len(census_train_data)):
pred = ents.classify(census_train_data[i])
if pred is None:
print(pred)
if pred is not None:
net_error += abs(pred - census_label[i])
print(net_error/len(spam_valid_data))
spam_results = []
for i, dp in enumerate(census_test_data):
pred = ents.classify(dp)
if pred is None:
pred = 0
spam_results.append(np.array([i+1, pred]))
np.savetxt(
'census.csv', # file name
spam_results, # array to save
fmt='%i', # formatting, 2 digits in this case
delimiter=',', # column delimiter
newline='\n')
"""
Explanation: Census
End of explanation
"""
|
edeno/Jadhav-2016-Data-Analysis | notebooks/2017_06_09_Spectral Granger.ipynb | gpl-3.0 | time_extent = (0, .250)
num_trials = 500
sampling_frequency = 200
num_time_points = ((time_extent[1] - time_extent[0]) * sampling_frequency) + 1
time = np.linspace(time_extent[0], time_extent[1], num=num_time_points, endpoint=True)
signal_shape = (len(time), num_trials)
np.random.seed(2)
def simulate_arma_model(ar_coefficients, ma_coefficients=None,
signal_shape=(100,1), sigma=1, axis=0, num_burnin_samples=10):
ar = np.r_[1, -ar_coefficients] # add zero-lag and negate
if ma_coefficients is None:
ma = np.asarray([1])
else:
ma = np.r_[1, ma_coefficients] # add zero-lag
# Add burnin samples to shape
signal_shape = list(signal_shape)
signal_shape[axis] += num_burnin_samples
# Get arma process
white_noise = np.random.normal(0, sigma, size=signal_shape)
signal = scipy.signal.lfilter(ma, ar, white_noise, axis=axis)
# Return non-burnin samples
slc = [slice(None)] * len(signal_shape)
slc[axis] = slice(num_burnin_samples, signal_shape[axis], 1)
    return signal[tuple(slc)]  # index with a tuple; list-of-slices indexing is deprecated in NumPy
ar1 = np.array([.55, -0.70])
x1 = simulate_arma_model(ar1, signal_shape=signal_shape, sigma=1, num_burnin_samples=sampling_frequency)
arima_model.ARMA(x1[:, 0], (2,0)).fit(trend='nc', disp=0).summary()
ar2 = np.array([.56, -0.75])
x2 = simulate_arma_model(ar2, signal_shape=signal_shape, sigma=2, num_burnin_samples=sampling_frequency)
arima_model.ARMA(x2[:, 0], (2,0)).fit(trend='nc', disp=0).summary()
x2[1:, :] += 0.60 * x1[:-1, :]  # add the model's 0.60*x1(t-1) coupling term
"""
Explanation: Simulated Network
Network from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3971884/#RSTA20110610M2x27
$x1 \rightarrow x2$:
$x_1(t) = 0.55x_1(t-1) - 0.70x_1(t-2) + \epsilon_1(t)$,
$x_2(t) = 0.56x_2(t-1) - 0.75x_2(t-2) + 0.60x_1(t-1) + \epsilon_2(t)$
where $\sigma_1^2=1.00$ for $\epsilon_1$ and $\sigma_2^2=2.00$ for $\epsilon_2$, both with mean zero
End of explanation
"""
psd1 = spectral.multitaper_power_spectral_density(x1,
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=1,
desired_frequencies=[0, 100])
psd2 = spectral.multitaper_power_spectral_density(x2,
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=1,
desired_frequencies=[0, 100])
fig, axes = plt.subplots(1,2, figsize=(4,3), sharex=True, sharey=True)
psd1.plot(ax=axes[0], legend=False)
axes[0].set_ylabel('Power')
axes[0].set_title('x1')
axes[0].axvline(40, color='black', linestyle=':')
psd2.plot(ax=axes[1], legend=False)
axes[1].axvline(40, color='black', linestyle=':')
axes[1].set_title('x2')
plt.tight_layout()
"""
Explanation: x1 and x2 have spectral peaks at 40 Hz
End of explanation
"""
# Step 1
centered_x1 = spectral._subtract_mean(x1)
centered_x2 = spectral._subtract_mean(x2)
x = np.concatenate((centered_x1[..., np.newaxis],
centered_x2[..., np.newaxis]),
axis=-1)
num_lfps = x.shape[-1]
# Step 2
order = 3
fit = [alg.MAR_est_LWR(x[:, trial, :].T, order)
for trial in np.arange(x1.shape[1])]
# A shape: order-1 x num_lfps x num_lfps
# cov shape: num_lfps x num_lfps
# Step 3
Sigma = np.mean([trial_fit[1] for trial_fit in fit], axis=0)
# Step 4
A = np.mean([trial_fit[0] for trial_fit in fit], axis=0)
# Step 5
pad = 0
number_of_time_samples = int(num_time_points)
next_exponent = spectral._nextpower2(number_of_time_samples)
number_of_fft_samples = max(
2 ** (next_exponent + pad), number_of_time_samples)
half_of_fft_samples = number_of_fft_samples//2 - 1
A_0 = np.concatenate((np.eye(num_lfps)[np.newaxis, :, :], A)).reshape((order, -1))
B = np.zeros((A_0.shape[-1], half_of_fft_samples), dtype='complex')
for coef_ind in np.arange(A_0.shape[-1]):
normalized_freq, B[coef_ind, :] = scipy.signal.freqz(A_0[:, coef_ind],
worN=half_of_fft_samples)
B = B.reshape((num_lfps, num_lfps, half_of_fft_samples))
H = np.zeros_like(B)
for freq_ind in np.arange(half_of_fft_samples):
H[:, :, freq_ind] = np.linalg.inv(B[:, :, freq_ind])
freq = (normalized_freq * sampling_frequency) / (2 * np.pi)
# Step 6
S = np.zeros_like(H)
for freq_ind in np.arange(H.shape[-1]):
S[:, :, freq_ind] = np.linalg.multi_dot(
[H[:, :, freq_ind],
Sigma,
H[:, :, freq_ind].conj().transpose()])
S = np.abs(S)
I12 = -np.log(1 - ((Sigma[0, 0] - Sigma[0, 1]**2 / Sigma[1, 1]) * np.abs(H[1, 0])**2) / S[1, 1])
# I12 = np.log( S[1, 1] / (S[1,1] - (Sigma[0, 0] - Sigma[0, 1]**2 / Sigma[1, 1]) * np.abs(H[1, 0])**2))
I21 = -np.log(1 - ((Sigma[1, 1] - Sigma[0, 1]**2 / Sigma[0, 0]) * np.abs(H[0, 1])**2) / S[0, 0])
plt.plot(freq, I12, label='x1 → x2')
plt.plot(freq, I21, label='x2 → x1')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Granger Causality')
plt.legend();
"""
Explanation: Spectral Granger Causality
Parametric version
Steps:
1. Make LFPs zero mean (possibly also divide by standard deviation as in Gregoriou et al. 2009)
2. For each trial, estimate the multivariate autoregressive model:
$$ \sum_{k=0}^{m} A_k X_{t-k} = E_t $$
where $m$ is the order of the model, $A_k$ is the coefficient matrix and $E_t$ is the residual error with covariance matrix $\Sigma$
3. Average covariance matrix $\Sigma$ of the noise term $E_t$ of the multivariate autoregressive model over trials
4. Average estimated coefficients $A_k$ of the multivariate autoregressive model over trials
5. Calculate transfer function $H$ from the estimated coefficients of the multivariate autoregressive model
$$ H(f) = \left(\sum_{k=0}^{m} A_k e^{-2\pi ikf} \right)^{-1}
$$
6. Calculate the spectral matrix $S$ from the transfer function and the covariance matrix
$$ S(f) = H(f)\Sigma H(f)^*
$$
7. Compute the spectral granger using the covariance matrix, transfer function, and spectral matrix:
$$ I_{1 \rightarrow 2} =
-\ln\left\{1 -
\frac{\left(\Sigma_{11} - \frac{\Sigma_{12}^2}{\Sigma_{22}}\right)
\left|H_{21}\right|^2}
{S_{22}(f)}\right\}
$$
$$ I_{2 \rightarrow 1} =
-\ln\left\{1 -
\frac{\left(\Sigma_{22} - \frac{\Sigma_{12}^2}{\Sigma_{11}}\right)
\left|H_{12}\right|^2}
{S_{11}(f)}\right\}
$$
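As a standalone numerical check of steps 5 and 6 (a toy model, not the one fitted above): for a bivariate AR(1) $X_t = B X_{t-1} + E_t$ we have $A_0 = I$ and $A_1 = -B$, so $H(f) = (I - B e^{-2\pi i f})^{-1}$, and the resulting spectral matrix should be Hermitian with positive auto-spectra at every frequency:

```python
import numpy as np

B = np.array([[0.5, 0.0],
              [0.6, 0.3]])        # x1 drives x2 through the (2, 1) entry
Sigma = np.diag([1.0, 2.0])       # residual covariance

for f in np.linspace(0, 0.5, 6):  # normalized frequency in cycles/sample
    H = np.linalg.inv(np.eye(2) - B * np.exp(-2j * np.pi * f))  # step 5
    S = H @ Sigma @ H.conj().T                                  # step 6
    assert np.allclose(S, S.conj().T)   # Hermitian at every frequency
    assert np.all(np.diag(S).real > 0)  # positive auto-spectra
```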
End of explanation
"""
order = 3
fit = [alg.MAR_est_LWR(x[:, trial, :].T, order)
for trial in np.arange(x1.shape[1])]
Sigma = np.mean([trial_fit[1] for trial_fit in fit], axis=0)
A = np.mean([trial_fit[0] for trial_fit in fit], axis=0)
normalized_freq, f_x2y, f_y2x, f_xy, Sw = alg.granger_causality_xy(A, Sigma, n_freqs=number_of_fft_samples)
freq = (normalized_freq * sampling_frequency) / (2 * np.pi)
plt.plot(freq, f_x2y, label='x1 → x2')
plt.plot(freq, f_y2x, label='x2 → x1')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Granger Causality')
plt.legend();
"""
Explanation: Compare with nitime version
End of explanation
"""
# Step 1
def get_complex_spectrum(data,
sampling_frequency=1000,
time_halfbandwidth_product=3,
pad=0,
tapers=None,
frequencies=None,
freq_ind=None,
number_of_fft_samples=None,
number_of_tapers=None,
desired_frequencies=None):
complex_spectrum = spectral._multitaper_fft(
tapers, spectral._center_data(data), number_of_fft_samples, sampling_frequency)
return np.nanmean(complex_spectrum[freq_ind, :, :], axis=(1, 2)).squeeze()
data = [x1, x2]
num_signals = len(data)
time_halfbandwidth_product = 1
tapers, number_of_fft_samples, frequencies, freq_ind = spectral._set_default_multitaper_parameters(
number_of_time_samples=data[0].shape[0],
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=time_halfbandwidth_product)
complex_spectra = [get_complex_spectrum(
signal,
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=time_halfbandwidth_product,
tapers=tapers,
frequencies=frequencies,
freq_ind=freq_ind,
number_of_fft_samples=number_of_fft_samples) for signal in data]
num_frequencies = complex_spectra[0].shape[0]  # frequency bins kept per averaged spectrum
S = np.stack([np.conj(complex_spectrum1) * complex_spectrum2
              for complex_spectrum1, complex_spectrum2
              in itertools.product(complex_spectra, repeat=2)])
S = S.reshape((num_signals, num_signals, num_frequencies))
"""
Explanation: Non-parametric version
Steps:
1. Construct the complex spectral density matrix $S$ from the multitaper fft of all signals
2. Factorize the spectral density matrix $S = \Psi \Psi^{*}$ where $\Psi$ is the minimum phase factor
3. Get $A_0$, the zeroth coefficient of the Z expansion $\Psi(z) = \sum_{k=0}^{\infty} A_k z^k$ where $z=e^{i2\pi f}$, so $\Psi(0) = A_0$
4. Compute the noise covariance $\Sigma = A_0 A_0^T$
5. Compute the transfer function $H = \Psi A_0^{-1}$
6. Compute the granger causality with $\Sigma$, $S$, and $H$
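Step 1 in miniature (one trial, one taper, plain FFT instead of multitapers): every pairwise product of conjugated spectra stacks into the cross-spectral density matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=256)
x2 = 0.5 * x1 + rng.normal(size=256)

ffts = [np.fft.rfft(x1), np.fft.rfft(x2)]
# S[i, j, f] = conj(X_i(f)) * X_j(f)
S = np.array([[np.conj(a) * b for b in ffts] for a in ffts])

assert S.shape == (2, 2, 129)                  # 256-sample rfft -> 129 bins
assert np.allclose(S[1, 0], np.conj(S[0, 1]))  # Hermitian across channels
assert np.all(S[0, 0].real >= 0) and np.allclose(S[0, 0].imag, 0)  # real power
```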
End of explanation
"""
A0 = np.random.normal(size=(num_signals, 1000))
A0 = np.dot(A0, A0.T) / 1000;
A0 = np.linalg.cholesky(A0).T
num_two_sided_frequencies = 2 * num_frequencies - 1
Psi = np.zeros((num_signals, num_signals, num_two_sided_frequencies), dtype=complex)
"""
Explanation: Wilson spectral matrix factorization
Iterative algorithm based on Newton-Raphson. Want to solve: $S - \Psi \Psi^{*} = 0$
Steps:
1. Initialize A0 as a random upper triangular matrix
2. Iterate a Newton-Raphson-style update of $\Psi$ until $S - \Psi \Psi^{*}$ converges to zero
End of explanation
"""
|
taspinar/siml | notebooks/Machine Learning with Signal Processing techniques.ipynb | mit | from siml.sk_utils import *
from siml.signal_analysis_utils import *
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict, Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
"""
Explanation: This Notebook is accompanied by the following blog-post:
http://ataspinar.com/2018/04/04/machine-learning-with-signal-processing-techniques/
End of explanation
"""
activities_description = {
1: 'walking',
2: 'walking upstairs',
3: 'walking downstairs',
4: 'sitting',
5: 'standing',
6: 'laying'
}
def read_signals(filename):
with open(filename, 'r') as fp:
data = fp.read().splitlines()
data = map(lambda x: x.rstrip().lstrip().split(), data)
data = [list(map(float, line)) for line in data]
return data
def read_labels(filename):
with open(filename, 'r') as fp:
activities = fp.read().splitlines()
activities = list(map(int, activities))
return activities
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation, :, :]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
INPUT_FOLDER_TRAIN = '../datasets/UCI_HAR/train/InertialSignals/'
INPUT_FOLDER_TEST = '../datasets/UCI_HAR/test/InertialSignals/'
INPUT_FILES_TRAIN = ['body_acc_x_train.txt', 'body_acc_y_train.txt', 'body_acc_z_train.txt',
'body_gyro_x_train.txt', 'body_gyro_y_train.txt', 'body_gyro_z_train.txt',
'total_acc_x_train.txt', 'total_acc_y_train.txt', 'total_acc_z_train.txt']
INPUT_FILES_TEST = ['body_acc_x_test.txt', 'body_acc_y_test.txt', 'body_acc_z_test.txt',
'body_gyro_x_test.txt', 'body_gyro_y_test.txt', 'body_gyro_z_test.txt',
'total_acc_x_test.txt', 'total_acc_y_test.txt', 'total_acc_z_test.txt']
LABELFILE_TRAIN = '../datasets/UCI_HAR/train/y_train.txt'
LABELFILE_TEST = '../datasets/UCI_HAR/test/y_test.txt'
train_signals, test_signals = [], []
for input_file in INPUT_FILES_TRAIN:
signal = read_signals(INPUT_FOLDER_TRAIN + input_file)
train_signals.append(signal)
train_signals = np.transpose(np.array(train_signals), (1, 2, 0))
for input_file in INPUT_FILES_TEST:
signal = read_signals(INPUT_FOLDER_TEST + input_file)
test_signals.append(signal)
test_signals = np.transpose(np.array(test_signals), (1, 2, 0))
train_labels = read_labels(LABELFILE_TRAIN)
test_labels = read_labels(LABELFILE_TEST)
[no_signals_train, no_steps_train, no_components_train] = np.shape(train_signals)
[no_signals_test, no_steps_test, no_components_test] = np.shape(test_signals)
no_labels = len(np.unique(train_labels[:]))
print("The train dataset contains {} signals, each one of length {} and {} components ".format(no_signals_train, no_steps_train, no_components_train))
print("The test dataset contains {} signals, each one of length {} and {} components ".format(no_signals_test, no_steps_test, no_components_test))
print("The train dataset contains {} labels, with the following distribution:\n {}".format(np.shape(train_labels)[0], Counter(train_labels[:])))
print("The test dataset contains {} labels, with the following distribution:\n {}".format(np.shape(test_labels)[0], Counter(test_labels[:])))
train_signals, train_labels = randomize(train_signals, np.array(train_labels))
test_signals, test_labels = randomize(test_signals, np.array(test_labels))
"""
Explanation: 0. Loading the signals from file
End of explanation
"""
N = 128
f_s = 50
t_n = 2.56
T = t_n / N
sample_rate = 1 / f_s
denominator = 10
signal_no = 15
signals = train_signals[signal_no, :, :]
signal = signals[:, 3]
label = train_labels[signal_no]
activity_name = activities_description[label]
"""
Explanation: 1. Visualizations
End of explanation
"""
f_values, fft_values = get_fft_values(signal, T, N, f_s)
plt.plot(f_values, fft_values, linestyle='-', color='blue')
plt.xlabel('Frequency [Hz]', fontsize=16)
plt.ylabel('Amplitude', fontsize=16)
plt.title("Frequency domain of the signal", fontsize=16)
plt.show()
"""
Explanation: 1a. Visualization of the FFT
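A minimal sketch of what a helper like `get_fft_values` is assumed to return (the helper itself lives in `siml`): the one-sided amplitude spectrum of a real signal, which puts a pure tone's energy at its frequency:

```python
import numpy as np

f_s, N = 50, 128                       # same sampling setup as above
t = np.arange(N) / f_s
sig = np.sin(2 * np.pi * 5.0 * t)      # a pure 5 Hz tone

fft_values = 2.0 / N * np.abs(np.fft.rfft(sig))[:N // 2]
f_values = np.fft.rfftfreq(N, d=1.0 / f_s)[:N // 2]

peak_freq = f_values[np.argmax(fft_values)]
assert abs(peak_freq - 5.0) < f_s / N  # peak lands within one bin of 5 Hz
```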
End of explanation
"""
f_values, psd_values = get_psd_values(signal, T, N, f_s)
plt.plot(f_values, psd_values, linestyle='-', color='blue')
plt.xlabel('Frequency [Hz]')
plt.ylabel('PSD [V**2 / Hz]')
plt.show()
"""
Explanation: 1b. Visualization of the PSD
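A numpy-only sketch of the quantity a PSD helper estimates, via a one-sided periodogram $|X(f)|^2/(f_s N)$; a noisy 5 Hz tone should concentrate its power near 5 Hz:

```python
import numpy as np

f_s, N = 50, 1024
t = np.arange(N) / f_s
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.normal(size=N)

X = np.fft.rfft(sig)
psd_values = np.abs(X) ** 2 / (f_s * N)
psd_values[1:-1] *= 2                  # fold negative frequencies into one side
f_values = np.fft.rfftfreq(N, d=1.0 / f_s)

assert abs(f_values[np.argmax(psd_values)] - 5.0) < 0.5
```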
End of explanation
"""
t_values, autocorr_values = get_autocorr_values(signal, T, N, f_s)
plt.plot(t_values, autocorr_values, linestyle='-', color='blue')
plt.xlabel('time delay [s]')
plt.ylabel('Autocorrelation amplitude')
plt.show()
"""
Explanation: 1c. Visualization of the Autocorrelation
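A standalone sketch of a normalized autocorrelation, the quantity `get_autocorr_values` is assumed to compute; a periodic signal correlates strongly with itself at lags equal to its period:

```python
import numpy as np

f_s = 50
t = np.arange(256) / f_s
sig = np.sin(2 * np.pi * 5.0 * t)           # period = 0.2 s = 10 samples

autocorr = np.correlate(sig, sig, mode='full')[sig.size - 1:]
autocorr = autocorr / autocorr[0]           # normalize so lag 0 equals 1

assert np.isclose(autocorr[0], 1.0)
assert autocorr[10] > 0.9                   # strong peak one period later
```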
End of explanation
"""
labels = ['x-component', 'y-component', 'z-component']
colors = ['r', 'g', 'b']
suptitle = "Different signals for the activity: {}"
xlabels = ['Time [sec]', 'Freq [Hz]', 'Freq [Hz]', 'Time lag [s]']
ylabel = 'Amplitude'
axtitles = [['Acceleration', 'Gyro', 'Total acceleration'],
['FFT acc', 'FFT gyro', 'FFT total acc'],
['PSD acc', 'PSD gyro', 'PSD total acc'],
['Autocorr acc', 'Autocorr gyro', 'Autocorr total acc']
]
list_functions = [get_values, get_fft_values, get_psd_values, get_autocorr_values]
f, axarr = plt.subplots(nrows=4, ncols=3, figsize=(12,12))
f.suptitle(suptitle.format(activity_name), fontsize=16)
for row_no in range(0,4):
for comp_no in range(0,9):
col_no = comp_no // 3
plot_no = comp_no % 3
color = colors[plot_no]
label = labels[plot_no]
axtitle = axtitles[row_no][col_no]
xlabel = xlabels[row_no]
value_retriever = list_functions[row_no]
ax = axarr[row_no, col_no]
ax.set_title(axtitle, fontsize=16)
ax.set_xlabel(xlabel, fontsize=16)
if col_no == 0:
ax.set_ylabel(ylabel, fontsize=16)
signal_component = signals[:, comp_no]
x_values, y_values = value_retriever(signal_component, T, N, f_s)
ax.plot(x_values, y_values, linestyle='-', color=color, label=label)
if row_no > 0:
max_peak_height = 0.1 * np.nanmax(y_values)
indices_peaks = detect_peaks(y_values, mph=max_peak_height)
ax.scatter(x_values[indices_peaks], y_values[indices_peaks], c=color, marker='*', s=60)
if col_no == 2:
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.tight_layout()
plt.subplots_adjust(top=0.90, hspace=0.6)
plt.show()
"""
Explanation: 1d. Visualization of all transformations on all components
End of explanation
"""
X_train, Y_train = extract_features_labels(train_signals, train_labels, T, N, f_s, denominator)
X_test, Y_test = extract_features_labels(test_signals, test_labels, T, N, f_s, denominator)
"""
Explanation: 2. Extract Features from the signals
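The feature idea (the real `extract_features_labels` lives in `siml`, so treat this as a hedged sketch): for each transformed signal, take the locations and heights of its first few peaks as a fixed-length feature vector, zero-padding when fewer peaks exist:

```python
import numpy as np

def first_peaks_features(x_values, y_values, num_peaks=5):
    # a "peak" here = a sample strictly larger than both neighbors
    idx = [i for i in range(1, len(y_values) - 1)
           if y_values[i] > y_values[i - 1] and y_values[i] > y_values[i + 1]]
    idx = idx[:num_peaks]
    feats = [x_values[i] for i in idx] + [y_values[i] for i in idx]
    feats += [0] * (2 * num_peaks - len(feats))   # zero-pad to a fixed length
    return feats

y = np.array([0, 1, 0, 3, 0, 2, 0], dtype=float)
x = np.arange(y.size, dtype=float)
assert len(first_peaks_features(x, y)) == 10      # 5 x-positions + 5 heights
```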
End of explanation
"""
clf = RandomForestClassifier(n_estimators=1000)
clf.fit(X_train, Y_train)
print("Accuracy on training set is : {}".format(clf.score(X_train, Y_train)))
print("Accuracy on test set is : {}".format(clf.score(X_test, Y_test)))
Y_test_pred = clf.predict(X_test)
print(classification_report(Y_test, Y_test_pred))
"""
Explanation: 3. Classification of the signals
3.1 Try classification with Random Forest
End of explanation
"""
#See https://github.com/taspinar/siml
dict_results = batch_classify(X_train, Y_train, X_test, Y_test)
display_dict_models(dict_results)
"""
Explanation: 3.2 Try out several classifiers (to see which one initially scores best)
End of explanation
"""
from sklearn.ensemble import GradientBoostingClassifier

GDB_params = {
'n_estimators': [100, 200, 5000],
'learning_rate': [0.5, 0.1, 0.01, 0.001],
'criterion': ['friedman_mse', 'mse']
}
for n_est in GDB_params['n_estimators']:
for lr in GDB_params['learning_rate']:
for crit in GDB_params['criterion']:
clf = GradientBoostingClassifier(n_estimators=n_est,
learning_rate = lr,
criterion = crit)
clf.fit(X_train, Y_train)
train_score = clf.score(X_train, Y_train)
test_score = clf.score(X_test, Y_test)
print("For ({}, {}, {}) - train, test score: \t {:.5f} \t-\t {:.5f}".format(
n_est, lr, crit[:4], train_score, test_score)
)
"""
Explanation: 3.3 Hyperparameter optimization of best classifier
End of explanation
"""
|
Kaggle/learntools | notebooks/data_cleaning/raw/ex3.ipynb | apache-2.0 | from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex3 import *
print("Setup Complete")
"""
Explanation: In this exercise, you'll apply what you learned in the Parsing dates tutorial.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
"""
# modules we'll use
import pandas as pd
import numpy as np
import seaborn as sns
import datetime
# read in our data
earthquakes = pd.read_csv("../input/earthquake-database/database.csv")
# set seed for reproducibility
np.random.seed(0)
"""
Explanation: Get our environment set up
The first thing we'll need to do is load in the libraries and dataset we'll be using. We'll be working with a dataset containing information on earthquakes that occured between 1965 and 2016.
End of explanation
"""
# TODO: Your code here!
"""
Explanation: 1) Check the data type of our date column
You'll be working with the "Date" column from the earthquakes dataframe. Investigate this column now: does it look like it contains dates? What is the dtype of the column?
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#_COMMENT_IF(PROD)_
q1.hint()
#%%RM_IF(PROD)%%
print(earthquakes['Date'].head())
print(earthquakes['Date'].dtype)
"""
Explanation: Once you have answered the question above, run the code cell below to get credit for your work.
End of explanation
"""
earthquakes[3378:3383]
"""
Explanation: 2) Convert our date columns to datetime
Most of the entries in the "Date" column follow the same format: "month/day/four-digit year". However, the entry at index 3378 follows a completely different pattern. Run the code cell below to see this.
End of explanation
"""
date_lengths = earthquakes.Date.str.len()
date_lengths.value_counts()
"""
Explanation: This does appear to be an issue with data entry: ideally, all entries in the column have the same format. We can get an idea of how widespread this issue is by checking the length of each entry in the "Date" column.
End of explanation
"""
indices = np.where(date_lengths == 24)[0]
print('Indices with corrupted data:', indices)
earthquakes.loc[indices]
"""
Explanation: Looks like there are two more rows that have a date in a different format. Run the code cell below to obtain the indices corresponding to those rows and print the data.
End of explanation
"""
# TODO: Your code here
# Check your answer
q2.check()
#%%RM_IF(PROD)%%
q2.assert_check_failed()
#%%RM_IF(PROD)%%
earthquakes['date_parsed'] = earthquakes["Date"]
q2.assert_check_failed()
#%%RM_IF(PROD)%%
earthquakes.loc[3378, "Date"] = "02/23/1975"
earthquakes.loc[7512, "Date"] = "04/28/1985"
earthquakes.loc[20650, "Date"] = "03/13/2011"
earthquakes['date_parsed'] = pd.to_datetime(earthquakes['Date'], format="%m/%d/%Y")
q2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
"""
Explanation: Given all of this information, it's your turn to create a new column "date_parsed" in the earthquakes dataset that has correctly parsed dates in it.
Note: When completing this problem, you are allowed to (but are not required to) amend the entries in the "Date" and "Time" columns. Do not remove any rows from the dataset.
End of explanation
"""
# try to get the day of the month from the date column
day_of_month_earthquakes = ____
# Check your answer
q3.check()
#%%RM_IF(PROD)%%
day_of_month_earthquakes = earthquakes['date_parsed'].dt.month
q3.assert_check_failed()
#%%RM_IF(PROD)%%
day_of_month_earthquakes = earthquakes['date_parsed'].dt.day
q3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
"""
Explanation: 3) Select the day of the month
Create a Pandas Series day_of_month_earthquakes containing the day of the month from the "date_parsed" column.
End of explanation
"""
# TODO: Your code here!
#%%RM_IF(PROD)%%
# remove na's
day_of_month_earthquakes = day_of_month_earthquakes.dropna()
# plot the day of the month
sns.distplot(day_of_month_earthquakes, kde=False, bins=31)
"""
Explanation: 4) Plot the day of the month to check the date parsing
Plot the days of the month from your earthquake dataset.
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
q4.check()
# Line below will give you a hint
#_COMMENT_IF(PROD)_
q4.hint()
"""
Explanation: Does the graph make sense to you?
End of explanation
"""
volcanos = pd.read_csv("../input/volcanic-eruptions/database.csv")
"""
Explanation: (Optional) Bonus Challenge
For an extra challenge, you'll work with a Smithsonian dataset that documents Earth's volcanoes and their eruptive history over the past 10,000 years.
Run the next code cell to load the data.
End of explanation
"""
volcanos['Last Known Eruption'].sample(5)
"""
Explanation: Try parsing the column "Last Known Eruption" from the volcanos dataframe. This column contains a mixture of text ("Unknown") and years both before the common era (BCE, also known as BC) and in the common era (CE, also known as AD).
End of explanation
"""
|
hcchengithub/project-k | notebooks/tutor.ipynb | mit | # In case you are not familiar with Jupyter Notebook, click here and press Ctrl+Enter to run this cell.
import projectk as vm
vm
"""
Explanation: An introduction to the project-k FORTH kernel
project-k is a very small FORTH programming language kernel, supporting JavaScript and Python, open-sourced on GitHub at https://github.com/hcchengithub/project-k. We are going to use this FORTH kernel to build our own tiny FORTH programming language system.
Read only
Read this tutorial on GitHub https://github.com/hcchengithub/project-k/blob/master/notebooks/tutor.ipynb
How to play
Use an online Jupyter Notebook — I recommend notebooks.ai, while Google Colab, Microsoft Azure Notebooks, and more are available out there — so you don't need to install anything. Click [Download Zip] from the GitHub project-k page. We only need this notebook, notebooks\tutor.ipynb, and the project-k source code file for Python, projectk.py, which is only 20k bytes including a lot of comments. Choose an online Jupyter notebook you like, create an account, upload the minimum two files, double click or run this notebook tutor.ipynb, and start playing.
<img src="projectk.jpg">
Import the FORTH kernel
The python statement below imports projectk.py and gives it an arbitrary nick name vm. As shown below we can see that vm is a python module and it is an instance of projectk.py which is our FORTH kernel.
End of explanation
"""
print(dir(vm))
"""
Explanation: Python's standard function dir(obj) gets all member names of an object. Let's see what is in the FORTH kernel vm:
End of explanation
"""
vm.stack
"""
Explanation: I only want you to see that there are very few properties and methods in this FORTH kernel object and many of them are conventional FORTH tokens like code, endcode, comma, compiling, dictionary, here, last, stack, pop, push, tos, rpop, rstack, rtos, tib, ntib, tick, and words.
Now let's play
The property vm.stack is the FORTH data stack which is empty at first.
End of explanation
"""
vm.dictate("123")
vm.stack
"""
Explanation: The vm.dictate() method is how the project-k VM receives your commands (a string). It is actually also the way we feed it an entire FORTH source code file. Everything given to vm.dictate() is like a command line you type to the FORTH system, even something as simple as a single number:
End of explanation
"""
vm.dictate("456").stack
"""
Explanation: The first line above dictates project-k VM to push 123 onto the data stack and the second line views the data stack. We can even cascade these two lines into one:
End of explanation
"""
vm.dictate("code hi! print('Hello World!!') end-code"); # define the "hi!" comamnd where print() is a standard python function
vm.dictate("hi!");
"""
Explanation: because vm.dictate() returns the vm object itself.
project-k VM knows only two words 'code' and 'end-code' at first
Let's define a FORTH command (or 'word') that prints "Hello World!!":
End of explanation
"""
vm.dictate("code words print([w.name for w in vm.words['forth'][1:]]) end-code")
vm.dictate("words");
"""
Explanation: Do you know what we have just done? We defined a new FORTH code word! By the way, we can use any character in a word name except white spaces. This is a FORTH convention.
Define the 'words' command to view all words
I'd like to see all the words we have so far. The FORTH command 'words' is what we want now, but this tiny FORTH system does not have it yet. We have to define it:
End of explanation
"""
vm.dictate("code + push(pop(1)+pop()) end-code"); # pop two operands from FORTH data stack and push back the result
vm.dictate("code .s print(stack) end-code"); # print the FORTH data stack
vm.dictate('code s" push(nexttoken(\'"\'));nexttoken() end-code'); # get a string
vm.dictate('words'); # list all recent words
"""
Explanation: In the definition above, vm.words is a Python dictionary (not a FORTH dictionary) defined as a property of the project-k VM object. It is something like an array of all recent words in the current vocabulary, named forth, which is the only vocabulary that comes with the FORTH kernel. A FORTH 'vocabulary' here is simply a key in a Python dictionary key:value pair.
We have only 4 words so far, as the new words command shows above. 'code' and 'end-code' are built into the FORTH kernel; 'hi!' and 'words' were defined above.
Define '+' and conventional FORTH words '.s' , and 's"'
The next exercise is to define some more FORTH words.
End of explanation
"""
vm.stack = [] # clear the data stack
vm.dictate(' s" Forth "') # get the string 'Forth '
vm.dictate(' s" is the easist "') # get the string 'is the easist '
vm.dictate(' s" programming language."')   # get the string 'programming language.'
vm.dictate('.s'); # view the data stack
print(vm.dictate('+').stack) # concatenate top two strings
print(vm.dictate('+').stack) # concatenate the reset
"""
Explanation: This example demonstrates how to use the built-in methods push(), pop(), nexttoken() and the stack property (or global variable). As shown in the definitions above, we can omit the vm. prefix, so vm.push and vm.stack are simplified to push and stack, because code ... end-code definitions run right in the VM name space. Now let's try these new words:
End of explanation
"""
print(vm.dictate('123 456 + ').pop()); # Push 123, push 456, add them
print(vm.dictate('1.23 45.6 + ').pop());
"""
Explanation: The + command can certainly concatenate strings and can also add numbers, because Python's + operator works that way. Please try it with integers and floating point numbers:
End of explanation
"""
|
tkurfurst/deep-learning | transfer-learning/Transfer_Learning_Solution.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training modern ConvNets on huge datasets like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread  # note: removed in SciPy >= 1.2; matplotlib.pyplot.imread is a drop-in replacement here
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/sunflowers/1008566138_6927679c8a.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
Kaggle/learntools | notebooks/ml_intermediate/raw/ex2.ipynb | apache-2.0 | # Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex2 import *
print("Setup Complete")
"""
Explanation: Now it's your turn to test your new knowledge of handling missing values. You'll probably find it makes a big difference.
Setup
The questions will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
"""
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
X_full.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X_full.SalePrice
X_full.drop(['SalePrice'], axis=1, inplace=True)
# To keep things simple, we'll use only numerical predictors
X = X_full.select_dtypes(exclude=['object'])
X_test = X_test_full.select_dtypes(exclude=['object'])
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
"""
Explanation: In this exercise, you will work with data from the Housing Prices Competition for Kaggle Learn Users.
Run the next code cell without changes to load the training and validation sets in X_train, X_valid, y_train, and y_valid. The test set is loaded in X_test.
End of explanation
"""
X_train.head()
"""
Explanation: Use the next code cell to print the first five rows of the data.
End of explanation
"""
# Shape of training data (num_rows, num_columns)
print(X_train.shape)
# Number of missing values in each column of training data
missing_val_count_by_column = (X_train.isnull().sum())
print(missing_val_count_by_column[missing_val_count_by_column > 0])
"""
Explanation: You can already see a few missing values in the first several rows. In the next step, you'll obtain a more comprehensive understanding of the missing values in the dataset.
Step 1: Preliminary investigation
Run the code cell below without changes.
End of explanation
"""
# Fill in the line below: How many rows are in the training data?
num_rows = ____
# Fill in the line below: How many columns in the training data
# have missing values?
num_cols_with_missing = ____
# Fill in the line below: How many missing entries are contained in
# all of the training data?
tot_missing = ____
# Check your answers
step_1.a.check()
#%%RM_IF(PROD)%%
num_rows = 1168
num_cols_with_missing = 3
tot_missing = 212 + 6 + 58
step_1.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.a.hint()
#_COMMENT_IF(PROD)_
step_1.a.solution()
"""
Explanation: Part A
Use the above output to answer the questions below.
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
step_1.b.check()
#_COMMENT_IF(PROD)_
step_1.b.hint()
"""
Explanation: Part B
Considering your answers above, what do you think is likely the best approach to dealing with the missing values?
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
return mean_absolute_error(y_valid, preds)
"""
Explanation: To compare different approaches to dealing with missing values, you'll use the same score_dataset() function from the tutorial. This function reports the mean absolute error (MAE) from a random forest model.
End of explanation
"""
# Fill in the line below: get names of columns with missing values
____ # Your code here
# Fill in the lines below: drop columns in training and validation data
reduced_X_train = ____
reduced_X_valid = ____
# Check your answers
step_2.check()
#%%RM_IF(PROD)%%
# Get names of columns with missing values
cols_with_missing = [col for col in X_train.columns
if X_train[col].isnull().any()]
# Drop columns in training and validation data
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
"""
Explanation: Step 2: Drop columns with missing values
In this step, you'll preprocess the data in X_train and X_valid to remove columns with missing values. Set the preprocessed DataFrames to reduced_X_train and reduced_X_valid, respectively.
End of explanation
"""
print("MAE (Drop columns with missing values):")
print(score_dataset(reduced_X_train, reduced_X_valid, y_train, y_valid))
"""
Explanation: Run the next code cell without changes to obtain the MAE for this approach.
End of explanation
"""
from sklearn.impute import SimpleImputer
# Fill in the lines below: imputation
____ # Your code here
imputed_X_train = ____
imputed_X_valid = ____
# Fill in the lines below: imputation removed column names; put them back
imputed_X_train.columns = ____
imputed_X_valid.columns = ____
# Check your answers
step_3.a.check()
#%%RM_IF(PROD)%%
# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))
step_3.a.assert_check_failed()
#%%RM_IF(PROD)%%
# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.fit_transform(X_valid))
# Imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns
step_3.a.assert_check_failed()
#%%RM_IF(PROD)%%
# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))
# Imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns
step_3.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.a.hint()
#_COMMENT_IF(PROD)_
step_3.a.solution()
"""
Explanation: Step 3: Imputation
Part A
Use the next code cell to impute missing values with the mean value along each column. Set the preprocessed DataFrames to imputed_X_train and imputed_X_valid. Make sure that the column names match those in X_train and X_valid.
End of explanation
"""
print("MAE (Imputation):")
print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid))
"""
Explanation: Run the next code cell without changes to obtain the MAE for this approach.
End of explanation
"""
# Check your answer (Run this code cell to receive credit!)
step_3.b.check()
#_COMMENT_IF(PROD)_
step_3.b.hint()
"""
Explanation: Part B
Compare the MAE from each approach. Does anything surprise you about the results? Why do you think one approach performed better than the other?
End of explanation
"""
# Preprocessed training and validation features
final_X_train = ____
final_X_valid = ____
# Check your answers
step_4.a.check()
#%%RM_IF(PROD)%%
# Imputation
final_imputer = SimpleImputer(strategy='median')
final_X_train = pd.DataFrame(final_imputer.fit_transform(X_train))
final_X_valid = pd.DataFrame(final_imputer.transform(X_valid))
# Imputation removed column names; put them back
final_X_train.columns = X_train.columns
final_X_valid.columns = X_valid.columns
step_4.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.a.hint()
#_COMMENT_IF(PROD)_
step_4.a.solution()
"""
Explanation: Step 4: Generate test predictions
In this final step, you'll use any approach of your choosing to deal with missing values. Once you've preprocessed the training and validation features, you'll train and evaluate a random forest model. Then, you'll preprocess the test data before generating predictions that can be submitted to the competition!
Part A
Use the next code cell to preprocess the training and validation data. Set the preprocessed DataFrames to final_X_train and final_X_valid. You can use any approach of your choosing here! In order for this step to be marked as correct, you need only ensure:
- the preprocessed DataFrames have the same number of columns,
- the preprocessed DataFrames have no missing values,
- final_X_train and y_train have the same number of rows, and
- final_X_valid and y_valid have the same number of rows.
End of explanation
"""
# Define and fit model
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(final_X_train, y_train)
# Get validation predictions and MAE
preds_valid = model.predict(final_X_valid)
print("MAE (Your approach):")
print(mean_absolute_error(y_valid, preds_valid))
"""
Explanation: Run the next code cell to train and evaluate a random forest model. (Note that we don't use the score_dataset() function above, because we will soon use the trained model to generate test predictions!)
End of explanation
"""
# Fill in the line below: preprocess test data
final_X_test = ____
# Fill in the line below: get test predictions
preds_test = ____
# Check your answers
step_4.b.check()
#%%RM_IF(PROD)%%
# Preprocess test data
final_X_test = pd.DataFrame(final_imputer.transform(X_test))
# Get test predictions
preds_test = model.predict(final_X_test)
step_4.b.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.b.hint()
#_COMMENT_IF(PROD)_
step_4.b.solution()
"""
Explanation: Part B
Use the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to final_X_test.
Then, use the preprocessed test features and the trained model to generate test predictions in preds_test.
In order for this step to be marked correct, you need only ensure:
- the preprocessed test DataFrame has no missing values, and
- final_X_test has the same number of rows as X_test.
End of explanation
"""
# Save test predictions to file
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
"""
Explanation: Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition.
End of explanation
"""
|
Upward-Spiral-Science/claritycontrol | code/a06_test_assumptions.ipynb | apache-2.0 | import os
PATH="/Users/david/Desktop/CourseWork/TheArtOfDataScience/claritycontrol/code/scripts/" # use your own path
os.chdir(PATH)
import clarity as cl # I wrote this module for easier operations on data
import clarity.resources as rs
import csv,gc # garbage memory collection :)
import numpy as np
import matplotlib.pyplot as plt
import jgraph as ig
%matplotlib inline
# settings for histogram
BINS=32 # histogram bins
RANGE=(10.0,300.0)
"""
Explanation: HW6 Testing Assumptions
Heavily borrowed text materials and formatting from grelliam
Testing Assumptions
State assumptions
Check assumptions (with figures)
residuals
correlations
# of modes
Step 1: State assumptions
We extract the histogram data out of the raw clarity scanned data, given different conditions to the subjects.
We assume that histograms are sampled according to: $x_i \stackrel{iid}{\sim} F$. This is both an independent and identical assumption.
We assume that the data poinst are independent: $F_{X|0}=Norm(\mu_{0},\sigma_{0})^{V\times V}$.
We assume there is a class conditional difference across conditions={Control, Cocaine, Fear}.
In addition, we assume that any other differences of the subjects such as genders, ages will not or have limit affects to the data. (We cannot test on this, because we do not have access to that information.)
Step 2: Check assumptions
For independent histograms, check that off diagonal covariance is approximately 0. <br/>
$x_i \stackrel{iid}{\sim} F$<br/>
$(x_1, x_2, ..., x_n) \sim F = \prod_i^n F_i$ <br/>
$F_i = F_j, \forall i,j$
For identical histograms, check the optimal number of clusters and see if that is 1. <br/>
$F = \prod_j^J F_j, J < n$ <br/>
$\prod_j^J w_jF_j(\theta)$ <br/>
End of explanation
"""
# Load X
X = np.loadtxt("../data/hist/features.csv",delimiter=',')
print X.shape
# Load Y
y = np.array([0,0,0,1,1,1,1,1,2,2,2,2])
"""
Explanation: Histogram data preparation and Scale data
If you haven't done this before, please refer to the previous homework for data preparation and scaling.
Setup Step
End of explanation
"""
vectorized = X
covar = np.cov(vectorized)
plt.figure(figsize=(7,7))
plt.imshow(covar)
plt.title('Covariance of Clarity Histograms datasets')
plt.colorbar()
plt.show()
diag = covar.diagonal()*np.eye(covar.shape[0])
hollow = covar-diag
d_det = np.linalg.det(diag)
h_det = np.linalg.det(hollow)
plt.figure(figsize=(11,8))
plt.subplot(121)
plt.imshow(diag)
plt.clim([0, np.max(covar)])
plt.title('Determinant of on-diagonal: ' + str(d_det))
plt.subplot(122)
plt.imshow(hollow)
plt.clim([0, np.max(covar)])
plt.title('Determinant of off-diagonal: ' + str(h_det))
plt.show()
print "Ratio of on- and off-diagonal determinants: " + str(d_det/h_det)
"""
Explanation: Independent Histogram Assumption
End of explanation
"""
import sklearn.mixture
i = np.linspace(1,12,12,dtype='int')
print i
bic = np.array(())
for idx in i:
print "Fitting and evaluating model with " + str(idx) + " clusters."
gmm = sklearn.mixture.GMM(n_components=idx,n_iter=1000,covariance_type='diag')
gmm.fit(vectorized)
bic = np.append(bic, gmm.bic(vectorized))
plt.figure(figsize=(7,7))
plt.plot(i, 1.0/bic)
plt.title('BIC')
plt.ylabel('score')
plt.xlabel('number of clusters')
plt.show()
print bic
"""
Explanation: From the above, we conclude that the assumption that the histograms were independent is most likely true.
This is because the cross-graph covariance matrix is not highly influenced by the off-diagonal components of the covariance matrix.
Identical Histogram Assumption
End of explanation
"""
vect = X.T
covar = np.cov(vect)
plt.figure(figsize=(7,7))
plt.imshow(covar)
plt.title('Covariance of Clarity Histogram dataset')
plt.colorbar()
plt.show()
diag = covar.diagonal()*np.eye(covar.shape[0])
hollow = covar-diag
d_det = np.sum(diag)
h_det = np.sum(hollow)
plt.figure(figsize=(11,8))
plt.subplot(121)
plt.imshow(diag)
plt.clim([0, np.max(covar)])
plt.title('Sum of on-diagonal: ' + str(d_det))
plt.subplot(122)
plt.imshow(hollow)
plt.clim([0, np.max(covar)])
plt.title('Sum of off-diagonal: ' + str(h_det))
plt.show()
print "Ratio of on- and off-diagonal covariance sums: " + str(d_det/h_det)
"""
Explanation: From the above we observe that our data most likely was not sampled identically from one distribution. This is an odd shape for a BIC curve, and thus we must do more investigation. This curve implies that the larger the number of clusters, the better.
Independent Histogram Data Points Assumption
End of explanation
"""
import scipy.stats as ss
prob = 1.0*np.sum(1.0*(vectorized>0),1)/64
vals = ss.linregress(prob, y)
m = vals[0]
c = vals[1]
def comp_value(m, c, data):
return m.T*data + c
resi = np.array(())
for idx, subj in enumerate(y):
temp = comp_value(m, c, prob[idx])
resi = np.append(resi, subj - temp)
plt.figure(figsize=(7,7))
plt.scatter(prob, resi)
plt.title('Residual assignment error')
plt.xlabel('edge probability')
plt.ylabel('error')
plt.show()
"""
Explanation: The edges are not independent of one another because the ratio of on- to off-diagonal covariance is relatively small.
Class Conditional Histogram Probability Assumption
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines_vertex.ipynb | apache-2.0 | !pip3 install --user google-cloud-pipeline-components==0.1.1 --upgrade
"""
Explanation: Vertex pipelines
Learning Objectives:
Use components from google_cloud_pipeline_components to create a Vertex Pipeline which will
1. train a custom model on Vertex AI
1. create an endpoint to host the model
1. upload the trained model, and
1. deploy the uploaded model to the endpoint for serving
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components in conjunction with an experimental run_as_aiplatform_custom_job method, to build a Vertex Pipelines workflow that trains a custom model, uploads the model, creates an endpoint, and deploys the model to the endpoint.
We'll use the kfp.v2.google.experimental.run_as_aiplatform_custom_job method to train a custom model.
The google cloud pipeline components are documented here. From this github page you can also find other examples of how to build a Vertex pipeline with AutoML here. You can see other available methods from the Vertex AI SDK.
Set up your local development environment and install necessary packages
End of explanation
"""
import os
from datetime import datetime
import kfp
from google.cloud import aiplatform
from google_cloud_pipeline_components import aiplatform as gcc_aip
from kfp.v2 import compiler
from kfp.v2.dsl import component
from kfp.v2.google import experimental
"""
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Import libraries and define constants
End of explanation
"""
print(f"KFP SDK version: {kfp.__version__}")
"""
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
"""
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
PIPELINE_ROOT = f"gs://{BUCKET}/pipeline_root"
print(PIPELINE_ROOT)
"""
Explanation: Set your environment variables
Next, we'll set up our project variables, like GCP project ID, the bucket and region. Also, to avoid name collisions between resources created, we'll create a timestamp and append it onto the name of resources we create in this lab.
End of explanation
"""
!gsutil ls -la gs://{BUCKET}/pipeline_root
"""
Explanation: We'll save pipeline artifacts in a directory called pipeline_root within our bucket. Validate access to your Cloud Storage bucket by examining its contents. It should be empty at this stage.
End of explanation
"""
@component
def training_op(input1: str):
print(f"VertexAI pipeline: {input1}")
"""
Explanation: Give your default service account storage bucket access
This pipeline will read .csv files from Cloud storage for training and will write model checkpoints and artifacts to a specified bucket. So, we need to give our default service account storage.objectAdmin access. You can do this by running the command below in Cloud Shell:
bash
PROJECT=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects list --filter="name=$PROJECT" --format="value(PROJECT_NUMBER)")
gcloud projects add-iam-policy-binding $PROJECT \
--member="serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
Note, it may take some time for the permissions to propagate to the service account. You can confirm the status from the IAM page here.
Define a pipeline that uses the components
We'll start by defining a component with which the custom training job is run. For this example, this component doesn't do anything (but run a print statement).
End of explanation
"""
# Output directory and job_name
OUTDIR = f"gs://{BUCKET}/taxifare/trained_model_{TIMESTAMP}"
MODEL_DISPLAY_NAME = f"taxifare_{TIMESTAMP}"
PYTHON_PACKAGE_URIS = f"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE = "n1-standard-16"
REPLICA_COUNT = 1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
)
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
PYTHON_MODULE = "trainer.task"
# Model and training hyperparameters
BATCH_SIZE = 500
NUM_EXAMPLES_TO_TRAIN_ON = 10000
NUM_EVALS = 1000
NBUCKETS = 10
LR = 0.001
NNSIZE = "32 8"
# GCS paths
GCS_PROJECT_PATH = f"gs://{BUCKET}/taxifare"
DATA_PATH = f"{GCS_PROJECT_PATH}/data"
TRAIN_DATA_PATH = f"{DATA_PATH}/taxi-train*"
EVAL_DATA_PATH = f"{DATA_PATH}/taxi-valid*"
@kfp.dsl.pipeline(name="taxifare--train-upload-endpoint-deploy")
def pipeline(
project: str = PROJECT,
model_display_name: str = MODEL_DISPLAY_NAME,
):
train_task = training_op("taxifare training pipeline")
experimental.run_as_aiplatform_custom_job(
train_task,
display_name=f"pipelines-train-{TIMESTAMP}",
worker_pool_specs=[
{
"pythonPackageSpec": {
"executor_image_uri": PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,
"package_uris": [PYTHON_PACKAGE_URIS],
"python_module": PYTHON_MODULE,
"args": [
f"--eval_data_path={EVAL_DATA_PATH}",
f"--output_dir={OUTDIR}",
f"--train_data_path={TRAIN_DATA_PATH}",
f"--batch_size={BATCH_SIZE}",
f"--num_examples_to_train_on={NUM_EXAMPLES_TO_TRAIN_ON}", # noqa: E501
f"--num_evals={NUM_EVALS}",
f"--nbuckets={NBUCKETS}",
f"--lr={LR}",
f"--nnsize={NNSIZE}",
],
},
"replica_count": f"{REPLICA_COUNT}",
"machineSpec": {
"machineType": f"{MACHINE_TYPE}",
},
}
],
)
model_upload_op = gcc_aip.ModelUploadOp(
project=f"{PROJECT}",
display_name=f"pipelines-ModelUpload-{TIMESTAMP}",
artifact_uri=f"{OUTDIR}/savedmodel",
serving_container_image_uri=f"{SERVING_CONTAINER_IMAGE_URI}",
serving_container_environment_variables={"NOT_USED": "NO_VALUE"},
)
model_upload_op.after(train_task)
endpoint_create_op = gcc_aip.EndpointCreateOp(
project=f"{PROJECT}",
display_name=f"pipelines-EndpointCreate-{TIMESTAMP}",
)
model_deploy_op = gcc_aip.ModelDeployOp(
project=f"{PROJECT}",
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=f"{MODEL_DISPLAY_NAME}",
machine_type=f"{MACHINE_TYPE}",
)
"""
Explanation: Now, you define the pipeline.
The experimental.run_as_aiplatform_custom_job method takes as args the component defined above, and the list of worker_pool_specs— in this case one— with which the custom training job is configured.
See full function code here
Then, google_cloud_pipeline_components components are used to define the rest of the pipeline: upload the model, create an endpoint, and deploy the model to the endpoint. (While not shown in this example, the model deploy will create an endpoint if one is not provided).
Note that we're using the exact same code that we developed in the previous lab 1_training_at_scale_vertex.ipynb. In fact, we are pulling the same python package executor image URI that we pushed to Cloud storage in that lab. Note that we also include the SERVING_CONTAINER_IMAGE_URI since we'll need to specify that when uploading and deploying our model.
End of explanation
"""
if not os.path.isdir("vertex_pipelines"):
os.mkdir("vertex_pipelines")
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="./vertex_pipelines/train_upload_endpoint_deploy.json",
)
"""
Explanation: Compile and run the pipeline
Now, you're ready to compile the pipeline:
End of explanation
"""
pipeline_job = aiplatform.pipeline_jobs.PipelineJob(
display_name="taxifare_pipeline",
template_path="./vertex_pipelines/train_upload_endpoint_deploy.json",
pipeline_root=f"{PIPELINE_ROOT}",
project=PROJECT,
location=REGION,
)
"""
Explanation: The pipeline compilation generates the train_upload_endpoint_deploy.json job spec file.
Next, instantiate the pipeline job object:
End of explanation
"""
pipeline_job.run()
"""
Explanation: Then, you run the defined pipeline like this:
End of explanation
"""
|
MarneeDear/softwarecarpentry | python lessons/Fundamentals/Introduction.ipynb | mit | example_variable = "ljhkjhkjkgjkg"
# I can display what is inside example_variable by using
# the print command lets us do this
# try changing the value.
print (example_variable)
"""
Explanation: What is Python and why would I use it?
Python is a programming language.
A programming language is a set words you can put together to tell a computer to do something.
We like using Python in Software Carpentry Workshops for lots of reasons
It is widely used in science
Has a huge supporting community so there are lots of ways to learn and get help
In my experience it is really nice to work with because it is easier to get started than other languages.
like this Jupyter Notebook. Not a lot of languages have this kind of thing. It's great.
Even if you aren't using Python in your work, you can use Python to learn the fundamentals of programming that will apply accross languages
What are the fundamentals?
VARIABLES
We store values inside variables
a variable can hold any kind of data type or structure
we can refer to variables in other parts of our programs
End of explanation
"""
# STRINGS are one or more characters stung togehter: "Hello World!"
greeting = "Hello World!"
print ("The greeting is, %s" % greeting)
# LISTS are collections of things in a list:
list_of_characters = ['a', 'b', 'c']
print (list_of_characters)
list_of_numbers = [1, 2, 3, 4]
print (list_of_numbers )
# We can access any value in the list by it's position in the list. This is called the index
# Indexes start at 0
print ("The second value in the list is %d" % list_of_numbers[1])
# DICTIONARIES are collections of things that you can lookup like in a real dictionary:
dictionary_of_definitions = {"aardvark" : "The aardvark is a medium-sized, burrowing, nocturnal mammal native to Africa.",
"boat" : "A boat is a thing that floats on water"}
# we can find the definition of aardvark by giving the dictionary the "key" to the definition we want.
# In this case the key is the word we want to lookup
print ("The definition of aardvark is, %s" % dictionary_of_definitions["aardvark"])
dictionary_of_colors = {1 : "purple", 2 : "green", 3 : "blue"}
# This sets up an association between numbers and colors. You can do this for any pair or pairs of associations
# Sometimes dictionaries are called "associative arrays" for this reason.
# We call each association a "key-value pair".
# A dicitonary can be thought of as a list of associations known as key-value pairs
# we can find the color associated with a number by asking the dictionary the number we want to lookup
print ("The color at item 2 is %s " % dictionary_of_colors[2])
#
"""
Explanation: DATA TYPES
characters and numbers
characters: '0' '3' ';' '?' 'x' 'y' 'z'
numbers (integers and decimals): 1 2 3 100 10000 10.0 56.9 -100 -3.765
booleans: True, False
DATA STRUCTURES
strings, lists, dictionaries, and tuples
End of explanation
"""
# TUPLES are like sets of values
# an example is a set of x and y coordinates like this
tuple_of_x_y_coordinates = (3, 4)
print (tuple_of_x_y_coordinates)
# tuples can have any number of values
coordinates = (1, 7, 38, 9, 0)
print (coordinates)
icecream_flavors = ("strawberry", "vanilla", "chocolate")
print (icecream_flavors)
# you might be asking, what is the difference between a tuple and a list
# Once you have created a list you can add more items to it
# Once you have created a tuple, you cannot add more items to it
# Let's start with an empty list
add_things_list = []
add_things_list.append("one")
print (add_things_list)
add_things_list.append("two")
print (add_things_list)
# Add things to the list above: add_things_list
"""
Explanation: ASSESSMENT
Which one of these is a valid entry in a dictionary?
"key" : "value"
"GCBHSA: "ldksghdklfghfdlgkfdhgfldkghfgfhd"
"900" : "key" : "value"
1 : 10000
End of explanation
"""
# Conditionals are how we make a decision in the program
# we do this with something called an "if statement"
# Here is an example
it_is_daytime = False # this is the variable that holds the current condition of it_is_daytime which is True or False
if it_is_daytime:
print ("Have a nice day.")
else:
print ("Have a nice night.")
# before running this cell
# what will happen if we change it_is_daytime to True?
# what will happen if we change it_is_daytime to False?
# what if a condition has more than two choices? Does it have to use a boolean?
# python if-statments will let you do that with elif
# elif stands for "else if"
user_name = "Joe"
if user_name == "Marnee": #notice the double equals sign. This is used to differentiate between comparing two values and
# assigning a value to a variable
print ("Marnee likes to program in Python and F#")
elif user_name == "Frank":
print ("Frank does lots of interesting image processing in Python.")
elif user_name == "Julian":
print ("Julian is an awesome programmer at Cyverse")
else:
print ("We do not know who you are")
# for each possibility of user_name we have an if or else-if statement to check the value of the name
# and print a message accordingly.
# loops tell a program to do the same thing over and over again until a certain condition is met
# we can loop over collections of things like lists or dictionaries
# or we can create a looping structure
# LOOPING over a collection
# LIST
# If I want to print a list of fruits, I could write out each print statement like this:
print("apple")
print("banana")
print("mango")
# or I could create a list of fruit
# loop over the list
# and print each item in the list
list_of_fruit = ["apple", "banana", "mango"]
# this is how we write the loop
# "fruit" here is a variable that will hold each item in the list, the fruit, as we loop
# over the items in the list
print (">>looping>>")
for fruit in list_of_fruit:
print (fruit)
# LOOPING over a collection
# DICTIONARY
# We can do the same thing with a dictionary and each association in the dictionary
fruit_price = {"apple" : 0.10, "banana" : 0.50, "mango" : 0.75}
for key, value in fruit_price.items():
print ("%s price is %f" % (key, value))
# LOOPING a set number of times
# We can do this with range
# range automatically creates a list of numbers in a range
# here we have a list of 10 numbers starting with 0 and increasing by one until we have 10 numbers
# What will be printed
for x in range(0,10):
print (x)
# That's it. With just these data types, structures, and logic, you can build a program
# let's do that next with functions
"""
Explanation: OK great. Now what can we do with all of this?
We can plug everything together with a bit of logic and python language
and make a program that can do things like
process data
parse files
data analysis
What kind of logic are we talking about?
We are talking about something called a "logical structure"
There are two logical structures we will use
conditionals
loops
End of explanation
"""
|
pvanheus/swc15nwu-python | Loops.ipynb | gpl-3.0 | number = 5
exponent = 3
result = 1
for _ in range(exponent):
result = result * number
print result
"""
Explanation: Challenge:
Write code using a for loop and range() that takes a number and computes its exponent.
E.g. if you have 2 and 3, the answer should be 8. Use print to display the result.
Solution:
Use result to accumulate the result, so that the computation for 5^3 effectively becomes 1*5*5*5. Note that the loop variable isn't actually used (it is a simple counter), so the anonymous variable _ has been used as the loop variable.
End of explanation
"""
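As an aside (not part of the original lesson), Python also has a built-in exponent operator, **, which gives the same answer without a loop:

```python
# Built-in exponentiation gives the same result as the loop above.
number = 5
exponent = 3
result = number ** exponent
print(result)  # 125
```

The loop version is still worth understanding, since it shows how repeated multiplication accumulates into result.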
string = 'abcdef'
reversed_string = ''
length = len(string)
for count in range(length):
reversed_string = reversed_string + string[length - count - 1]
#print (length - count - 1), reversed_string
print reversed_string
"""
Explanation: Reverse a string
Given a string, print its reverse.
End of explanation
"""
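For reference, an alternative not shown in the original: Python's slice notation can reverse a string in one step.

```python
# A slice with step -1 walks the string from the end to the start.
string = 'abcdef'
reversed_string = string[::-1]
print(reversed_string)  # fedcba
```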
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a/td2a_some_nlp.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.ml - Text and machine learning
A review of statistical word embedding methods (~ NLP), or how to transform textual information into vectors in a vector space (features). Two exercises are included at the end.
End of explanation
"""
from ensae_teaching_cs.data import twitter_zip
df = twitter_zip(as_df=True)
df.head(n=2).T
df.shape
"""
Explanation: Data
We will work with Twitter data collected using the keyword macron: tweets_macron_sijetaispresident_201609.zip.
End of explanation
"""
data = df[["retweet_count", "text"]].dropna()
data.shape
"""
Explanation: 5000 tweets is not enough to draw conclusions, but it gives an idea. We drop the missing values.
End of explanation
"""
data.sort_values("retweet_count", ascending=False).head()
"""
Explanation: Building a weighting
Text is always tricky to process. It is not always obvious how to go beyond binary information: is a word present or not. Words have no numeric meaning. A list of tweets does not mean much on its own, other than sorting it by another column: retweets, for example.
End of explanation
"""
from nltk.tokenize import TweetTokenizer
tknzr = TweetTokenizer(preserve_case=False)
tokens = tknzr.tokenize(data.loc[0, "text"])
tokens
"""
Explanation: Without this column measuring popularity, we have to find a way to extract information. We therefore split the text into words and build a language model: n-grams. Suppose a tweet consists of the word sequence $(w_1, w_2, ..., w_k)$. We define its probability as:
$$P(tweet) = P(w_1, w_2) P(w_3 | w_2, w_1) P(w_4 | w_3, w_2) ... P(w_k | w_{k-1}, w_{k-2})$$
In this case, $n=3$ because we assume that the probability of a word appearing depends only on the two preceding words. Each n-gram is estimated as follows:
$$P(c | a, b) = \frac{\#(a, b, c)}{\#(a, b)}$$
That is, the number of times we observe the sequence $(a,b,c)$ divided by the number of times we observe the sequence $(a,b)$.
Tokenization
Splitting into words seems simple (tweet.split()), but text always holds surprises: handling hyphens, capitalization, extra spaces. We use a dedicated tokenizer: TweetTokenizer, or a tokenizer that takes the language into account.
End of explanation
"""
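To make the estimate $P(c \mid a, b) = \#(a,b,c) / \#(a,b)$ concrete, here is a minimal sketch using collections.Counter; the tiny corpus below is made up purely for illustration:

```python
from collections import Counter

# Made-up toy corpus, for illustration only
tweets = [["si", "j", "etais", "president"],
          ["si", "j", "avais", "le", "temps"],
          ["si", "j", "etais", "riche"]]

bigrams, trigrams = Counter(), Counter()
for words in tweets:
    bigrams.update(zip(words, words[1:]))
    trigrams.update(zip(words, words[1:], words[2:]))

def cond_prob(a, b, c):
    # P(c | a, b) = #(a, b, c) / #(a, b)
    return trigrams[(a, b, c)] / float(bigrams[(a, b)])

print(cond_prob("si", "j", "etais"))  # 2/3: ("si", "j") occurs 3 times, ("si", "j", "etais") twice
```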
from nltk.util import ngrams
generated_ngrams = ngrams(tokens, 4, pad_left=True, pad_right=True)
list(generated_ngrams)
"""
Explanation: n-grams
N-Gram-Based Text Categorization: Categorizing Text With Python
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
counts = count_vect.fit_transform(data["text"])
counts.shape
"""
Explanation: Exercise 1: compute n-grams on the tweets
Cleaning
All models are more stable without stop-words, that is, words that appear in just about any document and carry no meaning (à, de, le, la, ...). Accents and punctuation are often removed as well. Less variability means more reliable statistics.
Exercise 2: clean the tweets
See stem.
Graph structure
This time we want to build coordinates for each tweet.
Adjacency matrix
A common option is to split each expression into words and then build an expression x word matrix where each cell indicates the presence of a word in an expression.
End of explanation
"""
type(counts)
counts[:5,:5].toarray()
data.loc[0,"text"]
counts[0,:].sum()
"""
Explanation: This yields a sparse matrix where each expression is represented by a vector in which each 1 indicates that a word belongs to the set.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer()
res = tfidf.fit_transform(counts)
res.shape
res[0,:].sum()
"""
Explanation: tf-idf
This kind of technique produces very high-dimensional matrices that must be reduced. We can remove rare words or very frequent words. tf-idf is a technique that comes from search engines. It builds the same kind of matrix (same dimensions) but associates with each (document, word) pair a weight that depends on the word's overall frequency and on the number of documents containing that word.
$$idf(t) = \log \frac{\#D}{\#\{d \; | \; t \in d \}}$$
Where:
$\#D$ is the number of tweets
$\#\{d \; | \; t \in d \}$ is the number of tweets containing the word $t$
$f(t,d)$ is the number of occurrences of the word $t$ in document $d$.
$$tf(t,d) = \frac{1}{2} + \frac{1}{2} \frac{f(t,d)}{\max_{t' \in d} f(t',d)}$$
We then build the number $tfidf(t,d)$:
$$tfidf(t,d) = tf(t,d) \cdot idf(t)$$
The term $idf(t)$ favors words present in few documents, while the term $tf(t,d)$ favors terms repeated many times in the same document. We apply this to the previous matrix.
End of explanation
"""
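As a sanity check on the formulas above, the weights can be computed by hand on a made-up toy corpus (note that scikit-learn's TfidfTransformer uses a smoothed variant, so its numbers will not match exactly):

```python
import math

# Made-up toy corpus, for illustration only
docs = [["macron", "president"],
        ["macron", "vote"],
        ["vote", "blanc"]]

def idf(t):
    # log(#D / #{d | t in d})
    n_containing = sum(1 for d in docs if t in d)
    return math.log(len(docs) / float(n_containing))

def tf(t, d):
    # 1/2 + 1/2 * f(t,d) / max_{t'} f(t',d)
    max_f = max(d.count(w) for w in d)
    return 0.5 + 0.5 * d.count(t) / float(max_f)

def tfidf(t, d):
    return tf(t, d) * idf(t)

print(tfidf("blanc", docs[2]))  # tf = 1.0, idf = log(3)
```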
sentences = [tknzr.tokenize(_) for _ in data["text"]]
sentences[0]
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
model = gensim.models.Word2Vec(sentences, min_count=1)
model.wv.similar_by_word("fin")
model.wv["fin"].shape
model.wv["fin"]
"""
Explanation: Exercise 3: tf-idf without keywords
The matrix created this way is high dimensional. We need a way to reduce it, with TfidfVectorizer.
word2vec
word2vec From theory to practice
Efficient Estimation of Word Representations in Vector Space
word2vec
This algorithm starts from a representation of words as vectors in a space of dimension N = the number of distinct words. A word is represented by $(0,0, ..., 0, 1, 0, ..., 0)$. The trick is to reduce the number of dimensions by compressing with a PCA or a nonlinear neural network.
End of explanation
"""
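The similar_by_word call above ranks neighbours by cosine similarity between embedding vectors. As a sketch of that computation (with made-up vectors, not actual word2vec output):

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Made-up low-dimensional "embeddings", for illustration only
u = np.array([1.0, 2.0, 0.0])
v = np.array([2.0, 4.0, 0.0])  # same direction as u
w = np.array([0.0, 0.0, 5.0])  # orthogonal to u

print(cosine_similarity(u, v))  # 1.0
print(cosine_similarity(u, w))  # 0.0
```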
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=1000)
tfidf = tfidf_vectorizer.fit_transform(data["text"])
tfidf.shape
from sklearn.decomposition import NMF, LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(tfidf)
tf_feature_names = tfidf_vectorizer.get_feature_names()
tf_feature_names[100:103]
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[- n_top_words - 1:][::-1]]))
print()
print_top_words(lda, tf_feature_names, 10)
tr = lda.transform(tfidf)
tr[:5]
tr.shape
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda, tfidf, tfidf_vectorizer)
"""
Explanation: Tagging
The goal is to tag words, i.e., to determine whether a word is a verb, an adjective, ...
grammar
See html.grammar.
CRF
See CRF.
HMM
See HMM.
Clustering
Once we have coordinates, we can do many things.
LDA
Latent Dirichlet Allocation
LatentDirichletAllocation
End of explanation
"""
|
yevheniyc/Python | 1j_NLP_Python/ex04.ipynb | mit | from textblob import TextBlob
sent = "That’s a great starting point for developing custom search, content recommenders, and even AI applications."
blob = TextBlob(sent)
repr(blob)
"""
Explanation: Exercise 04: Noun phrase chunking
Sometimes it's useful to use noun phrase chunking to extract key phrases…
End of explanation
"""
for w in blob.words:
print(w)
"""
Explanation: First let's look at the individual keywords:
End of explanation
"""
# noun pharses: that's, custom search, content recommenders, ai - Super helpful!
for np in blob.noun_phrases:
print(np)
"""
Explanation: Contrast those results with noun phrases:
End of explanation
"""
|
turbomanage/training-data-analyst | blogs/lightning/2_sklearn.ipynb | apache-2.0 | %pip install cloudml-hypertune
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%load_ext autoreload
%aimport ltgpred
"""
Explanation: Scikit-learn from CSV
This notebook reads the CSV data written out by the Dataflow program of 1_explore.ipynb and trains a scikit-learn model on Cloud ML Engine.
End of explanation
"""
!mkdir -p preproc/csv
!gsutil cp gs://$BUCKET/lightning/preproc_0.02_32_2/csv/*-00000-* preproc/csv
import pandas as pd
df = pd.read_csv('preproc/csv/train-00000-of-00522',
header=None,
names=[
'cx', 'cy', 'lat', 'lon', 'mean_ref_sm', 'max_ref_sm',
'mean_ref_big', 'max_ref_big', 'ltg_sm', 'ltg_big', 'has_ltg'
])
del df['has_ltg']
#df = pd.get_dummies(df)
df.head()
%%bash
export CLOUDSDK_PYTHON=$(which python3)
OUTDIR=skl_trained
DATADIR=${PWD}/preproc/csv
rm -rf $OUTDIR
gcloud ml-engine local train \
--module-name=trainer.train_skl --package-path=${PWD}/ltgpred/trainer \
-- \
--job-dir=$OUTDIR --train_data=${DATADIR}/train* --eval_data=${DATADIR}/eval*
"""
Explanation: Train sklearn model locally
End of explanation
"""
%%writefile largemachine.yaml
trainingInput:
scaleTier: CUSTOM
masterType: complex_model_l
%%bash
export CLOUDSDK_PYTHON=$(which python3)
OUTDIR=gs://${BUCKET}/lightning/skl_trained
DATADIR=gs://$BUCKET/lightning/preproc_0.02_32_2/csv
JOBNAME=ltgpred_skl_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--module-name=ltgpred.trainer.train_skl --package-path=${PWD}/ltgpred --job-dir=$OUTDIR \
--region=${REGION} --scale-tier=custom --config=largemachine.yaml \
--python-version=3.5 --runtime-version=1.9 \
-- \
--train_data=${DATADIR}/train-001* --eval_data=${DATADIR}/eval-0000*
"""
Explanation: Training sklearn model on CMLE
End of explanation
"""
|
crawles/spark-nba-analytics | nba_spark.ipynb | mit | %matplotlib inline
import os
import numpy as np
import pandas as pd
import seaborn as sns
from nba_utils import draw_3pt_piechart,plot_shot_chart
from IPython.core.display import display, HTML
from IPython.core.magic import register_cell_magic, register_line_cell_magic, register_line_magic
from matplotlib import pyplot as plt
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import array, col, count, mean, sum, udf, when
from pyspark.sql.types import DoubleType, IntegerType, StringType, Row
from pyspark.sql.functions import sum, col, udf
import warnings
warnings.filterwarnings("ignore")
sns.set_style("white")
sns.set_color_codes()
"""
Explanation: Using Python and Apache Spark to Analyze the NBA and the 3-point Shot
<div style="font-size:16px;color:grey">Chris Rawles</div>
<a href="https://github.com/crawles/spark-nba-analytics"><img style="position: absolute; top: 0; right: 0; border: 0;" src="https://camo.githubusercontent.com/38ef81f8aca64bb9a64448d0d70f1308ef5341ab/68747470733a2f2f73332e616d617a6f6e6177732e636f6d2f6769746875622f726962626f6e732f666f726b6d655f72696768745f6461726b626c75655f3132313632312e706e67" alt="Fork me on GitHub" data-canonical-src="https://s3.amazonaws.com/github/ribbons/forkme_right_darkblue_121621.png"></a>
Apache Spark has become a common tool for large-scale data analysis, and in this post we show how to use the (at time of writing) newest version of Spark, Spark 2.1.0, to analyze NBA data. Specifically we will use season totals data from 1979 to 2016 and shot chart data to visualize how the NBA has continued to trend towards shooting more and more 3-point shots.
Using Python 3, we utilize the Spark Python API (PySpark) to create and analyze Spark DataFrames. In addition, we utilize both Spark SQL and the Spark DataFrame’s domain-specific language to cleanse and visualize the season total data, finally building a simple linear regression model using the spark.ml package -- Spark’s now primary machine learning API. In the second half of this notebook, we utilize shot chart data to visualize the 3-point trend.
Since the 3-point line was introduced in the 1979-80 season, there has been a steady increase in the number of 3-point shots taken. At its core, this phenomenon can be explained by the average number of points per shot. Simply put, shooting with high accuracy from three results in a very high points-per-shot value. At the end of this post we visually demonstrate the power of the 3-point shot and its very high efficiency.
The code
As a first step, we import pyspark submodules and additional packages to aid analysis including matplotlib, numpy, pandas, and seaborn.
End of explanation
"""
# set default plot settings
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (9, 5)
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 9
plt.rcParams['axes.labelsize'] = 11
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['xtick.labelsize'] = 11
plt.rcParams['ytick.labelsize'] = 11
plt.rcParams['legend.fontsize'] = 14
plt.rcParams['figure.titlesize'] = 18
# for export purposes only
display(HTML('<style>.container {width:80% !important;}</style>'))
update_title = 'document.title = "Using Python and Apache Spark to Analyze the NBA and the 3-point Shot";'
HTML('<script>{}</script>'.format(update_title))
"""
Explanation: As an optional additional step, we specify our custom plotting styles:
End of explanation
"""
df = spark.read.option('header','true')\
.option('inferSchema','true')\
.csv('data/season_totals.csv')
# we now cache the data in memory for faster access down the line
df.cache()
"""
Explanation: Season Totals: The Rise of the 3-Point Shot
Using our SparkSession object named spark, which is essentially the entry point to the Spark DataFrame API, we can read in a CSV file as a Spark DataFrame (Note: this variable spark is automatically available to us when we launch pyspark). We read in the season total data from CSV, which was sourced from Basketball Reference. The code for acquiring this data is located in the appendix.
Note: If you are running this with Hadoop, you will need to put the data into HDFS:<br>
$ hadoop fs -put /home/hadoop/spark-nba-analytics/data /user/hadoop
End of explanation
"""
df.orderBy('pts',ascending = False).limit(10).toPandas()[['yr','player','age','pts','fg3']]
"""
Explanation: Using our DataFrame df, we can view the top 10 players, sorted by number of points in an individual season. Notice we use the toPandas function to retrieve our results. The corresponding result looks cleaner for display than when using the take function.
End of explanation
"""
print(df.columns)
"""
Explanation: We can also view the column names of our DataFrame:
End of explanation
"""
# 3 point attempts / 36 minute
fga_py = df.groupBy('yr')\
.agg({'mp' : 'sum', 'fg3a' : 'sum'})\
.select(col('yr'), (36*col('sum(fg3a)')/col('sum(mp)')).alias('fg3a_p36m'))\
.orderBy('yr')
"""
Explanation: Next, using the DataFrame domain-specific language (DSL), we can analyze the number of 3-point attempts taken each season, computing the average attempts per 36 minutes for each season.
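For comparison, the same per-36 aggregation can be sketched in plain pandas (the numbers below are toy values, not real season totals):

```python
import pandas as pd

# toy season-total rows (hypothetical values, for illustration only)
df_toy = pd.DataFrame({'yr':   [1980, 1980, 1981],
                       'mp':   [1000, 2000, 1500],
                       'fg3a': [10, 50, 45]})
g = df_toy.groupby('yr').agg({'mp': 'sum', 'fg3a': 'sum'})
g['fg3a_p36m'] = 36 * g['fg3a'] / g['mp']
print(g['fg3a_p36m'])  # 1980 -> 0.72, 1981 -> 1.08
```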
End of explanation
"""
# or could use SQL
sqlContext.registerDataFrameAsTable(df, 'df')
fga_py = sqlContext.sql('''SELECT yr,
sum(fg3a)/sum(mp)*36 fg3a_p36m
FROM df GROUP BY yr
ORDER BY yr''')
"""
Explanation: Alternatively, we can utilize Spark SQL to perform the same query using plain SQL syntax:
End of explanation
"""
_df = fga_py.toPandas()
plt.plot(_df.yr,_df.fg3a_p36m, color = '#00a79c')
plt.xlabel('Year')
plt.ylabel('Number of attempts')
_=plt.title('Player average 3-point attempts (per 36 minutes)')
_=plt.annotate('3 pointer introduced', xy=(1980.5, .5), xytext=(1981, 1.1), fontsize = 12,
arrowprops=dict(facecolor='grey', shrink=0.05, linewidth = 2))
_=plt.annotate('NBA moved in\n3-point line', xy=(1993.7, 1.5), xytext=(1987, 1.79), fontsize = 12,
arrowprops=dict(facecolor='grey', shrink=0.05, linewidth = 2))
_=plt.annotate('NBA moved back\n3-point line', xy=(1998, 2.), xytext=(1998.5, 2.4), fontsize = 12,
arrowprops=dict(facecolor='grey', shrink=0.05, linewidth = 2))
plt.tight_layout()
plt.savefig('results/3_point_trend.png')
"""
Explanation: Now that we have aggregated our data and computed the average per 36 minute attempts for each season, we can query our results into a Pandas DataFrame and plot it using matplotlib.
End of explanation
"""
# train the model
t = VectorAssembler(inputCols=['yr'], outputCol = 'features')
training = t.transform(fga_py)\
.withColumn('yr',fga_py.yr)\
.withColumn('label',fga_py.fg3a_p36m)
training.toPandas().head()
"""
Explanation: We can see a steady rise in the number of 3 point attempts since the shot's introduction in the 1979-80 season. It's interesting and logical to observe the the blip in number of attempts during the period in the mid 90's when the NBA moved the line in a few feet. In addition, there has also been a more sudden rise in the number of attempts in the past 5 years.
Building a linear regression model
We can fit a linear regression model to this curve to model the increase in shot attempts and also to make a prediction for the next 5 years. Of course, this assumes a linear nature of the rate of increase and is likely a naive assumption.
Firstly, we must transform our data using the VectorAssembler function to a single column where each row consists of a feature vector. This is a requirement for the linear regression function in ML Pipelines. We first build the transformer using our single variable yr and transform our season total data using the transformer function.
End of explanation
"""
lr = LinearRegression(maxIter=10)
model = lr.fit(training)
"""
Explanation: We then build our linear regression model object using our transformed data.
End of explanation
"""
# apply model for the 1979-80 season thru 2020-21 season
training_yrs = training.select('yr').rdd.map(lambda x: x[0]).collect()
training_y = training.select('fg3a_p36m').rdd.map(lambda x: x[0]).collect()
prediction_yrs = [2017, 2018, 2019, 2020, 2021]
all_yrs = training_yrs + prediction_yrs
# built testing DataFrame
test_rdd = sc.parallelize(all_yrs)
row = Row('yr')
all_years_features = t.transform(test_rdd.map(row).toDF())
# apply linear regression model
df_results = model.transform(all_years_features).toPandas()
"""
Explanation: Next, we want to apply our trained model object model to our original training set along with 5 years of future data. Spanning this date frame, we build a test DataFrame, transform it to features, and then apply our model to make a prediction.
End of explanation
"""
plt.plot(df_results.yr,df_results.prediction, linewidth = 2, linestyle = '--',color = '#fc4f30', label = 'L2 Fit')
plt.plot(training_yrs, training_y, color = '#00a79c', label = None)
plt.xlabel('Year')
plt.ylabel('Number of attempts')
plt.legend(loc = 4)
_=plt.title('Player average 3-point attempts (per 36 minutes)')
plt.tight_layout()
plt.savefig('results/model_prediction.png')
"""
Explanation: We can then plot our results:
End of explanation
"""
# reset style for pretty shot charts
plt.style.use('default')
sns.set_style("white")
df = spark.read\
.option('header', 'true')\
.option('inferSchema', 'true')\
.csv('data/shot_charts_top_10/1000_plus_shot_charts_2011_2016.csv')
df.cache() # optimizes performance for later calls to this dataframe
print(df.count())
df.orderBy('game_date').limit(10).toPandas()[['yr','name','game_date','shot_distance','x','y','shot_made_flag']]
"""
Explanation: Shot chart data
In addition to season total data, we process and analyze NBA shot charts to view the impact the 3-point revolution has had on shot selection. The shot chart data was acquired from nbasavant.com, which sources its data from NBA.com and ESPN.
The shot chart data contains xy coordinates of field goal attempts for individual players, game date, time of shot, shot distance, a shot made flag, and other fields. We have compiled all individual seasons where a player attempted at least 1000 field goal attempts from the 2010-11 through the 2015-16 season.
As before we can read in the CSV data into a Spark DataFrame.
End of explanation
"""
player = 'Stephen Curry'
yr = '2016'
df_steph = df.filter('''name == "{player}"
and yr == {yr}
and y < 400'''.format(player = player,
yr = yr))
x = np.array([v[0] for v in df_steph.select('x').collect()])
y = np.array([v[0] for v in df_steph.select('y').collect()])
p=plot_shot_chart(x, y, gridsize = 30,
kind='hex',
label='Steph Curry\n2016')
p.savefig('results/steph_curry_2016_shotchart.png')
"""
Explanation: We can query an individual player and season and visualize their shot locations. We utilize code based on Savvas Tjortjoglou's wonderful example.
As an example, we visualize Steph Curry's 2015-2016 historic shooting season using a hexbin plot.
End of explanation
"""
def is_corner_3(xy):
'''Want to identify corner 3 point attempts'''
x,y = xy
return int((abs(x) >= 220) and (y < 92.5))
def is_normal_3(xycorner3):
'''Want to identify normal (not corner 3) point attempts'''
x,y,corner3 = xycorner3
radius = 475/2.
y_3pt = np.sqrt(np.square(radius) - np.square(x))
return int(y > max(92.5,y_3pt) and not corner3)
"""
Explanation: The shot chart data is rich in information, but it does not specify if a given shot is a 3-point attempt, not to mention if the shot is a corner 3. Not to worry! We can utilize Spark's User Defined Functions (UDF) to assign a classification to every shot.
Here we defined our shot classification functions using standard Python functions even utilizing numpy routines as well.
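As a quick, self-contained sanity check of that classification logic (coordinates are in tenths of feet, matching the shot-chart convention; the thresholds below simply mirror the functions above):

```python
import numpy as np

def is_corner_3(xy):
    x, y = xy
    return int((abs(x) >= 220) and (y < 92.5))

def is_normal_3(xycorner3):
    x, y, corner3 = xycorner3
    radius = 475 / 2.0  # 23.75 ft arc, in tenths of feet
    y_3pt = np.sqrt(np.square(radius) - np.square(x))
    return int(y > max(92.5, y_3pt) and not corner3)

assert is_corner_3((230, 50)) == 1    # deep corner attempt
assert is_corner_3((0, 100)) == 0
assert is_normal_3((0, 300, 0)) == 1  # above-the-break three
assert is_normal_3((0, 100, 0)) == 0  # midrange attempt
```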
End of explanation
"""
corner_3_udf = udf(is_corner_3, IntegerType())
normal_3_udf = udf(is_normal_3, IntegerType())
df2 = df.withColumn('corner_3', corner_3_udf(array([df.x,df.y])))
df3 = df2.withColumn('normal_3', normal_3_udf(array([df2.x,df2.y,df2.corner_3])))
df4 = df3.withColumn('is_a_3', df3.corner_3 + df3.normal_3)
df = df4
df.cache()
"""
Explanation: We then register our UDFs and apply each UDF to the entire dataset to classify each shot type:
End of explanation
"""
# make shot charts for all years
midrange_thresh = 8
in_half_court = 'y <= 400 and abs(x) <= 250'
addl_filter = 'shot_distance > {midrange_thresh}'.format(midrange_thresh = midrange_thresh)
for yr in range(2011,2016+1):
df_yr = df.filter('''{in_half_court}
and yr == {yr}
and {addl_filter}'''.format(in_half_court = in_half_court,
yr = yr,
addl_filter = addl_filter))
x = np.array([v[0] for v in df_yr.select('x').collect()])
y = np.array([v[0] for v in df_yr.select('y').collect()])
p = plot_shot_chart(x,y, gridsize = 30, kind = 'kde', label = yr)
p.fig.suptitle('Evolution of the 3 point shot', x = .19, y = 0.86, size = 20, fontweight= 'bold')
per_3 = df_yr.select(mean(df_yr.is_a_3)).take(1)[0][0]
per_midrange = 1 - per_3
draw_3pt_piechart(per_3, per_midrange)
p.savefig('results/all_years/{}.png'.format(yr))
plt.close()
"""
Explanation: Shot attempts have changed in the past 6 years
We can visualize the change in the shot selection using all of our data from the 2010-11 season up until the 2015-16 season. For visualization purposes, we exclude all shot attempts taken inside of 8 feet as we would like to focus on the midrange and 3 point shots. An accompanying trend to the increase of 3-point shots is the decrease in midrange attempts and this can be visualized as well.
End of explanation
"""
# convert to gif
!convert -delay 200 -loop 0 results/all_years/*.png results/evolution_3pt.gif
"""
Explanation: After we generate shot charts for each year, we create an animated gif using the ImageMagick convert routine:
End of explanation
"""
shot_acc = df.groupBy('shot_distance','corner_3','normal_3','is_a_3')\
.agg(count('*').alias('num_attempts'),mean(df.shot_made_flag).alias('shot_accuracy'))\
.withColumn('points_per_shot',when(col('is_a_3') == 1, col('shot_accuracy')*3)
.otherwise(col('shot_accuracy')*2)
)\
.filter('num_attempts > 5')\
.orderBy('shot_distance')\
.toPandas()
"""
Explanation: <img src='results/evolution_3pt.gif'>
Over the years, there is a notable trend towards more three pointers and fewer midrange shots.
Points per shot
Finally, we end with where we started. The motivating factor and the math behind the efficiency of the 3 point shot can be attributed to its very high points per shot. The simple math means that shooting 33% on 3-point shots is equal to shooting 50% on 2-point shots. Thus high percentage 3-pointers can be very valuable shots.
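That equivalence is just shooting accuracy times shot value, which we can sketch directly:

```python
# expected points per shot = shooting accuracy * shot value
def points_per_shot(accuracy, value):
    return accuracy * value

three_at_33 = points_per_shot(1 / 3.0, 3)  # a 33% three-point shooter
two_at_50 = points_per_shot(0.50, 2)       # a 50% two-point shooter
print(three_at_33, two_at_50)              # both ~1.0 points per shot
```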
We compute the points per shot vs. distance on our shot chart dataset:
End of explanation
"""
plt.style.use('fivethirtyeight')
def plot_acc_vs_dist(df,kwargs = {}):
plt.plot(df.shot_distance, df.points_per_shot, **kwargs)
plot_acc_vs_dist(shot_acc.query('is_a_3 == False'), {'color' : '#008fd5'})
plot_acc_vs_dist(shot_acc.query('is_a_3 == True'), {'color' : '#008fd5'})
plt.title('Shot value vs. shot distance, 2011-2016 seasons\n Players with 1000+ attempts in a season', size = 14)
plt.xlim(0,30)
plt.xlabel('Shot Distance (ft)')
plt.ylabel('Points per shot')
plt.annotate('high efficiency 2s', xy=(2., 1.15), xytext=(4.5, 1.28),
arrowprops=dict(facecolor='grey', shrink=0.05),
)
plt.annotate('high efficiency 3s', xy=(22, 1.15), xytext=(13.5, 1.15),
arrowprops=dict(facecolor='grey', shrink=0.05),
)
plt.text(22, 1.28, 'corner 3s', fontsize = 12)
plt.tight_layout()
plt.savefig('results/pps.png')
"""
Explanation: We then the plot points per shot vs. shot distance:
End of explanation
"""
from bs4 import BeautifulSoup
import requests
def parse_html_table(html_table):
'''For parsing basketball reference stats html table.
We will apply this function to every seasons html page.'''
data = []
cur_row = []
row_names = []
for ele in html_table:
stat_name = ele['data-stat']
stat_value = ele.string
new_row = (stat_name == 'player')
if new_row:
if cur_row:
data.append(cur_row)
cur_row = []
col_names = []
cur_row.append(ele['csk']) # fixes weird asterisk error
col_names.append(stat_name)
continue
cur_row.append(stat_value)
col_names.append(stat_name)
return data, col_names
# # Loop thru each year and collect data
# dfs = []
# for yr in range(1980,(2016 + 1)):
# url = 'http://www.basketball-reference.com/leagues/NBA_{yr}_totals.html'.format(yr = yr)
# r = requests.get(url)
# soup = BeautifulSoup(r.text)
# yr_data, col_names = parse_html_table(soup.findAll('td'))
# df = pd.DataFrame(yr_data, columns = col_names)
# df['yr'] = yr
# dfs.append(df)
# all_seasons = pd.concat(dfs)
# all_seasons.to_csv('data/season_totals.csv')
"""
Explanation: Among the top scorers in the league, corner 3 and other close 3-point attempts are among the most efficient shots in the league and are on par with shots taken within just a few feet of the basket. It's no wonder that accurate 3-point shooting is among the most valuable talents in the NBA today!
Appendix
Scrape season totals
Using Beautiful Soup, we scrape the season totals for every player from the 1979-1980 season up until the 2015-2016 season.
End of explanation
"""
|
biosustain/cameo-notebooks | other/co-factor-swapping.ipynb | apache-2.0 | from cameo import models
model_orig = models.bigg.iJO1366
from cameo.strain_design.heuristic.evolutionary.optimization import CofactorSwapOptimization
from cameo.strain_design.heuristic.evolutionary.objective_functions import product_yield
from cameo.strain_design.heuristic.evolutionary.objective_functions import biomass_product_coupled_yield
from cameo.util import TimeMachine
from cameo.flux_analysis.analysis import flux_variability_analysis as fva
"""
Explanation: Optimize co-factor swap
Many metabolic enzymes depend on co-factors to function. Keeping balance between co-factors is important for homeostasis, and that balance might interact unfavorably with metabolic engineering. An example of such a balance is that between the two similar co-factor pairs NAD+/NADH and NADP+/NADPH. These co-factors are not only similar; there are even enzymes that catalyze the same reaction but depend on different co-factors. Enzymes can also be engineered to change their co-factor preference.
This opens an opportunity for using co-factor swaps to optimize production of a target metabolite. Figuring out which reactions should be subjected to a co-factor swap can be done using the OptSwap algorithm. Briefly, the algorithm uses a genetic algorithm to test combinations of reactions to co-factor swap and then reports those that result in a higher theoretical maximum yield.
We have implemented a variant of OptSwap in cameo, and here we test it to reproduce the results in that paper for aerobic E. coli metabolism (the iJO1366 model loaded above).
Get the model and import modules.
End of explanation
"""
model = model_orig.copy()
"""
Explanation: We make a copy of the model for easy testing.
End of explanation
"""
for rid in ['FHL', 'CAT', 'SPODM', 'SPODMpp']:
model.reactions.get_by_id(rid).knock_out()
model.reactions.POR5.lower_bound = 0
model.reactions.EX_glc__D_e.lower_bound = -10
model.reactions.EX_o2_e.lower_bound = -10
model.reactions.BIOMASS_Ec_iJO1366_core_53p95M.lower_bound = 0.1
"""
Explanation: Make model changes as indicated in the paper.
End of explanation
"""
model.objective = model.reactions.EX_thr__L_e
(model.solve().f * 4) / (model.reactions.EX_glc__D_e.flux * 6)
"""
Explanation: In the paper they get 0.77 maximum product yield for l-threonine, which we also get
End of explanation
"""
py = product_yield(model.reactions.EX_thr__L_e, model.reactions.EX_glc__D_e)
optswap = CofactorSwapOptimization(model=model, objective_function=py)
"""
Explanation: Let's run optswap using the CofactorSwapOptimization class in cameo. We use a product-yield function to evaluate how good a solution is, but there are other possibilities like biomass_product_coupled_yield.
GAPD is suggested in the paper as a suitable reaction
End of explanation
"""
optswap.run(max_evaluations=2000, max_size=2)
"""
Explanation: We also observe GAPD among the best options even though we considered many more reactions than in the original paper
End of explanation
"""
list(optswap.model.swapped_reactions)[0:10]
optswap.model.reactions.EX_thr__L_e.model = optswap.model
optswap.model.objective = optswap.model.reactions.EX_thr__L_e
original = (optswap.model.solve().f * 4) / (-optswap.model.reactions.EX_glc__D_e.flux * 6)
with TimeMachine() as tm:
optswap.model.swap_reaction('GAPD', tm)
swapped = (optswap.model.solve().f * 4) / (-optswap.model.reactions.EX_glc__D_e.flux * 6)
print("product/substrate yield without swap: {}\nproduct/substrate yield with swap: {}".format(original, swapped))
"""
Explanation: The created optswap class has properties to check which reactions were tested and to perform swapping, let's list the first 10.
End of explanation
"""
|
alhamdubello/sc-python | 01-csv-data.ipynb | mit | # The Python requests library lets us get data straight from a URL
import requests
url = "http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GBR.csv"
response = requests.get(url)
if response.status_code != 200:
print ('Failed to get data:', response.status_code)
else:
print ('First 100 characters of the data are:')
print ( response.text[:100])
"""
Explanation: Getting Data from the web
The World Bank provides climate data via its web API:
http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/var/year/iso3.ext
var is either tas or pr; ext is usually csv; iso3 is the ISO standard 3-letter code for the country of interest (in capitals)
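The URL template above can be filled in with a small helper function (the helper name is ours):

```python
def climate_url(iso3, var='tas', ext='csv'):
    # assembles the documented pattern: .../cru/<var>/year/<iso3>.<ext>
    base = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru'
    return '{0}/{1}/year/{2}.{3}'.format(base, var, iso3, ext)

print(climate_url('GBR'))
```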
End of explanation
"""
url = "http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GTM.csv"
response = requests.get(url)
if response.status_code != 200:
print ('Failed to get data:', response.status_code)
else:
print ('First 100 characters of the data are:')
print ( response.text[:100])
url = "http://climatedataapi.worldbank.org/climateweb/rest/v1/country/annualavg/pr/1980/1999/AFG.csv"
response = requests.get(url)
if response.status_code != 200:
print ('Failed to get data:', response.status_code)
else:
print ('First 100 characters of the data are:')
print ( response.text[:100])
# Create a csv file: test01.csv
with open('test01.csv', 'w') as writer:
    writer.write('1901,12.3\n')
    writer.write('1902,45.6\n')
    writer.write('1903,78.9\n')
with open ('test01.csv', 'r') as reader:
for line in reader:
print (len (line))
with open ('test01.csv', 'r') as reader:
for line in reader:
fields = line.split(',')
print (fields)
# We need to get rid of the hidden newline \n
with open ('test01.csv', 'r') as reader:
for line in reader:
fields = line.strip().split(',')
print (fields)
"""
Explanation: Getting Status codes for Guatemala (Country Code is GTM)
Fetch rainfall for Afghanistan between 1980 and 1999
End of explanation
"""
import csv
with open ('test01.csv', 'r') as rawdata:
csvdata = csv.reader(rawdata)
for record in csvdata:
print (record)
url = "http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/pr/year/GBR.csv"
response = requests.get(url)
if response.status_code != 200:
print ('Failed to get data:', response.status_code)
else:
wrapper = csv.reader(response.text.strip().split('\n'))
for record in wrapper:
if record[0] != 'year':
year = int(record [0])
value = float(record [1])
print (year, value)
"""
Explanation: Using CSV library instead
End of explanation
"""
|
amitkaps/hackermath | Module_1e_logistic_regression.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
pop = pd.read_csv('data/cars_small.csv')
pop.head()
"""
Explanation: Logistic Regression (Classification)
So far we have been looking at regression problems, where the solutions are continuous variables. Let us look at a binary classification problem, where we need to identify the correct class [0 or 1].
End of explanation
"""
class_mapping = {'Hatchback': 0, 'Sedan': 1}
pop['types'] = pop['type'].map(class_mapping)
plt.scatter(pop.price, pop.types, c=pop.types, s = 150, alpha = 0.8 )
plt.xlabel('price')
plt.ylabel('types')
"""
Explanation: Let's say we want to classify the vehicles as 'Hatchback' or 'Sedan'
End of explanation
"""
def ols (df, xlabel, ylabel):
n = df.shape[0]
x0 = np.ones(n)
x1 = df[xlabel]
X = np.c_[x0, x1]
X = np.asmatrix(X)
y = np.transpose(np.asmatrix(df[ylabel]))
X_T = np.transpose(X)
X_pseudo = np.linalg.inv(X_T * X) * X_T
beta = X_pseudo * y
return beta
def plot_ols(df, xlabel, ylabel):
beta = ols(df, 'price', 'types')
beta_0 = beta.item(0)
beta_1 = beta.item(1)
plt.scatter(df[xlabel], df[ylabel], c=df[ylabel], s = 150, alpha = 0.8 )
plt.xlabel(xlabel)
plt.ylabel(ylabel)
y = beta_0 + beta_1 * df[xlabel]
plt.plot(df[xlabel], y, '-')
cutoff = (0.5 - beta_0)/beta_1
plt.vlines(cutoff, -0.4, 1.4)
ols(pop, 'price', 'types')
plot_ols(pop, 'price', 'types')
"""
Explanation: Why Linear Function does not work
Now we can use a linear function and try to choose a cutoff value to show where the two classes separate, let's say cutoff = 0.5
End of explanation
"""
pop1 = pop.copy()
pop1.tail()
# Lets create an outlier
pop1.loc[37,'price'] = 1500
pop1.loc[41,'price'] = 2000
plot_ols(pop1, 'price', 'types')
"""
Explanation: However, there are two problems with this approach
- The cut-off value is highly influenced by outliers
- The value of linear regression is unbounded, while our classification is bounded
End of explanation
"""
z = np.linspace(-10, 10, 100)
p = 1/(1+np.exp(-z))
plt.plot(z,p)
plt.hlines(0.5, -20,20)
plt.vlines(0, 0,1)
plt.xlabel('z')
plt.ylabel('P(z)')
"""
Explanation: Logistic Function
So we need a function which is bounded between 0 < f(X) < 1, for our classification to work. We will use a logit function for this purpose
$$ P(z) = \frac {1}{1 + e^{-z}} $$
End of explanation
"""
plt.scatter(pop['kmpl'], pop['price'], c=pop['types'], s = 150, alpha = 0.8 )
plt.xlabel('kmpl')
plt.ylabel('price')
"""
Explanation: So now we can transform our linear regression problem with this logistic function
$$ y = X\beta $$
becomes
$$ P(y) = P(X\beta) $$
$$ y = 1, when \, P(y) > 0.5 $$
which is equivalent to $$X\beta > 0 $$
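A quick numerical check of that threshold equivalence (a minimal sketch):

```python
import numpy as np

def P(z):
    return 1.0 / (1 + np.exp(-z))

# P(z) crosses 0.5 exactly at z = 0, so P(X*beta) > 0.5 iff X*beta > 0
assert P(0) == 0.5
assert P(2.0) > 0.5
assert P(-2.0) < 0.5
```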
Let's see this in the example in 'price' and 'kmpl' variables, for class $types$
$$ \beta_0 + \beta_1price + \beta_2kmpl + \epsilon > 0 $$
So we are looking for a line which will fulfill the above condition
End of explanation
"""
z = np.linspace(0.001, 0.999, 1000)
c1 = -np.log(z)
c2 = -np.log(1-z)
plt.plot(z,c1)
plt.plot(z,c2)
#plt.hlines(0.5, -10,10)
#plt.vlines(0, 0,1)
plt.xlabel('z')
plt.ylabel('Cost')
"""
Explanation: Cost Function
$$Cost (P(X\beta), y) =
\begin{cases}
-log(P(X\beta)), & \text{if $y=1$ } \\
-log(1 -P(X\beta)), & \text{if $y =0$}
\end{cases}$$
End of explanation
"""
n = pop.shape[0]
x0 = np.ones(n)
x1 = pop.kmpl
x2 = pop.price
X_actual = np.c_[x1, x2]
X_norm = (X_actual - np.mean(X_actual, axis=0)) / np.std(X_actual, axis=0)
X = np.c_[x0, X_norm]
X = np.asmatrix(X)
y = np.asmatrix(pop.types.values.reshape(-1,1))
b = np.asmatrix([[0],[0],[0]])
def P(z):
return 1.0/(1+np.exp(-z))
def cost(X,y,b,n):
C = (- y.T*np.log(P(X*b))-(1-y.T)*np.log(1-P(X*b)))/n
return C[0,0]
def gradient(X,y,b,n):
g = (2/n)*X.T*(P(X*b) - y)
return g
def gradient_descent_logistic (eta, epochs, X, y, n):
# Set Initial Values
b = np.asmatrix([[0],[0],[0]])
c = cost(X,y,b,n)
c_all = []
c_all.append(c)
# Run the calculation for those many epochs
for i in range(epochs):
g = gradient(X,y,b,n)
b = b - eta * g
c = cost(X,y,b,n)
c_all.append(c)
return c_all, b
x1_min, x1_max = -3, 3
x2_min, x2_max = -3, 3
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, (x1_max - x1_min)/100),
np.arange(x2_min, x2_max, (x2_max - x2_min)/100))
xx = np.c_[np.ones(xx1.ravel().shape[0]), xx1.ravel(), xx2.ravel()]
def plot_gradient_descent(eta, epoch, gradient_func):
es, bs = gradient_func(eta, epoch, X, y, n)
# Plot the intercept and coefficients
plt.subplot(1, 2, 1)
#plt.tight_layout()
# Plot the probabilty plot contour
Z = P(xx*bs)
Z = Z.reshape(xx1.shape)
cs = plt.contourf(xx1, xx2, Z, cmap=plt.cm.viridis, alpha = 0.5)
plt.colorbar(cs)
# Plot the intercept and coefficients
plt.scatter(X[:,1], X[:,2], c=pop.types, s = 150, alpha = 0.8 )
plt.xlabel('kmpl')
plt.ylabel('price')
# Plot the error rates
plt.subplot(1, 2, 2)
plt.plot(es)
plt.xlabel('Epochs')
plt.ylabel('Error')
plot_gradient_descent(0.05, 1000, gradient_descent_logistic)
"""
Explanation: Gradient Descent for Logistic Function
To make it easier to work with, we can write this in one line as the (negative) log-likelihood cost function:
$$ C(\beta) = \frac{1}{n} (- y^T * log(P(X\beta)) - (1- y^T) * log(1 -P(X\beta))) $$
If we differentiate this, we get our gradient, which is very similar to our linear regression one (the factor of 2 is a constant scaling that gets absorbed into the learning rate):
$$ \nabla C(\beta) = \frac {2}{n} X^T(P(X\beta)−y) $$
and our gradient descent algorithm will be
$$ \beta_{i+1} = \beta_{i} - \eta * \nabla C(\beta)$$
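As a sanity check, we can verify the gradient formula numerically with finite differences. This is a self-contained sketch on small random data; it uses the plain 1/n cross-entropy and its exact 1/n gradient (the extra factor of 2 in the notebook's code is just a constant scaling absorbed into $\eta$):

```python
import numpy as np

def P(z):
    return 1.0 / (1 + np.exp(-z))

def cost(X, y, b):
    # mean cross-entropy (1/n version)
    n = X.shape[0]
    p = P(X.dot(b))
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).sum() / n)

def grad(X, y, b):
    # exact gradient of the 1/n cross-entropy
    return X.T.dot(P(X.dot(b)) - y) / X.shape[0]

rng = np.random.RandomState(0)
X = np.c_[np.ones(20), rng.randn(20, 2)]
y = (rng.rand(20, 1) > 0.5).astype(float)
b = 0.1 * rng.randn(3, 1)

# compare each analytic gradient component to a central difference
eps = 1e-6
g = grad(X, y, b)
errs = []
for j in range(b.shape[0]):
    e = np.zeros_like(b)
    e[j] = eps
    numeric = (cost(X, y, b + e) - cost(X, y, b - e)) / (2 * eps)
    errs.append(abs(numeric - g[j, 0]))
print(max(errs))  # should be tiny
```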
End of explanation
"""
|
johntanz/ROP | Old Code/Masimo160127.ipynb | gpl-2.0 | #the usual beginning
import pandas as pd
import numpy as np
from pandas import Series, DataFrame
from datetime import datetime, timedelta
from pandas import concat
#define any string with 'C' as NaN
def readD(val):
if 'C' in val:
return np.nan
return val
"""
Explanation: Masimo Analysis
For Pulse Ox. Analysis, make sure the data file is the right .csv format:
a) Headings on Row 1
b) Open the csv file through Notepad or TextEdit and delete extra
row commas (non-printable characters)
c) There are always Dates in Column A and Time in Column B.
d) There might be a row that says "Time Gap Present". Delete this row from Notepad
or TextEdit
End of explanation
"""
df = pd.read_csv('/Users/John/Dropbox/LLU/ROP/Pulse Ox/ROP007PO.csv',
parse_dates={'timestamp': ['Date','Time']},
index_col='timestamp',
usecols=['Date', 'Time', 'SpO2', 'PR', 'PI', 'Exceptions'],
na_values=['0'],
converters={'Exceptions': readD}
)
#parse_dates tells the read_csv function to combine the date and time column
#into one timestamp column and parse it as a timestamp.
# pandas is smart enough to know how to parse a date in various formats
#index_col sets the timestamp column to be the index.
#usecols tells the read_csv function to select only the subset of the columns.
#na_values is used to turn 0 into NaN
#converters: readD is the dict that means any string with 'C' with be NaN (for PI)
#dfclean = df[27:33][df[27:33].loc[:, ['SpO2', 'PR', 'PI', 'Exceptions']].apply(pd.notnull).all(1)]
#clean the dataframe to get rid of rows that have NaN for PI purposes
df_clean = df[df.loc[:, ['PI', 'Exceptions']].apply(pd.notnull).all(1)]
"""Pulse ox date/time is 1 mins and 32 seconds faster than phone. Have to correct for it."""
TC = timedelta(minutes=1, seconds=32)
"""
Explanation: Import File into Python
Change File Name!
End of explanation
"""
df_first = df.first_valid_index() #get the first number from index
Y = datetime(2015, 6, 30, 19, 2, 48)
#Y = pd.to_datetime(df_first) #convert index to datetime
# Y = TIME DATA COLLECTION BEGAN / First data point on CSV
# SYNTAX:
# datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])
W = datetime(2015, 7, 1, 8, 30)+TC
# W = first eye drop starts
X = datetime(2015, 7, 1, 9, 49)+TC
# X = ROP Exam Started
Z = datetime(2015, 7, 1, 9, 53)+TC
# Z = ROP Exam Ended
df_last = df.last_valid_index() #get the last number from index
Q = datetime(2015, 7, 1, 13, 51, 19)
#Q = pd.to_datetime(df_last)
# Q = TIME DATA COLLECTION ENDED / Last Data point on CSV
df_clean[W:X]
"""
Explanation: Set Date and Time of ROP Exam and Eye Drops
End of explanation
"""
avg0PI = df_clean.PI[Y:W].mean()
avg0O2 = df.SpO2[Y:W].mean()
avg0PR = df.PR[Y:W].mean()
print 'Baseline Averages\n', 'PI :\t',avg0PI, '\nSpO2 :\t',avg0O2,'\nPR :\t',avg0PR,
#df.std() for standard deviation
str(avg0PI)
"""
Explanation: Baseline Averages
End of explanation
"""
# Every 5 min Average from start of eye drops to start of exam
def perdeltadrop(start, end, delta):
rdrop = []
curr = start
while curr < end:
rdrop.append(curr)
curr += delta
return rdrop
dfdropPI = df_clean.PI[W:W+timedelta(hours=1)]
dfdropO2 = df.SpO2[W:W+timedelta(hours=1)]
dfdropPR = df.PR[W:W+timedelta(hours=1)]
windrop = timedelta(minutes=5)#make the range
rdrop = perdeltadrop(W, W+timedelta(minutes=15), windrop)
avgdropPI = Series(index = rdrop, name = 'PI DurEyeD')
avgdropO2 = Series(index = rdrop, name = 'SpO2 DurEyeD')
avgdropPR = Series(index = rdrop, name = 'PR DurEyeD')
for i in rdrop:
avgdropPI[i] = dfdropPI[i:(i+windrop)].mean()
avgdropO2[i] = dfdropO2[i:(i+windrop)].mean()
avgdropPR[i] = dfdropPR[i:(i+windrop)].mean()
resultdrops = concat([avgdropPI, avgdropO2, avgdropPR], axis=1, join='inner')
print resultdrops
"""
Explanation: Average q 5 Min for 1 hour after 1st Eye Drops
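The same five-minute window averages could also be computed with pandas' resample; here is a sketch on toy data with a DatetimeIndex (the timestamps and values below are hypothetical):

```python
import pandas as pd

# readings every 150 seconds starting at 08:30 (made-up values)
idx = pd.date_range('2015-07-01 08:30', periods=6, freq='150s')
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=idx)
five_min = s.resample('5min').mean()
print(five_min)  # 08:30 -> 1.5, 08:35 -> 3.5, 08:40 -> 5.5
```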
End of explanation
"""
#AVERAGE DURING ROP EXAM FOR FIRST FOUR MINUTES
def perdelta1(start, end, delta):
r1 = []
curr = start
while curr < end:
r1.append(curr)
curr += delta
return r1
df1PI = df_clean.PI[X:X+timedelta(minutes=4)]
df1O2 = df.SpO2[X:X+timedelta(minutes=4)]
df1PR = df.PR[X:X+timedelta(minutes=4)]
win1 = timedelta(seconds=10) #any unit of time & make the range
r1 = perdelta1(X, X+timedelta(minutes=4), win1)
#make the series to store
avg1PI = Series(index = r1, name = 'PI DurEx')
avg1O2 = Series(index = r1, name = 'SpO2 DurEx')
avg1PR = Series(index = r1, name = 'PR DurEX')
#average!
for i1 in r1:
avg1PI[i1] = df1PI[i1:(i1+win1)].mean()
avg1O2[i1] = df1O2[i1:(i1+win1)].mean()
avg1PR[i1] = df1PR[i1:(i1+win1)].mean()
result1 = concat([avg1PI, avg1O2, avg1PR], axis=1, join='inner')
print(result1)
"""
Explanation: Average Every 10 Sec During ROP Exam for first 4 minutes
End of explanation
"""
#AVERAGE EVERY 5 MINUTES ONE HOUR AFTER ROP EXAM
def perdelta2(start, end, delta):
r2 = []
curr = start
while curr < end:
r2.append(curr)
curr += delta
return r2
# datetime(year, month, day, hour, etc.)
df2PI = df_clean.PI[Z:(Z+timedelta(hours=1))]
df2O2 = df.SpO2[Z:(Z+timedelta(hours=1))]
df2PR = df.PR[Z:(Z+timedelta(hours=1))]
win2 = timedelta(minutes=5) #any unit of time, make the range
r2 = perdelta2(Z, (Z+timedelta(hours=1)), win2) #define the average using function
#make the series to store
avg2PI = Series(index = r2, name = 'PI q5MinHr1')
avg2O2 = Series(index = r2, name = 'O2 q5MinHr1')
avg2PR = Series(index = r2, name = 'PR q5MinHr1')
#average!
for i2 in r2:
avg2PI[i2] = df2PI[i2:(i2+win2)].mean()
avg2O2[i2] = df2O2[i2:(i2+win2)].mean()
avg2PR[i2] = df2PR[i2:(i2+win2)].mean()
result2 = concat([avg2PI, avg2O2, avg2PR], axis=1, join='inner')
print(result2)
"""
Explanation: Average Every 5 Mins Hour 1-2 After ROP Exam
End of explanation
"""
#AVERAGE EVERY 15 MINUTES TWO HOURS AFTER ROP EXAM
def perdelta3(start, end, delta):
r3 = []
curr = start
while curr < end:
r3.append(curr)
curr += delta
return r3
# datetime(year, month, day, hour, etc.)
df3PI = df_clean.PI[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]
df3O2 = df.SpO2[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]
df3PR = df.PR[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]
win3 = timedelta(minutes=15) #any unit of time, make the range
r3 = perdelta3((Z+timedelta(hours=1)), (Z+timedelta(hours=2)), win3)
#make the series to store
avg3PI = Series(index = r3, name = 'PI q15MinHr2')
avg3O2 = Series(index = r3, name = 'O2 q15MinHr2')
avg3PR = Series(index = r3, name = 'PR q15MinHr2')
#average!
for i3 in r3:
avg3PI[i3] = df3PI[i3:(i3+win3)].mean()
avg3O2[i3] = df3O2[i3:(i3+win3)].mean()
avg3PR[i3] = df3PR[i3:(i3+win3)].mean()
result3 = concat([avg3PI, avg3O2, avg3PR], axis=1, join='inner')
print(result3)
"""
Explanation: Average Every 15 Mins Hour 2-3 After ROP Exam
End of explanation
"""
#AVERAGE EVERY 30 MINUTES THREE HOURS AFTER ROP EXAM
def perdelta4(start, end, delta):
r4 = []
curr = start
while curr < end:
r4.append(curr)
curr += delta
return r4
# datetime(year, month, day, hour, etc.)
df4PI = df_clean.PI[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]
df4O2 = df.SpO2[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]
df4PR = df.PR[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]
win4 = timedelta(minutes=30) #any unit of time, make the range
r4 = perdelta4((Z+timedelta(hours=2)), (Z+timedelta(hours=3)), win4)
#make the series to store
avg4PI = Series(index = r4, name = 'PI q30MinHr3')
avg4O2 = Series(index = r4, name = 'O2 q30MinHr3')
avg4PR = Series(index = r4, name = 'PR q30MinHr3')
#average!
for i4 in r4:
avg4PI[i4] = df4PI[i4:(i4+win4)].mean()
avg4O2[i4] = df4O2[i4:(i4+win4)].mean()
avg4PR[i4] = df4PR[i4:(i4+win4)].mean()
result4 = concat([avg4PI, avg4O2, avg4PR], axis=1, join='inner')
print(result4)
"""
Explanation: Average Every 30 Mins Hour 3-4 After ROP Exam
End of explanation
"""
#AVERAGE EVERY 60 MINUTES 4-24 HOURS AFTER ROP EXAM
def perdelta5(start, end, delta):
r5 = []
curr = start
while curr < end:
r5.append(curr)
curr += delta
return r5
# datetime(year, month, day, hour, etc.)
df5PI = df_clean.PI[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]
df5O2 = df.SpO2[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]
df5PR = df.PR[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]
win5 = timedelta(minutes=60) #any unit of time, make the range
r5 = perdelta5((Z+timedelta(hours=3)), (Z+timedelta(hours=24)), win5)
#make the series to store
avg5PI = Series(index = r5, name = 'PI q60MinHr4+')
avg5O2 = Series(index = r5, name = 'O2 q60MinHr4+')
avg5PR = Series(index = r5, name = 'PR q60MinHr4+')
#average!
for i5 in r5:
avg5PI[i5] = df5PI[i5:(i5+win5)].mean()
avg5O2[i5] = df5O2[i5:(i5+win5)].mean()
avg5PR[i5] = df5PR[i5:(i5+win5)].mean()
result5 = concat([avg5PI, avg5O2, avg5PR], axis=1, join='inner')
print(result5)
"""
Explanation: Average Every Hour 4-24 Hours Post ROP Exam
End of explanation
"""
df_O2_pre = df[Y:W]
#Find count of these ranges
below = 0 # v <=80
middle = 0 #v >= 81 and v<=84
above = 0 #v >=85 and v<=89
ls = []
b_dict = {}
m_dict = {}
a_dict = {}
for i, v in df_O2_pre['SpO2'].items():
if v <= 80: #below block
if not ls:
ls.append(v)
else:
if ls[0] >= 81: #if the range before was not below 80
if len(ls) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2
if ls[0] <= 84: #was it in the middle range?
m_dict[middle] = ls
middle += 1
ls = [v]
elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?
a_dict[above] = ls
above += 1
ls = [v]
else: #old list wasn't long enough to count
ls = [v]
else: #if in the same range
ls.append(v)
elif v >= 81 and v<= 84: #middle block
if not ls:
ls.append(v)
else:
if ls[0] <= 80 or (ls[0]>=85 and ls[0]<= 89): #if not in the middle range
if len(ls) >= 5: #if range was greater than 10 seconds
if ls[0] <= 80: #was it in the below range?
b_dict[below] = ls
below += 1
ls = [v]
elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?
a_dict[above] = ls
above += 1
ls = [v]
else: #old list wasn't long enough to count
ls = [v]
else:
ls.append(v)
elif v >= 85 and v <=89: #above block
if not ls:
ls.append(v)
else:
if ls[0] <=84 : #if not in the above range
                if len(ls) >= 5: #if range was greater than 10 seconds
if ls[0] <= 80: #was it in the below range?
b_dict[below] = ls
below += 1
ls = [v]
elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?
m_dict[middle] = ls
middle += 1
ls = [v]
else: #old list wasn't long enough to count
ls = [v]
else:
ls.append(v)
else: #v>90 or something else weird. start the list over
ls = []
#final list check
if len(ls) >= 5:
if ls[0] <= 80: #was it in the below range?
b_dict[below] = ls
below += 1
ls = [v]
elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?
m_dict[middle] = ls
middle += 1
ls = [v]
elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?
a_dict[above] = ls
above += 1
b_len = 0.0
for key, val in b_dict.items():
    b_len += len(val)
m_len = 0.0
for key, val in m_dict.items():
    m_len += len(val)
a_len = 0.0
for key, val in a_dict.items():
    a_len += len(val)
df_O2_post = df[Z:Q]
#Find count of these ranges
below2 = 0 # v <=80
middle2= 0 #v >= 81 and v<=84
above2 = 0 #v >=85 and v<=89
ls2 = []
b_dict2 = {}
m_dict2 = {}
a_dict2 = {}
for i2, v2 in df_O2_post['SpO2'].items():
if v2 <= 80: #below block
if not ls2:
ls2.append(v2)
else:
if ls2[0] >= 81: #if the range before was not below 80
if len(ls2) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2
if ls2[0] <= 84: #was it in the middle range?
m_dict2[middle2] = ls2
middle2 += 1
ls2 = [v2]
elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?
a_dict2[above2] = ls2
above2 += 1
ls2 = [v2]
else: #old list wasn't long enough to count
ls2 = [v2]
else: #if in the same range
ls2.append(v2)
elif v2 >= 81 and v2<= 84: #middle block
if not ls2:
ls2.append(v2)
else:
if ls2[0] <= 80 or (ls2[0]>=85 and ls2[0]<= 89): #if not in the middle range
if len(ls2) >= 5: #if range was greater than 10 seconds
if ls2[0] <= 80: #was it in the below range?
b_dict2[below2] = ls2
below2 += 1
ls2 = [v2]
elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?
a_dict2[above2] = ls2
above2 += 1
ls2 = [v2]
else: #old list wasn't long enough to count
ls2 = [v2]
else:
ls2.append(v2)
elif v2 >= 85 and v2 <=89: #above block
if not ls2:
ls2.append(v2)
else:
if ls2[0] <=84 : #if not in the above range
                if len(ls2) >= 5: #if range was greater than 10 seconds
if ls2[0] <= 80: #was it in the below range?
b_dict2[below2] = ls2
below2 += 1
ls2 = [v2]
elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?
m_dict2[middle2] = ls2
middle2 += 1
ls2 = [v2]
else: #old list wasn't long enough to count
ls2 = [v2]
else:
ls2.append(v2)
else: #v2>90 or something else weird. start the list over
ls2 = []
#final list check
if len(ls2) >= 5:
if ls2[0] <= 80: #was it in the below range?
b_dict2[below2] = ls2
below2 += 1
ls2= [v2]
elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?
m_dict2[middle2] = ls2
middle2 += 1
ls2 = [v2]
elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?
a_dict2[above2] = ls2
above2 += 1
b_len2 = 0.0
for key, val2 in b_dict2.items():
    b_len2 += len(val2)
m_len2 = 0.0
for key, val2 in m_dict2.items():
    m_len2 += len(val2)
a_len2 = 0.0
for key, val2 in a_dict2.items():
    a_len2 += len(val2)
#print results from count and min
print "Desat Counts for X mins\n"
print "Pre Mild Desat (85-89) Count: %s\t" %above, "for %s min" %((a_len*2)/60.)
print "Pre Mod Desat (81-84) Count: %s\t" %middle, "for %s min" %((m_len*2)/60.)
print "Pre Sev Desat (=< 80) Count: %s\t" %below, "for %s min\n" %((b_len*2)/60.)
print "Post Mild Desat (85-89) Count: %s\t" %above2, "for %s min" %((a_len2*2)/60.)
print "Post Mod Desat (81-84) Count: %s\t" %middle2, "for %s min" %((m_len2*2)/60.)
print "Post Sev Desat (=< 80) Count: %s\t" %below2, "for %s min\n" %((b_len2*2)/60.)
print "Data Recording Time!"
print '*' * 10
print "Pre-Exam Data Recording Length\t", X - Y # start of exam - first data point
print "Post-Exam Data Recording Length\t", Q - Z #last data point - end of exam
print "Total Data Recording Length\t", Q - Y #last data point - first data point
Pre = ['Pre',(X-Y)]
Post = ['Post',(Q-Z)]
Total = ['Total',(Q-Y)]
RTL = [Pre, Post, Total]
PreMild = ['Pre Mild Desats \t',(above), 'for', (a_len*2)/60., 'mins']
PreMod = ['Pre Mod Desats \t',(middle), 'for', (m_len*2)/60., 'mins']
PreSev = ['Pre Sev Desats \t',(below), 'for', (b_len*2)/60., 'mins']
PreDesats = [PreMild, PreMod, PreSev]
PostMild = ['Post Mild Desats \t',(above2), 'for', (a_len2*2)/60., 'mins']
PostMod = ['Post Mod Desats \t',(middle2), 'for', (m_len2*2)/60., 'mins']
PostSev = ['Post Sev Desats \t',(below2), 'for', (b_len2*2)/60., 'mins']
PostDesats = [PostMild, PostMod, PostSev]
#the lists above collect recording-time lengths and desaturation summaries
#Did the counting sort the values correctly? Remove the triple quotes to run these checks.
'''
print("Severe check")
for key, val in b_dict.items():
    print(all(i <= 80 for i in val))
print("Moderate check")
for key, val in m_dict.items():
    print(all(i >= 81 and i <= 84 for i in val))
print("Mild check")
for key, val in a_dict.items():
    print(all(i >= 85 and i <= 89 for i in val))
'''
print(Q)
print(Z)
"""
Explanation: Mild, Moderate, and Severe Desaturation Events
End of explanation
"""
leest = [(avg0PI), 'PI Start']
print(leest[:1])
import csv
class excel_tab(csv.excel):
delimiter = '\t'
csv.register_dialect("excel_tab", excel_tab)
with open('test.csv', 'w') as f: #CHANGE CSV FILE NAME, saves in same directory
writer = csv.writer(f, dialect=excel_tab)
    #writer.writerow(['PI, O2, PR'])  # a comma-separated string ends up split into columns
    #TODO: build proper rows instead of writing one stringified list
writer.writerow([leest])
"""
Explanation: Export to CSV
End of explanation
"""
phoebe-project/phoebe2-docs | 2.3/examples/extinction_BK_binary.ipynb | gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Extinction: B-K Binary
In this example, we'll reproduce Figures 1 and 2 in the extinction release paper (Jones et al. 2020).
"Let us begin with a rather extreme case, a synthetic binary comprised of a hot, B-type main sequence star(M=6.5 Msol,Teff=17000 K, and R=4.2 Rsol) anda cool K-type giant (M=1.8 Msol,Teff=4000 K, and R=39.5 Rsol)vin a 1000 day orbit -- a system where, while the temperature difference is large, the luminosities are similar." (Jones et al. 2020)
<img src="jones+20_fig1.png" alt="Figure 1" width="800px"/>
<img src="jones+20_fig2.png" alt="Figure 2" width="400px"/>
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
from matplotlib import gridspec
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.set_value('period', component='binary', value=1000.0*u.d)
b.set_value('teff', component='primary', value=17000*u.K)
b.set_value('teff', component='secondary', value=4000*u.K)
b.set_value('requiv', component='primary', value=4.22173036*u.solRad)
b.set_value('requiv', component='secondary', value=40.732435*u.solRad)
b.flip_constraint('mass@primary', solve_for='sma@binary')
b.set_value('mass', component='primary', value=6.5*u.solMass)
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value('mass', component='secondary', value=1.9145*u.solMass)
"""
Explanation: First we'll define the system parameters
End of explanation
"""
times = phoebe.linspace(-20, 20, 101)
b.add_dataset('lc', times=times, dataset='B', passband="Johnson:B")
b.add_dataset('lc', times=times, dataset='R', passband="Cousins:R")
b.add_dataset('lc', times=times, dataset='KEP', passband="Kepler:mean")
"""
Explanation: And then create three light curve datasets at the same times, but in different passbands
End of explanation
"""
b.set_value_all('atm', 'ck2004')
b.set_value_all('gravb_bol', 0.0)
b.set_value_all('ld_mode_bol', 'manual')
b.set_value_all('ld_func_bol', 'linear')
b.set_value_all('ld_coeffs_bol', [0.0])
"""
Explanation: Now we'll set some atmosphere and limb-darkening options
End of explanation
"""
b.flip_constraint('ebv', solve_for='Av')
"""
Explanation: And flip the extinction constraint so we can provide E(B-V).
End of explanation
"""
b.set_value('ebv', 0.0)
b.run_compute(distortion_method='rotstar', irrad_method='none', model='noext')
"""
Explanation: For comparison, we'll run a model without extinction
End of explanation
"""
b.set_value('ebv', 1.0)
b.run_compute(distortion_method='rotstar', irrad_method='none', model='ext')
"""
Explanation: and then another model with extinction
End of explanation
"""
Bextmags=-2.5*np.log10(b['value@fluxes@B@ext@model'])
Bnoextmags=-2.5*np.log10(b['value@fluxes@B@noext@model'])
Bextmags_norm=Bextmags-Bextmags.min()+1
Bnoextmags_norm=Bnoextmags-Bnoextmags.min()+1
Bresid=Bextmags_norm-Bnoextmags_norm
Rextmags=-2.5*np.log10(b['value@fluxes@R@ext@model'])
Rnoextmags=-2.5*np.log10(b['value@fluxes@R@noext@model'])
Rextmags_norm=Rextmags-Rextmags.min()+1
Rnoextmags_norm=Rnoextmags-Rnoextmags.min()+1
Rresid=Rextmags_norm-Rnoextmags_norm
fig=plt.figure(figsize=(12,6))
gs=gridspec.GridSpec(2,2,height_ratios=[4,1],width_ratios=[1,1])
ax=plt.subplot(gs[0,0])
ax.plot(b['value@times@B@noext@model']/1000,Bnoextmags_norm,color='k',linestyle="--")
ax.plot(b['value@times@B@ext@model']/1000,Bextmags_norm,color='k',linestyle="-")
ax.set_ylabel('Magnitude')
ax.set_xticklabels([])
ax.set_xlim([-0.02,0.02])
ax.set_ylim([3.5,0.8])
ax.set_title('(a) Johnson B')
ax2=plt.subplot(gs[0,1])
ax2.plot(b['value@times@R@noext@model']/1000,Rnoextmags_norm,color='k',linestyle="--")
ax2.plot(b['value@times@R@ext@model']/1000,Rextmags_norm,color='k',linestyle="-")
ax2.set_ylabel('Magnitude')
ax2.set_xticklabels([])
ax2.set_xlim([-0.02,0.02])
ax2.set_ylim([3.5,0.8])
ax2.set_title('(b) Cousins Rc')
ax_1=plt.subplot(gs[1,0])
ax_1.plot(b['value@times@B@noext@model']/1000,Bresid,color='k',linestyle='-')
ax_1.set_ylabel(r'$\Delta m$')
ax_1.set_xlabel('Phase')
ax_1.set_xlim([-0.02,0.02])
ax_1.set_ylim([0.05,-0.3])
ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax2_1=plt.subplot(gs[1,1])
ax2_1.plot(b['value@times@R@noext@model']/1000,Rresid,color='k',linestyle='-')
ax2_1.set_ylabel(r'$\Delta m$')
ax2_1.set_xlabel('Phase')
ax2_1.set_xlim([-0.02,0.02])
ax2_1.set_ylim([0.05,-0.3])
ax2_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
plt.tight_layout()
fig.canvas.draw()
KEPextmags=-2.5*np.log10(b['value@fluxes@KEP@ext@model'])
KEPnoextmags=-2.5*np.log10(b['value@fluxes@KEP@noext@model'])
KEPextmags_norm=KEPextmags-KEPextmags.min()+1
KEPnoextmags_norm=KEPnoextmags-KEPnoextmags.min()+1
KEPresid=KEPextmags_norm-KEPnoextmags_norm
fig=plt.figure(figsize=(6,6))
gs=gridspec.GridSpec(2,1,height_ratios=[4,1])
ax=plt.subplot(gs[0])
ax.plot(b['value@times@KEP@noext@model']/1000,KEPnoextmags_norm,color='k',linestyle="--")
ax.plot(b['value@times@KEP@ext@model']/1000,KEPextmags_norm,color='k',linestyle="-")
ax.set_ylabel('Magnitude')
ax.set_xticklabels([])
ax.set_xlim([-0.02,0.02])
ax.set_ylim([3.5,0.8])
ax.set_title('Kepler K')
ax_1=plt.subplot(gs[1])
ax_1.plot(b['value@times@KEP@noext@model']/1000,KEPresid,color='k',linestyle='-')
ax_1.set_ylabel(r'$\Delta m$')
ax_1.set_xlabel('Phase')
ax_1.set_xlim([-0.02,0.02])
ax_1.set_ylim([0.05,-0.3])
ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
plt.tight_layout()
fig.canvas.draw()
"""
Explanation: Lastly, we'll convert the model fluxes into magnitudes and format the figures.
End of explanation
"""
Pittsburgh-NEH-Institute/Institute-Materials-2017 | schedule/week_2/collation/tokenization_normalization_collation.ipynb | gpl-3.0
from collatex import *
collation = Collation()
collation.add_plain_witness( "A", "The quick brown fox jumped over the lazy dog.")
collation.add_plain_witness( "B", "The brown fox jumped over the dog." )
collation.add_plain_witness( "C", "The bad fox jumped over the lazy dog." )
table = collate(collation)
print(table)
"""
Explanation: Adding tokenization and normalization to collation
Starting point
End of explanation
"""
collation = Collation()
A_content = "The quick brown fox jumped over the lazy dog."
B_content = "The brown fox jumped over the dog."
C_content = "The bad fox jumped over the lazy dog."
collation.add_plain_witness( "A", A_content )
collation.add_plain_witness( "B", B_content )
collation.add_plain_witness( "C", C_content )
table = collate(collation)
print(table)
"""
Explanation: Separate the text of the witness from its inclusion in the collation.
End of explanation
"""
import re
def tokenize(input):
    return [create_token(token) for token in re.findall(r'\S+\s*', input)]
def create_token(input):
return {"t": input}
collation = Collation()
A_content = "The quick brown fox jumped over the lazy dog."
B_content = "The brown fox jumped over the dog."
C_content = "The bad fox jumped over the lazy dog."
witness_list = []
witness_list.append({"id": "A", "tokens": tokenize(A_content)})
print(witness_list)
print("\n")
json_input = {"witnesses": witness_list}
print(json_input)
"""
Explanation: Use functions to tokenize the witness text. Start with just one witness, and verify the result by outputting JSON.
End of explanation
"""
import re
def normalize(input):
return input.lower()
def tokenize(input):
    return [create_token(token) for token in re.findall(r'\S+\s*', input)]
def create_token(input):
return {"t": input, "n": normalize(input)}
collation = Collation()
A_content = "The quick brown fox jumped over the lazy dog."
B_content = "The brown fox jumped over the dog."
C_content = "The bad fox jumped over the lazy dog."
witness_list = []
witness_list.append({"id": "A", "tokens": tokenize(A_content)})
witness_list.append({"id": "B", "tokens": tokenize(B_content)})
witness_list.append({"id": "C", "tokens": tokenize(C_content)})
json_input = {"witnesses": witness_list}
print(json_input)
table = collate(json_input)
print(table)
"""
Explanation: Add simple normalization. This won’t affect the collation output, but we can verify that it’s working.
End of explanation
"""
import re
def normalize(input):
return input.lower()
def tokenize(input):
    return [create_token(token) for token in re.findall(r'\S+\s*', input)]
def create_token(input):
return {"t": input, "n": normalize(input)}
collation = Collation()
A_content = "Look, a gray koala!"
B_content = "Look, a big grey koala!"
C_content = "Look, a big wombat!"
witness_list = []
witness_list.append({"id": "A", "tokens": tokenize(A_content)})
witness_list.append({"id": "B", "tokens": tokenize(B_content)})
witness_list.append({"id": "C", "tokens": tokenize(C_content)})
json_input = {"witnesses": witness_list}
# print(json_input)
table = collate(json_input)
print(table)
"""
Explanation: Change the text to create a more complex example
End of explanation
"""
import re
import string
def normalize(input):
# "string.punctuation" returns string of all punctuation marks Python knows
# the [] are a regex character class
# this subs all punctuation for an empty space
input = re.sub('[' + string.punctuation + ']','',input)
animals = ['koala', 'wombat']
# animals is a list of all animals
if input in animals:
return 'ANIMAL'
else:
return input.lower()
def tokenize(input):
    return [create_token(token) for token in re.findall(r'\S+\s*', input)]
def create_token(input):
return {"t": input, "n": normalize(input)}
collation = Collation()
A_content = "Look, a gray koala!"
B_content = "Look, a big grey koala!"
C_content = "Look, a big wombat!"
witness_list = []
witness_list.append({"id": "A", "tokens": tokenize(A_content)})
witness_list.append({"id": "B", "tokens": tokenize(B_content)})
witness_list.append({"id": "C", "tokens": tokenize(C_content)})
json_input = {"witnesses": witness_list}
print(json_input)
table = collate(json_input)
print(table)
"""
Explanation: Enhance normalization to recognize that all animals are alike. (This introduces possible complications, which can be addressed through further enhancements.)
End of explanation
"""
import re
import string
def normalize(input):
input = re.sub('[' + string.punctuation + ']','',input)
animals = ['koala', 'wombat']
if input in animals:
return 'ANIMAL'
else:
return input.lower()
def tokenize(input):
    return [create_token(token) for token in re.findall(r'\S+\s*', input)]
def create_token(input):
return {"t": input, "n": normalize(input)}
collation = Collation()
A_content = "Look, a gray koala!"
B_content = "Look, a big grey koala!"
C_content = "Look, a big wombat!"
witness_list = []
witness_list.append({"id": "A", "tokens": tokenize(A_content)})
witness_list.append({"id": "B", "tokens": tokenize(B_content)})
witness_list.append({"id": "C", "tokens": tokenize(C_content)})
json_input = {"witnesses": witness_list}
# print(json_input)
# CX's near_matching looks for the closest match
table = collate(json_input, near_match=True, segmentation=False)
print(table)
"""
Explanation: The animals are now aligned, but the colors aren’t. We can address that through matching:
End of explanation
"""
seewhydee/ntuphys_nb | jupyter/jupyter_tutorial/jupyter_tutorial_02.ipynb | gpl-3.0
%matplotlib inline
from scipy import *
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
## Definition of the plot_cos function, our "callback function".
def plot_cos(phi):
## Plot parameters
xmin, xmax, nx = 0.0, 10.0, 50
ymin, ymax = -1.2, 1.2
## Plot the figure
x = linspace(xmin, xmax, nx)
y = cos(x + phi)
plt.figure(figsize=(8,3))
plt.plot(x, y, linewidth=2)
## Set up the figure axes, etc.
plt.title("y = cos(x + phi)")
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.xlabel('x')
plt.ylabel('y')
## Generate our user interface.
interact(plot_cos, phi=FloatSlider(min=-3.2, max=3.2, step=0.2, value=0.0));
"""
Explanation: Jupyter Tutorial (part 2)
In this part of the Jupyter tutorial, we will show how to use Jupyter code cells to implement interactive and/or animated figures. Such figures allow Jupyter notebooks to go beyond traditional "static" course notes. For instance, a student can use sliders to change the parameters of a plotted curve, and the curve will be automatically updated to display the effects of the change.
The material we will cover is as follows:
Creating interactive plots using the FloatSlider widget.
Other widget types (integer sliders, toggle buttons, etc.).
Creating 3D plots.
Creating animated plots.
Interactive plots with sliders
The following code cell contains a simple example of an interactive plot. The figure shows the graph of $y = \cos(x + \phi)$. There is a slider, which you can drag to change the value of $\phi$. The graph is updated automatically.
As before, you need to run the code cell to enable the interactive graph; select it and type Ctrl-Enter, or choose the Cell &rightarrow; Run Cells menu item, or the "run" toolbar button.
End of explanation
"""
%matplotlib inline
from scipy import *
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
from IPython.display import display
def interact_cos():
## Plot parameters
xmin, xmax, nx = 0.0, 10.0, 50
ymin, ymax = -1.2, 1.2
pmin, pmax, pstep, pinit = -3.2, 3.2, 0.2, 0.0
## Set up the plot data
x = linspace(xmin, xmax, nx)
fig = plt.figure(figsize=(8,3))
line, = plt.plot([], [], linewidth=2) # Initialize curve to empty data.
## Set up the figure axes, etc.
plt.title("y = cos(x + phi)")
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.xlabel('x')
plt.ylabel('y')
plt.close() # Don't show the figure yet.
## Callback function
def plot_cos(phi):
y = cos(x + phi)
line.set_data(x, y)
display(fig)
## Generate the user interface.
interact(plot_cos, phi=FloatSlider(min=pmin, max=pmax, step=pstep, value=pinit))
interact_cos();
"""
Explanation: Let's go over the key steps in the above program:
First, we set up inline plotting, and import the modules required by the rest of the program, as discussed in part 1 of the tutorial. This time, in addition to the scipy and matplotlib.pyplot modules, we also import the ipywidgets module, which implements interactive user interfaces ("widgets") for Jupyter. Specifically, we import interact and FloatSlider, which we'll need later.
The rest of the program consists of two pieces: a callback function for the user interface, and a generator for the interface. The callback function, in this case named plot_cos, does the actual work of generating the plot once the user has provided the necessary input(s) (i.e., the value of $\phi$ to use). The interface generator specifies how the user's input is obtained (i.e., using an interactive slider), and how that data is passed to the callback function.
The plot_cos function, our callback function, takes one input named phi, which specifies the phase shift in $y = \cos(x + \phi)$. Using this, it creates a Matplotlib plot of $y$ versus $x$.
The interface generator consists of a call to the function named interact, which was imported from the ipywidgets module. The first input to interact specifies what callback function to use—in this case, we specify the function plot_cos, which we've just defined. After that, the remaining inputs to the interact function should specify how the callback function's input(s) should be obtained.
In this case, there is just one input, which is used to specify how to obtain phi (the sole input to plot_cos). In order to access phi using an interactive slider, we invoke the FloatSlider class, which was also imported from the ipywidgets module. The inputs to FloatSlider are fairly self-explanatory: they specify the minimum and maximum values of the slider, the numerical step, and the initial (default) value.
Here is a slightly different way to write the program:
End of explanation
"""
%matplotlib inline
from scipy import *
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
from IPython.display import display
def interact_exp():
## Set up the plot here.
## A "callback function", for plotting y = exp(kx).
def plot_exp(A, k):
        pass  # Fill in code here
## Generate our user interface.
interact() # Fill in code here
interact_exp();
"""
Explanation: The goal of this re-write is to set things up so that the callback function, plot_cos, does as little work as possible. The callback function should not need to re-create the figure, set up the figure title and axis labels, etc.; all those things are the same no matter what the value of $\phi$. In order to accomplish this, we organize the program as follows:
* Most of the program, including the call to interact which generates the user interface, is encapsulated inside a function named interact_cos. This is to avoid having variables accidentally "leak out" to other programs in the Juypyter notebook by accident.
* Within interact_cos, we create the figure by calling plt.figure. We remember the figure object returned by plt.figure, by assigning it to a variable. This figure creation is done outside of the callback function, plot_cos. Later, plot_cos will use the figure object that we remembered here, in order to update the figure.
* We initialize the curve that we intend to plot, using empty data. The call to plt.plot returns a list of line objects created; previously, we ignored this return value, but now we remember the line object by assigning it to a variable.
* The callback function, plot_cos, is responsible for just two things: assigning the plot data (given the user-specified $\phi$) and refreshing the display. To assign the x/y plot data, it invokes the line object's set_data method. Then it calls the function display from the IPython.display module, which tells the Jupyter notebook to re-display the figure.
As a follow-up exercise, see if you can write a program to plot $y=A \exp(k x)$. You will now need two sliders, for selecting the two parameters $A$ and $k$.
End of explanation
"""
%matplotlib inline
from scipy import *
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
from IPython.display import display
def interact_cos_compare():
## Plot parameters
xmin, xmax, nx = 0.0, 10.0, 50
ymin, ymax = -5.0, 5.0
## Set up the figure and the comparison plot of y=cos(x).
fig = plt.figure(figsize=(8,3))
x = linspace(xmin, xmax, nx)
y = cos(x)
plt.plot(x, y, color='r', linestyle='dashed', linewidth=2, label=r"$y=\cos(x)$")
line, = plt.plot([], [], color='k', linewidth=2, label=r"$y=A \cos(x + \phi)$")
## Set up the figure axes, etc.
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.legend(loc="upper left")
plt.close()
## Callback function: plot y=Acos(x+phi)
def plot_cos_compare(phi, A):
y = A*cos(x+phi)
line.set_data(x, y)
display(fig)
interact(plot_cos_compare,
phi = FloatSlider(min=-3.2, max=3.2, step=0.2, value=0.0, description="phase"),
A = FloatSlider(min=-5.0, max=5.0, step=0.2, value=1.2, description="amplitude"))
interact_cos_compare();
"""
Explanation: More advanced FloatSlider usage<a name="advanced_floatslider"></a>
The following plot illustrates several minor improvements over our previous example:
* It implements a pair of sliders, for specifying the two parameters $A$ and $\phi$ in $y = A\cos(x+\phi)$.
* The sliders are provided with the English-language labels "phase" and "amplitude", by supplying the optional description arguments to FloatSlider.
* The plot includes a static curve $y = \cos(x)$ for comparison, drawn using dashes.
* The plot includes a legend to distinguish the two curves, with $\LaTeX$ equation rendering.
End of explanation
"""
%matplotlib inline
from ipywidgets import interact, IntSlider, ToggleButtons, FloatRangeSlider
from numpy import linspace, exp, sign
import matplotlib.pyplot as plt
from IPython.display import display
def interact_polyn():
## Plot parameters
xmin, xmax, nx = -5.0, 5.0, 100
col0, col1 = "grey", "mediumblue"
## Set up the figure and the comparison plot of y=cos(x).
fig = plt.figure(figsize=(8,3))
ax = fig.add_subplot(1, 1, 1)
x = linspace(xmin, xmax, nx)
plt.plot([xmin, xmax], [0.0, 0.0], '--', color=col0) # guides to the eye
line0, = plt.plot([], [], '--', color=col0)
line1, = plt.plot([], [], color=col1, linewidth=2)
## Axis labels, etc.
plt.title(r"$y = \pm x^n$")
plt.xlim(xmin, xmax)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.close()
## Callback function: plot +/- x^n.
def plot_polyn(n, sgn, yrange):
s = 1. if sgn == '+' else -1.
y = s * x**n
line0.set_data([0.0, 0.0], yrange)
line1.set_data(x, y)
ax.set_ylim(yrange[0], yrange[1])
display(fig)
interact(plot_polyn,
n = IntSlider(min=0, max=10, value=1, description="n (power)"),
sgn = ToggleButtons(description='sign', options=['+', '-']),
yrange = FloatRangeSlider(min=-50., max=50., step=0.5, value=[-10., 10.], description='y axis range'));
interact_polyn();
"""
Explanation: Other widget types
Sliders are not the only input widgets available. Your interactive figures can include many other kinds of user interface elements. For a complete list, consult the ipywidgets documentation. The following example shows the use of the IntSlider widget (similar to FloatSlider, except that it returns integers), the ToggleButtons widget (which selects between discrete choices), and the FloatRangeSlider widget (which selects a pair of floating-point numbers).
End of explanation
"""
%matplotlib notebook
from scipy import *
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.tri import Triangulation
from IPython.display import display
def interact_hyperboloid():
## Parameters
rmax, nr = 3.0, 25
phimin, phimax, nphi = -pi, pi, 40
camz, camera_angle = 10., 30.
col1, col2 = "cornflowerblue", "sandybrown"
## Initialize numerical data for the hyperboloid.
## First, parameterize it using polar coordinates.
r0vec = linspace(0.0, 1.0, nr) # Unscaled radius
phivec = linspace(phimin, phimax, nphi)
r0, phi = meshgrid(r0vec, phivec)
r0, phi = r0.flatten(), phi.flatten()
## Note: we triangulate the polar coordinates. The order
## is preserved, so this triangulation is usable for the
## Cartesian plot later.
tri = Triangulation(r0, phi).triangles
## Set up the 3D plot.
fig = plt.figure(figsize=(12,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.set_xlim3d(-rmax, rmax)
ax.set_ylim3d(-rmax, rmax)
ax.set_zlim3d(-pi, pi)
ax.view_init(elev=camz, azim=camera_angle)
plt.close()
def plot_hyperboloid(c):
r = r0 * (rmax - c) + c
x, y = r*cos(phi), r*sin(phi)
z = sqrt(x*x + y*y - c*c + 1e-9)
## Plot the hyperboloid.
ax.clear()
ax.plot_trisurf(x, y, z, triangles=tri, linewidth=0.1, alpha=1.0, color=col1)
ax.plot_trisurf(x, y, -z, triangles=tri, linewidth=0.1, alpha=1.0, color=col2)
## Set plot axes, etc.
ax.set_title(r"$x^2 + y^2 - z^2 = c^2$")
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_zlabel(r'$z$')
display(fig)
interact(plot_hyperboloid,
c = FloatSlider(min=0., max=2., step=0.1, value=1.0,
continuous_update=False, description='Waist radius'))
interact_hyperboloid();
"""
Explanation: 3D plots
Jupyter can also display 3D plots produced by Matplotlib. To enable 3D plots, you must load the Axes3D object with the following import statement:
from mpl_toolkits.mplot3d import Axes3D
Then, you will be able to initialize a 3D figure by calling the add_subplot function in the following way:
ax = fig.add_subplot(1, 1, 1, projection='3d')
Within a 3D figure, you can plot individual curves in 3D by calling plt.plot with $x$, $y$, and $z$ arrays, rather than just $x$ and $y$ arrays.
Plotting 2D surfaces in a 3D figure is a little more complicated. The following example program shows how to plot the hyperboloid $x^2 + y^2 - z^2 = c^2$. The method is to parameterize and triangulate the surface. Then, convert the parameterization into $x$, $y$, and $z$ coordinates, and call plot_trisurf, giving it the coordinates along with the triangulation data. For more information, see the Matplotlib documentation on 3D plots.
There are a couple of other minor things to point out in this program:
* In the first line, instead of the usual %matplotlib inline rendering method, we invoke %matplotlib notebook. This is an alternative rendering method which includes a nice toolbar for the figure. Also, for 3D plots, this rendering method allows the perspective to be adjusted by clicking and dragging.
* When invoking the FloatSlider, we supply the option continuous_update=False. This tells Jupyter that we should wait for the user to finish dragging the slider before re-drawing the figure, rather than re-drawing continuously. This is recommended if computing the figure is expensive.
End of explanation
"""
%matplotlib inline
from scipy import *
import matplotlib.pyplot as plt
## Enable HTML5 video output.
from matplotlib import animation, rc
rc('animation', html='html5')
def animate_oscillation():
amplitude, gamma, omega0 = 1.0, 0.1, 1.0 # Oscillator parameters
tmin, tmax, nt = 0., 50., 200 # Animation parameters
nframes, frame_dt = 100, 40
tmin_plt, xlim = -5, 1.2 # Axis limits
circ_pos = -2
## Set up the drawing area
fig = plt.figure(figsize=(8,4))
plt.xlim(tmin_plt, tmax)
plt.ylim(-xlim, xlim)
## Draw the static parts of the figure
t = linspace(tmin, tmax, nt)
x = amplitude * exp(-gamma*t) * cos(sqrt(omega0**2 - gamma**2)*t)
plt.plot(t, x, color='blue', linewidth=2)
plt.title('Motion of a damped harmonic oscillator.')
plt.xlabel('t')
plt.ylabel('x')
## Initialize the plot objects to be animated (`line', `circ', `dash')
## with empty plot data. They'll be used by the `animate` subroutine.
line, = plt.plot([], [], color='grey', linewidth=2)
circ, = plt.plot([], [], 'o', color='red', markersize=15)
dash, = plt.plot([], [], '--', color='grey', markersize=15)
plt.close()
## Initialization function: plot the background of each frame
def init():
line.set_data([], [])
circ.set_data([], [])
dash.set_data([], [])
return line, circ, dash
## Animation function. This is called sequentially for different
## integer n, running from 0 to nframes-1 (inclusive).
def animate(n):
t = tmin + (tmax-tmin)*n/nframes
line.set_data([t, t], [-xlim, xlim])
xc = amplitude * exp(-gamma*t) * cos(sqrt(omega0**2 - gamma**2)*t)
        circ.set_data([circ_pos], [xc])  # set_data expects sequences, so wrap the scalars
dash.set_data([circ_pos, t], [xc, xc])
return line, circ, dash
# Call the animator. blit=True means only re-draw the parts that have changed.
animator = animation.FuncAnimation(fig, animate, init_func=init,
frames=nframes, interval=frame_dt, blit=True)
return animator
animate_oscillation()
"""
Explanation: Plotting HTML5 animations
You can also plot animated figures. This is accomplished using the matplotlib module's ability to render animations in HTML5 video, which can be played by the web browser. An example, showing the animation of a damped harmonic oscillator, is given below. The key steps in the program are:
* Import the animation submodule of Matplotlib. Also, load the rc function, which is for customizing Matplotlib settings; call it to enable HTML5 video output for Matplotlib animations.
* The last line of the program needs to print (output) an animation object, which will have the form of an HTML5 video. In this example, this is achieved by defining a function named animate_oscillation, which returns the desired animation object, and calling animate_oscillation on the final line.
* To create the desired animation object, we must call animation.FuncAnimation. That asks for several inputs, detailing all the information needed to define the animation: what figure to plot in, how to update each animation frame (via a callback function called the animation function), how to clear an animation frame (via another callback function, the animation initialization function), the number of animation frames, and the time interval between animation frames. So we also need to define all these things.
* The static parts of the figure will be plotted by calling functions from matplotlib.pyplot, in the usual way. As before, we retain the figure object returned by plt.figure, by assigning it to a variable. This figure object will be passed to animation.FuncAnimation to tell it where to draw the animation.
* For each curve that we want to animate, we first create a plot with empty data, and hang on to the resulting line object(s) returned by plt.plot.
* Define the animation function, which calls the set_data method of the line object(s) with the desired x/y plot data. Similarly, define the animation initialization function. These two functions are then supplied to animation.FuncAnimation, as mentioned above.
And that's it! We now have a spiffy animated figure.
End of explanation
"""
%matplotlib inline
from scipy import *
import matplotlib.pyplot as plt
## Enable HTML5 video output.
from matplotlib import animation, rc
rc('animation', html='html5')
def animate_wave():
## Set up parameters, the static parts of the figure, etc.
## Initialization function.
def init():
        pass  # Fill in code here
## Animation function.
def animate(n):
## Fill in code for plotting f versus x at time t_n.
        pass
# Call the animator.
animator = animation.FuncAnimation() # Fill in code here
return animator
animate_wave()
"""
Explanation: As an exercise, try writing code for animating a traveling wave:
$$f(x,t) = \cos(kx - \omega t).$$
You can pick arbitrary values of $k$ and $\omega$. In each animation frame, we need to plot $f$ versus $x$. Let there be $N$ animation frames, spread over one wave period $T = 2 \pi/\omega$. Hence, frame $n \in \{0, 1, \dots, N-1\}$ occurs at time
$$t_n = \frac{2\pi n}{\omega N}.$$
We therefore have to plot
$$f(x,t_n) = \cos(kx - \omega t_n), \;\;\; \mathrm{where}\;\; t_n = \frac{2\pi n}{\omega N}.$$
As an extension, try implementing widgets for interactively specifying the values of $k$ and $\omega$!
End of explanation
"""