Dataset schema: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (string, 15 classes).
Train base MLP model
# Compile and train the base MLP model
base_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluate base MLP model
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
  """Prints evaluation metrics.

  Args:
    model_desc: A description of the model.
    eval_metrics: A dictionary mapping metric names to corresponding values.
      It must contain the loss and accuracy metrics.
  """
  print...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Train MLP model with graph regularization Incorporating graph regularization into the loss term of an existing tf.keras.Model requires just a few lines of code. The base model is wrapped to create a new tf.keras subclass model, whose loss includes graph regularization. To assess the incremental benefit of graph regular...
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
    HPARAMS)

# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
    max_neighbors=HPARAMS.num_neighbors,
    multiplier=HPARAMS.graph_regularization_multip...
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluate MLP model with graph regularization
eval_results = dict(
    zip(graph_reg_model.metrics_names,
        graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
tensorflow/docs-l10n
apache-2.0
Step 2. Analyzing the Data In the first part of the project, you will take an initial look at the Boston real-estate data and present your analysis. Getting familiar with the data through exploration will help you better understand and explain your results. Since the final goal of this project is to build a model that predicts house values, we need to split the dataset into features and the target variable. - Features: 'RM', 'LSTAT', and 'PTRATIO' give us quantitative information about each data point. - Target variable: 'MEDV' is the variable we want to predict. They are stored in the variables features and prices, respectively. Coding Exercise 1: Basic Statistics Your first coding exercise is to compute descriptive statistics of the Boston housing prices. We have already imported numpy for you; you need to use this library...
# TODO 1
# Goal: compute the minimum price
minimum_price = np.min(prices)
# Goal: compute the maximum price
maximum_price = np.max(prices)
# Goal: compute the mean price
mean_price = np.mean(prices)
# Goal: compute the median price
median_price = np.median(prices)
# Goal: compute the standard deviation of prices
std_price = np.std(prices)
# Goal: print the results
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${:,.2f}"...
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 1 - Feature Observation As mentioned above, we focus on three of the values in this project: 'RM', 'LSTAT', and 'PTRATIO'. For each data point: - 'RM' is the average number of rooms per dwelling in the area; - 'LSTAT' is the percentage of homeowners in the area considered lower class (working but with meager income); - 'PTRATIO' is the ratio of students to teachers in the area's primary and secondary schools (students/teacher). Intuitively, for each of these three features, do you think increasing its value would increase or decrease the value of 'MEDV'? Give a reason for each answer. Hint: would you expect a home with an 'RM' value of 6 to be worth more or less than a home with an 'RM' value of 7? Question 1 - Answer: Increasing RM would increase MEDV; homes...
# Load matplotlib, the plotting library
import matplotlib.pyplot as plt
# Render figures at higher resolution
%config InlineBackend.figure_format = 'retina'
# Adjust the figure size
plt.figure(figsize=(16, 4))
for i, key in enumerate(['RM', 'LSTAT', 'PTRATIO']):
    plt.subplot(1, 3, i+1)
    plt.xlabel(key)
    plt.scatter(data[key], data['MEDV'], alpha=0.5)
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Coding Exercise 2: Shuffling and Splitting the Data Next, you need to split the Boston housing dataset into training and testing subsets. The data is usually shuffled during this process as well, to remove any bias arising from the ordering of the dataset. In the code below, you need to use train_test_split from sklearn.model_selection to split both features and prices into a training subset and a testing subset. - Split ratio: 80% of the data for training, 20% for testing; - Pick a value for random_state in train_test_split to ensure reproducible results;
# TODO 2
# Hint: import train_test_split
# (sklearn.cross_validation is deprecated; use sklearn.model_selection)
from sklearn.model_selection import train_test_split

def generate_train_and_test(X, y):
    """Shuffle and split the data into training and test sets"""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    return (X_train, X_test, y_train, y_test)

X_train, X_test, y_train...
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 2 - Training and Testing What is the benefit to a learning algorithm of splitting a dataset into training and testing subsets at some ratio? What is the drawback of testing on data the model has already seen, e.g. part of the training set? Hint: what problem would arise if there were no data to test the model on? Question 2 - Answer: One part of the data is used for training, to fit the parameters; the other part is used for testing, to verify whether the model is accurate. Testing on data the model has already seen makes the test results look very accurate while predictions on new data remain inaccurate, because the seen data took part in training and the fitted parameters closely follow it. Step 3. Model Evaluation Metrics In the third step of the project, you will learn the tools and techniques needed for your model to make predictions. Measuring each model's performance precisely with these tools and techniques greatly strengthens your confidence in your predictions. Coding Exercise 3: Defining a Me...
# TODO 3
# Hint: import r2_score
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """Compute and return the score of the predictions relative to the true values"""
    score = r2_score(y_true, y_predict)
    return score

# TODO 3 (optional)
# No libraries that compute the coefficient of determination may be imported
def performance_metric2(y_true, y_predict):
    """Compute and return the score of the predictions relative to the true values"""
    score =...
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 3 - Goodness of Fit Suppose a dataset has five data points and a model makes the following predictions for the target variable:

| True Value | Prediction |
| :---: | :---: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |

Do you think this model has successfully captured the variation of the target variable? If so, explain why; if not, give your reasons. Hint: run the code below and use the performance_metric function to compute the model's coefficient of determination.
# Compute the coefficient of determination of this model's predictions
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
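The optional exercise above (performance_metric2) asks for the coefficient of determination without any scoring library. A minimal sketch from the definition R^2 = 1 - SS_res/SS_tot, checked against the Question 3 table (the function body here is an illustration, not the project's graded solution):

```python
def performance_metric2(y_true, y_predict):
    """Compute R^2 from its definition, without any scoring libraries."""
    mean_true = sum(y_true) / len(y_true)
    # residual sum of squares vs. total sum of squares
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_predict))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Same inputs as Question 3; should agree with sklearn's r2_score (~0.923)
score = performance_metric2([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("R^2 = {:.3f}".format(score))
```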
Question 3 - Answer: The model successfully captures the variation of the target variable; the R^2 value is close to 1. Step 4. Analyzing Model Performance In the fourth step of the project, we look at how models perform on the training and validation sets for different parameter values. Here we focus on one particular algorithm (a decision tree with pruning, though that is not the focus of this project) and one of its parameters, 'max_depth'. Train on the full training set with different values of 'max_depth' and observe how changing this parameter affects model performance. Plotting the model's performance is very helpful for analysis, as it can reveal behavior that the raw numbers alone do not show. Learning Curves The code below outputs four figures showing the performance of a decision tree model at different maximum depths. Each curve directly shows how, as the amount of training data increases, the model's learning curve in terms of training score and validation score...
# Generate learning curves for different training set sizes and maximum depths
vs.ModelLearning(X_train, y_train)
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 4 - Learning Curves Pick one of the figures above and state its maximum depth. As the amount of training data increases, how does the training score change? The validation score? Would more training data effectively improve the model's performance? Hint: do the learning-curve scores eventually converge to a particular value? Question 4 - Answer: In figure 1 the training score stays low throughout, which indicates the model does not fit particularly well; more training data will not improve performance, and the model's complexity needs to be increased. Complexity Curves The code below outputs a figure showing the performance of a decision tree model, already trained and validated, at different maximum depths. The figure contains two curves, one for the training set and one for the validation set. As with the learning curves, the shaded regions represent the uncertainty of each curve, and both the training and validation scores use the...
# Generate complexity curves for different maximum-depth parameters
vs.ModelComplexity(X_train, y_train)
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 5 - The Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does it suffer from high bias or high variance? What about a maximum depth of 10? Which features of the figure support your conclusions? Hint: how can you tell whether a model suffers from high bias or high variance? Question 5 - Answer: At maximum depth 1 it is a bias problem: the model is not sensitive enough and underfits, so its complexity needs to be increased. At maximum depth 10 it is a variance problem: the model fits the training data very closely, but the validation score is poor, i.e. it overfits. Question 6 - Guessing the Optimal Model At what maximum depth do you think the model would best predict unseen data? What is your reasoning? Question 6 - Answer: At a maximum depth of 4 the model can best...
# TODO 4
# Hint: import 'KFold', 'DecisionTreeRegressor', 'make_scorer', 'GridSearchCV'
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def fit_model(X, y):
    """Use grid search on the input data [X, y] to find the optimal decision tree model""...
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Coding Exercise 4: Training the Optimal Model (optional) In this exercise, you will put together everything you have learned and train a model with the decision tree algorithm. To ensure you end up with an optimal model, you need to train the model with grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a type of supervised learning algorithm. In the fit_model function below, you need to: iterate over the candidate values 1-10 for the parameter 'max_depth' and build the corresponding models; compute each model's cross-validation score; return the model with the best cross-validation score.
# TODO 4 (optional)
'''
No sklearn libraries other than DecisionTreeRegressor may be used
Hint: you may need to implement the cross_val_score function below

def cross_val_score(estimator, X, y, scoring = performance_metric, cv=3):
    """ Return an array of model scores, one per cross-validation fold """
    scores = [0,0,0]
    return scores
'''

def fit_model2(X, y):
    """Use grid search on the input data [X, y] to find the optimal decision tree model"""
    # best cross-...
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
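The optional exercise above sketches a hand-rolled cross_val_score. One minimal way to fill it in is a plain k-fold loop; the fold scheme and demo data below are illustrative assumptions (and the project's own version would pass its performance_metric as scoring rather than sklearn's r2_score, which is used here only to keep the sketch self-contained):

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor

def cross_val_score(estimator, X, y, scoring=r2_score, cv=3):
    """Return one validation score per fold of a simple k-fold split."""
    X, y = np.asarray(X), np.asarray(y)
    folds = np.array_split(np.arange(len(y)), cv)
    scores = []
    for i in range(cv):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(cv) if j != i])
        estimator.fit(X[train_idx], y[train_idx])
        scores.append(scoring(y[val_idx], estimator.predict(X[val_idx])))
    return scores

# Tiny synthetic check: a depth-2 tree on a noiseless step-function target
rng = np.random.RandomState(0)
X_demo = rng.rand(90, 1)
y_demo = (X_demo[:, 0] > 0.5).astype(float)
scores = cross_val_score(DecisionTreeRegressor(max_depth=2), X_demo, y_demo, cv=3)
```

fit_model2 can then loop max_depth over 1-10, call this helper, and keep the model with the highest mean score.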
Question 9 - The Optimal Model What is the maximum depth of the optimal model? Is this answer the same as your guess in Question 6? Run the code below to fit the decision tree regressor to the training data and obtain the optimal model.
# Obtain the optimal model from the training data
optimal_reg = fit_model(X_train, y_train)

# Print the 'max_depth' parameter of the optimal model
print("Parameter 'max_depth' is {} for the optimal model.".format(optimal_reg.get_params()['max_depth']))
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 9 - Answer: The optimal value is 4, consistent with my guess. Step 6. Making Predictions Once we have trained a model on data, it can be used to make predictions on new data. In the decision tree regressor, the model has learned to ask questions about newly input data and return a prediction for the target variable. You can use these predictions to learn about data whose target variable is unknown, as long as that data was not part of the training set. Question 10 - Predicting Selling Prices Imagine you are a real-estate agent in the Boston area, hoping to use this model to help your clients price the homes they want to sell. You have collected the following information from three clients:

| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total rooms in the home | 5 rooms | 4 rooms |...
# Generate data for the three clients
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3

# Make predictions
predicted_price = optimal_reg.predict(client_data)
for i, price in enumerate(predicted_price):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 10 - Answer: From the statistics we can see that increasing RM increases MEDV: home prices are positively correlated with the number of rooms. Increasing LSTAT decreases MEDV: an area's prices are somewhat tied to income levels, and if most homeowners there have low incomes, the area's prices are likely lower. Increasing PTRATIO decreases MEDV: more students and fewer teachers suggests education resources are likely stretched thin, which affects prices. Therefore: - Client 1, predicted price $391,183.33: 5 rooms (medium RM), a medium share of low-income residents (average LSTAT), average education resources at 15 students per teacher (medium PTRATIO). - Client 2, predicted price $189,123.53: 4 rooms (low RM), a large share of low-income residents (high LSTAT), scarce education resources at 22 students per...
# TODO 5
# Hint: you may need X_test, y_test, optimal_reg, performance_metric
# Hint: you may want to refer to the code for Question 10 to make predictions
# Hint: you may want to refer to the code for Question 3 to compute R^2
y_pre = optimal_reg.predict(X_test)
r2 = r2_score(y_test, y_pre)
print("Optimal model has R^2 score {:,.2f} on test data".format(r2))
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 11 - Analyzing the Coefficient of Determination You have just computed the coefficient of determination of the optimal model on the test set. How would you evaluate this result? Question 11 - Answer: There is no absolute good or bad value for the coefficient of determination; it depends on the model and the setting. Here R^2 is 0.77, fairly close to 1, so the result is decent. It suggests the chosen features are good and explain most of the variation in house prices; we could try adding other useful features to see whether the coefficient of determination improves further. Model Robustness An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize well to new data; sometimes the learning algorithm the model uses is not suited to the structure of the data; sometimes the data itself is too noisy or too scarce for the model to predict the target variable accurately. In these cases we say the model fails to generalize. Question 12 - Mo...
# First comment out all print statements in the fit_model function
vs.PredictTrials(features, prices, fit_model, client_data)
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Question 12 - Answer: Question 13 - Applicability Briefly discuss whether the model you built can be used in the real world. Hint: answer the following questions and give reasons for your conclusions: - Is data collected in 1978, even adjusted for inflation, still applicable today? - Are the features present in the data sufficient to describe a home? - Can data collected in a big city like Boston be applied to other towns and rural areas? - Do you think it is reasonable to judge a home's value solely by the environment of its neighborhood? Question 13 - Answer: The 1978 data, adjusted for inflation, still has some reference value today. The features present in the data are not sufficient to describe a home. Data collected in a big city like Boston cannot be carried over to other towns and rural areas. Judging a home's price purely by its neighborhood environment is not reasonable. The model built here can...
# TODO 6
# Import the data

# Load the libraries needed for this project
import numpy as np
import pandas as pd
import visuals as vs  # Supplementary code

# Display plots inline in the notebook
%matplotlib inline

# 1. Import the data
data = pd.read_csv('bj_housing.csv')
area = data['Area']
Room = data['Room']
living = data['Living']
school = data['School']
year = data['Year']
Floor = da...
boston_housing/boston_housing.ipynb
jasonkitbaby/udacity-homework
apache-2.0
Create a tmpo session, and enter debug mode to get more output.
s = tmpo.Session()
s.debug = True
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
Add a sensor and token to start tracking the data for this given sensor. You only have to do this once for each sensor.
s.add('d209e2bbb35b82b83cc0de5e8b84a4ff','e16d9c9543572906a11649d92f902226')
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
Sync all available data to your hard drive. All sensors previously added will be synced.
s.sync()
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
Now you can create a pandas timeseries with all data from a given sensor.
ts = s.series('d209e2bbb35b82b83cc0de5e8b84a4ff')
print(ts)
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
When plotting the data, you'll notice that this ts contains cumulative data, and the time axis (= pandas index) contains seconds since the epoch. Not very practical.
ts.iloc[:1000].plot()
plt.show()
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
To show differential data (e.g. instantaneous power), we first have to resample this cumulative data to the interval we want to obtain. We use linear interpolation to approximate the cumulative value between two datapoints. In the example below, we resample to hourly values. Then, we take the difference between the cum...
# Resample the cumulative counter to hourly means, interpolate linearly,
# then take first differences to obtain differential data
tsmin = ts.resample('h').mean()
tsmin = tsmin.interpolate(method='linear')
tsmin = tsmin.diff()
tsmin.plot()
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
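The resample-interpolate-diff recipe above can be sketched self-contained with current pandas (older pandas returned the hourly mean directly from `ts.resample(rule='H')`; newer versions need an explicit aggregation). The counter readings below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical cumulative meter readings at irregular timestamps
idx = pd.to_datetime(['2014-10-16 00:00', '2014-10-16 00:40',
                      '2014-10-16 01:30', '2014-10-16 03:00'])
cum = pd.Series([0.0, 100.0, 250.0, 400.0], index=idx)

# Hourly mean of the counter, linear interpolation for empty bins,
# then first differences to obtain differential (power-like) values
hourly = cum.resample('h').mean().interpolate(method='linear').diff()
print(hourly)
```

The first hourly value is NaN (no previous bin to difference against); the rest are the per-hour increments of the counter.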
If we want to plot only a specific period, we can slice the data with the .loc[from:to] accessor.
tsmin.loc['20141016':'20141018'].plot()
ts.name
notebooks/Demo/Demo_tmpo.ipynb
JrtPec/opengrid
apache-2.0
Load playlists.
fplaylist = os.path.join(data_dir, '%s-playlist.pkl.gz' % dataset_name)
_all_playlists = pkl.load(gzip.open(fplaylist, 'rb'))
# _all_playlists[0]
all_playlists = []
if type(_all_playlists[0][1]) == tuple:
    for pl, u in _all_playlists:
        user = '%s_%s' % (u[0], u[1])  # user string
        all_playlists.appe...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Check for duplicated songs in the same playlist.
print('{:,} | {:,}'.format(np.sum(pl_lengths), np.sum([len(set(pl)) for pl, _ in all_playlists])))
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Load song features Load song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.
_song2feature = pkl.load(gzip.open(ffeature, 'rb'))
song2feature = dict()
for sid in sorted(_song2feature):
    song2feature[sid] = _song2feature[sid][audio_feature_indices]
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Load genres Song genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.
song2genre = pkl.load(gzip.open(fgenre, 'rb'))
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Song collection
_all_songs = sorted([(sid, int(song2feature[sid][-1])) for sid in {s for pl, _ in all_playlists for s in pl}],
                    key=lambda x: (x[1], x[0]))
print('{:,}'.format(len(_all_songs)))
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Randomise the order of songs with the same age.
song_age_dict = dict()
for sid, age in _all_songs:
    age = int(age)
    try:
        song_age_dict[age].append(sid)
    except KeyError:
        song_age_dict[age] = [sid]
all_songs = []
np.random.seed(RAND_SEED)
for age in sorted(song_age_dict.keys()):
    all_songs += [(sid, age) for sid in np.random.permutation...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Check if all songs have genre info.
print('#songs missing genre: {:,}'.format(len(all_songs) - np.sum([sid in song2genre for (sid, _) in all_songs])))
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Song popularity.
song2index = {sid: ix for ix, (sid, _) in enumerate(all_songs)}
song_pl_mat = lil_matrix((len(all_songs), len(all_playlists)), dtype=np.int8)
for j in range(len(all_playlists)):
    pl = all_playlists[j][0]
    ind = [song2index[sid] for sid in pl]
    song_pl_mat[ind, j] = 1
song_pop = song_pl_mat.tocsc().sum(axis=1)...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Deal with one outlier.
# song_pop1 = song_pop.copy()
# maxix = np.argmax(song_pop)
# song_pop1[maxix] = 0
# clipped_max_pop = np.max(song_pop1) + 10  # second_max_pop + 10
# if max_pop - clipped_max_pop > 500:
#     song_pop1[maxix] = clipped_max_pop
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Create song-playlist matrix Songs as rows, playlists as columns.
def gen_dataset(playlists, song2feature, song2genre, song2artist, artist2vec,
                train_song_set, dev_song_set=[], test_song_set=[], song2pop_train=None):
    """
    Create labelled dataset: rows are songs, columns are users.
    Input:
        - playlists: a set of playlists
        - train_song_set...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Split playlists Split playlists such that every song in test set is also in training set. ~~Split playlists (60/10/30 train/dev/test split) uniformly at random.~~ ~~Split each user's playlists (60/20/20 train/dev/test split) uniformly at random if the user has $5$ or more playlists.~~
train_playlists = []
dev_playlists = []
test_playlists = []
candidate_pl_indices = []
other_pl_indices = []
for i in range(len(all_playlists)):
    pl = all_playlists[i][0]
    if np.all(np.asarray([song2pop[sid] for sid in pl]) >= 5):
        candidate_pl_indices.append(i)
    else:
        other_pl_indices.appen...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Every song in test set should also be in training set.
print('#Songs in train + dev set: %d, #Songs total: %d' %
      (len(set([sid for pl, _ in train_playlists + dev_playlists for sid in pl])), len(all_songs)))
print('{:30s} {:,}'.format('#playlist in training set:', len(train_playlists)))
print('{:30s} {:,}'.format('#playlist in dev set:', len(dev_playlists)))
print(...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Learn artist features
song2artist = pkl.load(gzip.open(fsong2artist, 'rb'))
artist_playlist = []
for pl, _ in train_playlists + dev_playlists:
    pl_artists = [song2artist[sid] if sid in song2artist else '$UNK$' for sid in pl]
    artist_playlist.append(pl_artists)
fartist2vec_bin = os.path.join(data_dir, 'setting2/artist2vec.bin')
if o...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Hold a subset of songs in dev/test playlists Keep the first $K=1,2,3,4$ songs of each playlist in the dev and test sets.
N_SEED_K = 1
dev_playlists_obs = []
dev_playlists_held = []
test_playlists_obs = []
test_playlists_held = []
for pl, _ in dev_playlists:
    npl = len(pl)
    k = N_SEED_K
    dev_playlists_obs.append(pl[:k])
    dev_playlists_held.append(pl[k:])
for pl, _ in test_playlists:
    npl = len(pl)
    k = N_SEED_K
    ...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Hold a subset of songs in a subset of playlists, use all songs
pkl_dir2 = os.path.join(data_dir, 'setting2')
fpl2 = os.path.join(pkl_dir2, 'playlists_train_dev_test_s2_%d.pkl.gz' % N_SEED_K)
fy2 = os.path.join(pkl_dir2, 'Y_%d.pkl.gz' % N_SEED_K)
fxtrain2 = os.path.join(pkl_dir2, 'X_train_%d.pkl.gz' % N_SEED_K)
fytrain2 = os.path.join(pkl_dir2, 'Y_train_%d.pkl.gz' % N_SEED...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Use dedicated sparse matrices to represent which entries are observed in the dev and test sets.
Y_train = Y[:, :len(train_playlists)].tocsc()
Y_trndev = Y[:, :len(train_playlists) + len(dev_playlists)].tocsc()
PU_dev = lil_matrix((len(all_songs), len(dev_playlists)), dtype=np.bool)
PU_test = lil_matrix((len(all_songs), len(test_playlists)), dtype=np.bool)
num_known_dev = 0
for j in range(len(dev_playlists)):
    ...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Feature normalisation.
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_trndev_mean = np.mean(X_trndev, axis=0).reshape((1, -1))
X_trndev_std = np.std(X_trndev, axis=0).reshape((1, -1)) + 10 ** (-6)
X_trndev -= X_trnd...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Build the adjacency matrix of playlists (nodes) for setting II; playlists of the same user form a clique. Cliques in the train + dev set.
pl_users = [u for (_, u) in train_playlists + dev_playlists]
cliques_trndev = []
for u in sorted(set(pl_users)):
    clique = np.where(u == np.array(pl_users, dtype=np.object))[0]
    #if len(clique) > 1:
    cliques_trndev.append(clique)
pkl.dump(cliques_trndev, gzip.open(fclique21, 'wb'))
clqsize = [len(clq) for cl...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
Cliques in train + dev + test set.
pl_users = [u for (_, u) in train_playlists + dev_playlists + test_playlists]
clique_all = []
for u in sorted(set(pl_users)):
    clique = np.where(u == np.array(pl_users, dtype=np.object))[0]
    #if len(clique) > 1:
    clique_all.append(clique)
pkl.dump(clique_all, gzip.open(fclique22, 'wb'))
clqsize = [len(clq) f...
dchen/music/pla_split.ipynb
cdawei/digbeta
gpl-3.0
The data is stored hierarchically in an HDF5 file as a tree of keys and values. It is possible to inspect the file using standard HDF5 tools. Below we show the keys and values associated with the root of the tree. This shows that there is a "patient" group and a "record-0" group.
list(hdf.items())
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
We can focus on the patient group and access it via hdf['patient'] as if it were a python dictionary. Here are the key/value pairs in that group. Note that the patient information has been anonymized: everyone is given the same set of birthdays. This shows that this file is for Subject 2619, who is male.
list(hdf['patient'].attrs.items())
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
Now we look at how the waveform data is stored. By convention, the first record is called "record-0" and it contains the waveform data as well as the approximate time (relative to the birthdate) at which the study was done, along with technical information like the number of channels, electrode names, and sample rate.
rec = hdf['record-0']
list(rec.attrs.items())
# here is the list of data arrays stored in the record
list(rec.items())
rec['physical_dimensions'][:]
rec['prefilters'][:]
rec['signal_digital_maxs'][:]
rec['signal_digital_mins'][:]
rec['signal_physical_maxs'][:]
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
We can then grab the actual waveform data and visualize it.
signals = rec['signals']
labels = rec['signal_labels']
electrode_labels = [str(s, 'ascii') for s in labels]
numbered_electrode_labels = ["%d:%s" % (ii, str(labels[ii], 'ascii')) for ii in range(len(labels))]
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
Simple visualization of EEG (brief absence seizure)
# search identified spasms at 1836, 1871, 1901, 1939
stacklineplot.show_epoch_centered(signals, 1476, epoch_width_sec=15, chstart=0, chstop=19,
                                 fs=rec.attrs['sample_frequency'], ylabels=electrode_labels, yscale=3.0)
plt.title('Absence Seizure');
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
Annotations It was not a coincidence that I chose this time in the record. I used the annotations to focus on the portion of the record that was marked as having a seizure. You can access the clinical annotations via rec['edf_annotations'].
annot = rec['edf_annotations']
# process the bytes into text and lists of start times
antext = [s.decode('utf-8') for s in annot['texts'][:]]
starts100ns = [xx for xx in annot['starts_100ns'][:]]
# load into a pandas data frame
df = pd.DataFrame(data=antext, columns=['text'])
df['starts100ns'] = starts100ns
df['sta...
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
It is then easy to find the annotations related to seizures.
df[df.text.str.contains('sz',case=False)]
notebooks/vizAbsenceSz.ipynb
cleemesser/eeg-hdfstorage
bsd-3-clause
https://www.youtube.com/watch?v=ElmBrKyMXxs
https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb
https://github.com/ematvey/tensorflow-seq2seq-tutorials
from __future__ import division
import tensorflow as tf
from os import path, remove
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import sho...
04_time_series_prediction/.ipynb_checkpoints/23_price_history_seq2seq-cross-validation-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Step 0 - hyperparams vocab_size (all the potential words you could have, in the classification-for-translation case) and the maximum sequence length are the SAME thing. Decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but in our case there does not really seem to be such a relationship; we ...
num_units = 400  # state size
input_len = 60
target_len = 30
batch_size = 64  # 50
with_EOS = False
total_train_size = 57994
train_size = 6400
test_size = 1282
04_time_series_prediction/.ipynb_checkpoints/23_price_history_seq2seq-cross-validation-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Generate the data once.
data_path = '../data/price_history'
#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'
#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'
npz_full_train = data_path + '/price_history_03_dp_60to30_global_remove_scale_targets_normed_train.npz'
#npz_train = data_path + '/pr...
04_time_series_prediction/.ipynb_checkpoints/23_price_history_seq2seq-cross-validation-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cross Validating
def plotter(stats_list, label_text):
    _ = renderStatsListWithLabels(stats_list=stats_list, label_text=label_text)
    plt.show()
    _ = renderStatsListWithLabels(stats_list=stats_list, label_text=label_text,
                                  title='Validation Error', kk='error(valid)')
    plt.show()
#sorted(fact...
04_time_series_prediction/.ipynb_checkpoints/23_price_history_seq2seq-cross-validation-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Step 3 - training the network
model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
opt_res.best_params
num_units, activation, lamda2, keep_prob_input, learning_rate = opt_res.best_params
batch_size
npz_1280_test = '../data/price_history/price_history_03_dp_60to30_global_remove_scale_targets_norme...
04_time_series_prediction/.ipynb_checkpoints/23_price_history_seq2seq-cross-validation-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
One-way ANOVA, general setup We'll start by simulating data for a one-way ANOVA under the null hypothesis. In this simulation we'll simulate four groups, all drawn from the same underlying distribution: $N(\mu=0,\sigma=1)$.
## simulate one way ANOVA under the null hypothesis of no
## difference in group means
groupmeans = [0, 0, 0, 0]
k = len(groupmeans)  # number of groups
groupstds = [1] * k  # standard deviations equal across groups
n = 25  # sample size

# generate samples
samples = [stats.norm.rvs(loc=i, scale=j, size=n) for (i,j) ...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Illustrate sample distributions and group means We then draw the simulated data, showing the group distributions on the left and the distribution of group means on the right.
# draw a figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4))
clrs = sbn.color_palette("Set1", n_colors=k)
for i, sample in enumerate(samples):
    sbn.kdeplot(sample, color=clrs[i], ax=ax1)
ax1_ymax = ax1.get_ylim()[1]
for i, sample in enumerate(samples):
    ax2.vlines(np.mean(sample), 0, ax1_ymax, linestyle...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
F-statistic We calculate an F-statistic, which is the ratio of the "between group" variance to the "within group" variance. The calculation below is appropriate when all the group sizes are the same.
# Between-group and within-group estimates of variance
sample_group_means = [np.mean(s) for s in samples]
sample_group_var = [np.var(s, ddof=1) for s in samples]
Vbtw = n * np.var(sample_group_means, ddof=1)
Vwin = np.mean(sample_group_var)
Fstat = Vbtw/Vwin
print("Between group estimate of population variance:", Vbt...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
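With equal group sizes, this between/within ratio is exactly the F-statistic that scipy's one-way ANOVA reports, which gives a quick sanity check. A self-contained version of the calculation (the seed and data here are arbitrary, chosen only to make the check reproducible):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
k, n = 4, 25  # four groups of 25, as in the simulation above
samples = [rng.normal(loc=0, scale=1, size=n) for _ in range(k)]

group_means = [np.mean(s) for s in samples]
group_vars = [np.var(s, ddof=1) for s in samples]
Vbtw = n * np.var(group_means, ddof=1)  # between-group variance estimate
Vwin = np.mean(group_vars)              # within-group variance estimate
Fstat = Vbtw / Vwin

# Cross-check against scipy's one-way ANOVA
Fscipy, pvalue = stats.f_oneway(*samples)
print(Fstat, Fscipy)
```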
Simulating the sampling distribution of the F-test statistic To understand how surprising our observed data is, relative to what we would expect under the null hypothesis, we need to understand the sampling distribution of the F-statistic. Here we use simulation to estimate this sampling distribution.
# now carry out many such simulations to estimate the sampling distribution
# of our F-test statistic
groupmeans = [0, 0, 0, 0]
k = len(groupmeans)  # number of groups
groupstds = [1] * k  # standard deviations equal across groups
n = 25  # sample size
nsims = 1000
Fstats = []
for sim in range(nsims):
    samples =...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Draw a figure to compare our simulated sampling distribution of the F-statistic to the theoretical expectation Let's create a plot comparing our simulated sampling distribution to the theoretical sampling distribution determined analytically. As we see below they compare well.
fig, ax = plt.subplots()
sbn.distplot(Fstats, ax=ax, label="Simulation",
             kde_kws=dict(alpha=0.5, linewidth=2))

# plot the theoretical F-distribution for
# corresponding degrees of freedom
df1 = k - 1
df2 = n*k - k
x = np.linspace(0, 9, 500)
Ftheory = stats.f.pdf(x, df1, df2)
plt.plot(x, Ftheory, linestyle='d...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Determining significance thresholds To determine whether we would reject the null hypothesis for an observed value of the F-statistic, we need to calculate the appropriate cutoff value for a given significance threshold, $\alpha$. Here we consider the standard significance threshold $\alpha = 0.05$.
# draw F distribution
x = np.linspace(0, 9, 500)
Ftheory = stats.f.pdf(x, df1, df2)
plt.plot(x, Ftheory, linestyle='solid', linewidth=2, label="Theoretical\nExpectation")

# draw vertical line at threshold
threshold = stats.f.ppf(0.95, df1, df2)
plt.vlines(threshold, 0, stats.f.pdf(threshold, df1, df2), linestyle='solid'...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
Note that the F-distribution above is specific to the particular degrees of freedom. We would typically refer to that distribution as $F_{3,96}$. In this case, for $\alpha=0.05$, we would reject the null hypothesis if the observed value of the F-statistic was greater than 2.70. Simulation where $H_A$ holds As we've don...
# now simulate case where one of the group means is different
groupmeans = [0, 0, 0, 1]
k = len(groupmeans)  # number of groups
groupstds = [1] * k  # standard deviations equal across groups
n = 25  # sample size
nsims = 1000
Fstats = []
for sim in range(nsims):
    samples = [stats.norm.rvs(loc=i, scale=j, size=n) f...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
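The $F_{3,96}$ cutoff quoted above can be computed directly from the inverse CDF, a short self-contained check:

```python
from scipy import stats

df1, df2 = 3, 96  # k - 1 and n*k - k for k = 4 groups of n = 25
threshold = stats.f.ppf(0.95, df1, df2)  # 95th percentile = alpha = 0.05 cutoff
print(round(threshold, 2))
```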
We then plot the distribution of the F-statistic under this specific $H_A$ versus the distribution of F under the null hypothesis.
fig, ax = plt.subplots()
sbn.distplot(Fstats, ax=ax, label="Simulated $H_A$",
             kde_kws=dict(alpha=0.5, linewidth=2))

# plot the theoretical F-distribution for
# corresponding degrees of freedom
df1 = k - 1
df2 = n*k - k
x = np.linspace(0, 9, 500)
Ftheory = stats.f.pdf(x, df1, df2)
plt.plot(x, Ftheory, linesty...
2016-03-30-ANOVA-simulations.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
This problem can be written as a cobra.Model
from cobra import Model, Metabolite, Reaction

cone = Reaction("cone")
popsicle = Reaction("popsicle")

# constrained to a budget
budget = Metabolite("budget")
budget._constraint_sense = "L"
budget._bound = starting_budget

cone.add_metabolites({budget: cone_production_cost})
popsicle.add_metabolites({budget: popsicle_...
documentation_builder/milp.ipynb
jerkos/cobrapy
lgpl-2.1
In reality, cones and popsicles can only be sold in integer amounts. We can use the variable_kind attribute of a cobra.Reaction to enforce this.
cone.variable_kind = "integer"
popsicle.variable_kind = "integer"
m.optimize().x_dict
documentation_builder/milp.ipynb
jerkos/cobrapy
lgpl-2.1
Now the model makes both popsicles and cones. Restaurant Order To tackle the less immediately obvious problem from the following XKCD comic:
from IPython.display import Image
Image(url=r"http://imgs.xkcd.com/comics/np_complete.png")
documentation_builder/milp.ipynb
jerkos/cobrapy
lgpl-2.1
We want a solution satisfying the following constraints: $\left(\begin{matrix}2.15&2.75&3.35&3.55&4.20&5.80\end{matrix}\right) \cdot \vec v = 15.05$ $\vec v_i \ge 0$ $\vec v_i \in \mathbb{Z}$ This problem can be written as a COBRA model as well.
total_cost = Metabolite("constraint")
total_cost._bound = 15.05

costs = {"mixed_fruit": 2.15, "french_fries": 2.75, "side_salad": 3.35,
         "hot_wings": 3.55, "mozarella_sticks": 4.20, "sampler_plate": 5.80}

m = Model("appetizers")
for item, cost in costs.items():
    r = Reaction(item)
    r.add_metabolites({t...
documentation_builder/milp.ipynb
jerkos/cobrapy
lgpl-2.1
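The comic's problem is small enough to brute-force as well, which also shows why the MILP has more than one optimum: there are exactly two exact orders. A sketch (prices converted to cents to avoid floating-point drift; the search bound is a crude per-item maximum):

```python
from itertools import product

# Menu prices in cents, target $15.05
costs = {"mixed_fruit": 215, "french_fries": 275, "side_salad": 335,
         "hot_wings": 355, "mozarella_sticks": 420, "sampler_plate": 580}
target = 1505

items = list(costs)
max_qty = [target // costs[it] for it in items]  # crude per-item upper bound

solutions = []
for combo in product(*(range(m + 1) for m in max_qty)):
    if sum(q * costs[it] for q, it in zip(combo, items)) == target:
        # keep only the items actually ordered
        solutions.append({it: q for it, q in zip(items, combo) if q})

print(solutions)
```

The two solutions are seven orders of mixed fruit, or one mixed fruit plus two hot wings plus one sampler plate.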
There is another solution to this problem, which would have been obtained if we had maximized for mixed fruit instead of minimizing.
m.optimize(objective_sense="maximize").x_dict
documentation_builder/milp.ipynb
jerkos/cobrapy
lgpl-2.1
Boolean Indicators To give a COBRA-related example, we can create boolean variables as integers, which can serve as indicators for a reaction being active in a model. For a reaction flux $v$ with lower bound -1000 and upper bound 1000, we can create a binary variable $b$ with the following constraints: $b \in \{0, 1\}$ $...
import cobra.test
model = cobra.test.create_test_model("textbook")

# an indicator for pgi
pgi = model.reactions.get_by_id("PGI")

# make a boolean variable
pgi_indicator = Reaction("indicator_PGI")
pgi_indicator.lower_bound = 0
pgi_indicator.upper_bound = 1
pgi_indicator.variable_kind = "integer"

# create constraint fo...
In a model with both these reactions active, the indicators will also be active.
solution = model.optimize()
print("PGI indicator = %d" % solution.x_dict["indicator_PGI"])
print("ZWF indicator = %d" % solution.x_dict["indicator_ZWF"])
print("PGI flux = %.2f" % solution.x_dict["PGI"])
print("ZWF flux = %.2f" % solution.x_dict["G6PDH2r"])
Because these boolean indicators are in the model, additional constraints can be applied on them. For example, we can prevent both reactions from being active at the same time by adding the following constraint: $b_\text{pgi} + b_\text{zwf} = 1$
or_constraint = Metabolite("or")
or_constraint._bound = 1
zwf_indicator.add_metabolites({or_constraint: 1})
pgi_indicator.add_metabolites({or_constraint: 1})

solution = model.optimize()
print("PGI indicator = %d" % solution.x_dict["indicator_PGI"])
print("ZWF indicator = %d" % solution.x_dict["indicator_ZWF"])
print("...
Create List
# Make a list of crew members
crew_members = ['Steve', 'Stacy', 'Miller', 'Chris', 'Bill', 'Jack']
python/select_random_item_from_list.ipynb
tpin3694/tpin3694.github.io
mit
Select Random Item From List
from random import choice

# Choose a random crew member
choice(crew_members)
Batch normalization: Forward In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
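For orientation, the training-time transform can be sketched in a few lines of NumPy (the function name and eps default here are my own; the assignment's batchnorm_forward additionally tracks running statistics and returns a cache for the backward pass):

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then scale by gamma and shift by beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# After the transform, each column has roughly zero mean and unit variance.
x = 5 * np.random.randn(200, 3) + 12
out = batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
```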
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization

# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(...
assignment2/BatchNormalization.ipynb
pyemma/deeplearning
gpl-3.0
Batch Normalization: backward Now implement the backward pass for batch normalization in the function batchnorm_backward. To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing bran...
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)

bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param...
Batch Normalization: alternative backward In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the s...
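For reference, a common paper-derived simplification collapses the graph-based backward pass into a few vectorized lines. The sketch below recomputes the forward statistics instead of reading them from a cache, so it illustrates the algebra rather than being a drop-in batchnorm_backward_alt:

```python
import numpy as np

def batchnorm_backward_alt_sketch(dout, x, gamma, eps=1e-5):
    """One-shot batchnorm gradient, derived on paper instead of via the graph."""
    N = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    # The three chain-rule branches (x_hat, mean, variance) collapse into one expression.
    dx = (gamma / (N * np.sqrt(var + eps))) * (N * dout - dbeta - x_hat * dgamma)
    return dx, dgamma, dbeta
```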
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)

bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)

t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamm...
Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization. Concretely, when the flag use_batchnorm is True in the constructor, you sh...
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))

for reg in [0, 3.14]:
  print 'Running check with reg = ', reg
  model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                            reg=reg, weight_scale=5e-2, dtype=np.float64,
                            ...
Batchnorm for deep networks Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]

num_train = 1000
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, ...
Batch normalization and initialization We will now run a small experiment to study the interaction of batch normalization and weight initialization. The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training ...
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]

num_train = 1000
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4...
Simulating fragments of genomes that match priming_exp bulk OTUs
!cd $workDir; \
    SIPSim fragments \
    target_genome_index.txt \
    --fp $genomeDir \
    --fr $primerFile \
    --fld skewed-normal,5000,2000,-5 \
    --flr None,None \
    --nf 10000 \
    --np $nprocs \
    --tbl \
    2> ampFrags.log \
    > ampFrags.txt
ipynb/bac_genome/priming_exp/validation_sample/X12C.700.14.05_fracRichness-moreDif.ipynb
nick-youngblut/SIPSim
mit
Plotting fragment length distribution
%%R -i workDir
inFile = paste(c(workDir, 'ampFrags.txt'), collapse='/')
tbl = read.delim(inFile, sep='\t')
tbl %>% head(n=3)

%%R -w 950 -h 650
some.taxa = tbl$taxon_name %>% unique %>% head(n=20)
tbl.f = tbl %>% filter(taxon_name %in% some.taxa)
ggplot(tbl.f, aes(fragGC, fragLength)) +
    stat_density2d() +...
Simulating fragments of total dataset with a greater diffusion
!cd $workDir; \
    SIPSim fragments \
    $genomeAllIndex \
    --fp $genomeAllDir \
    --fr $primerFile \
    --fld skewed-normal,5000,2000,-5 \
    --flr None,None \
    --nf 10000 \
    --np $nprocs \
    2> ampFragsAll.log \
    > ampFragsAll.pkl

ampFragsAllFile = os.path.join(workDir, 'ampFragsAll.pkl')
Plotting 'true' taxon abundance distribution (from priming exp dataset)
%%R -i metaDataFile
# loading priming_exp metadata file
meta = read.delim(metaDataFile, sep='\t')
meta %>% head(n=4)

%%R -i otuTableFile
# loading priming_exp OTU table
tbl.otu.true = read.delim(otuTableFile, sep='\t') %>%
    select(OTUId, starts_with('X12C.700.14'))
tbl.otu.true %>% head(n=3)

%%R
# editing tabl...
Abundance distributions of each target taxon
%%R -w 900 -h 3500
tbl.sim.true.f = tbl.sim.true %>%
    ungroup() %>%
    filter(density >= 1.6772) %>%
    filter(density <= 1.7603) %>%
    group_by(taxon) %>%
    mutate(mean_rel_abund = mean(rel_abund)) %>%
    ungroup()

tbl.sim.true.f$taxon = reorder(tbl.sim.true.f$taxon, -tbl.sim.true.f$mean_rel_abund)

ggplo...
Tiny offset from zero here, but overall it looks pretty good.
np.std(df.y_scaled[np.logical_and(df.x_scaled < 1, df.x_scaled > 0.5)], ddof=1)*1000
examples/LED testing.ipynb
ryanpdwyer/teensyio
mit
With default analogRead settings, 12-bit resolution, we see 4 µA measured current noise on 10 mA full scale, with no effort to reduce the bandwidth of any of the components.
(4./10000)**-1
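The cell above inverts the noise-to-full-scale ratio, giving the number of distinguishable current levels. Converting that to equivalent bits (a back-of-the-envelope addition of my own, not from the original notebook) shows the measurement is close to the ADC's 12-bit resolution:

```python
import math

# 4 uA of noise on a 10 mA full-scale signal
dynamic_range = 10e-3 / 4e-6        # distinguishable levels, same as (4./10000)**-1
effective_bits = math.log2(dynamic_range)   # ~11.3 bits of usable resolution
```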
3. Graphical analysis using Matplotlib and IPython widgets As the primary use of this module is for teaching purposes, there are a number of pedagogically useful plotting methods. I will demonstrate the basic usage of only a few of them below. To see a full listing of the available plotting methods use tab-completion o...
# use tab completion to see complete list
ces_model.plot_
examples/3 Graphical analysis.ipynb
solowPy/solowPy
mit
Static example: Creating a static plot of the classic Solow diagram is done as follows.
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.plot_solow_diagram(ax)
fig.show()
Interactive example: All of the various plotting methods can be made interactive using IPython widgets. To construct an IPython widget we need the following additional import statements.
from IPython.html.widgets import fixed, interact, FloatSliderWidget
Creating an interactive plot of the classic Solow diagram is done as follows.
# wrap the static plotting code in a function
def interactive_solow_diagram(model, **params):
    """Interactive widget for the Solow diagram."""
    fig, ax = plt.subplots(1, 1, figsize=(8, 6))
    model.plot_solow_diagram(ax, Nk=1000, **params)

# define some widgets for the various parameters
eps = 1e-2
technolo...
3.1 Intensive production function Creating an interactive plot of the intensive production function is done as follows.
model.plot_intensive_output?

def interactive_intensive_output(model, **params):
    """Interactive widget for the intensive production function."""
    fig, ax = plt.subplots(1, 1, figsize=(8, 6))
    model.plot_intensive_output(ax, Nk=1000, **params)

# define some widgets for the various parameters
eps = 1e-2
ou...
3.2 Factor shares Creating an interactive plot of factor shares for capital and labor is done as follows.
def interactive_factor_shares(model, **params):
    """Interactive widget for the factor shares."""
    fig, ax = plt.subplots(1, 1, figsize=(8, 6))
    model.plot_factor_shares(ax, Nk=1000, **params)

# define some widgets for the various parameters
eps = 1e-2
technology_progress_widget = FloatSliderWidget(min=-0....
3.4 Phase Diagram Creating an interactive plot of the phase diagram for the Solow model is done as follows.
def interactive_phase_diagram(model, **params):
    """Interactive widget for the phase diagram."""
    fig, ax = plt.subplots(1, 1, figsize=(8, 6))
    model.plot_phase_diagram(ax, Nk=1000, **params)

# define some widgets for the various parameters
eps = 1e-2
technology_progress_widget = FloatSliderWidget(min=-0....
Motivating GMM: Weaknesses of k-Means Let's take a look at some of the weaknesses of k-means and think about how we might improve the cluster model. As we saw in the previous section, given simple, well-separated data, k-means finds suitable clustering results. For example, if we have simple blobs of data, the k-means ...
# Generate some data
from sklearn.datasets import make_blobs
X, y_true = make_blobs(n_samples=400, centers=4,
                       cluster_std=0.60, random_state=0)
X = X[:, ::-1]  # flip axes for better plotting

# Plot the data with K Means Labels
from sklearn.cluster import KMeans
kmeans = KMeans(4, random_state=0)...
present/mcc2/PythonDataScienceHandbook/05.12-Gaussian-Mixtures.ipynb
csaladenes/csaladenes.github.io
mit
From an intuitive standpoint, we might expect that the clustering assignment for some points is more certain than others: for example, there appears to be a very slight overlap between the two middle clusters, such that we might not have complete confidence in the cluster assignment of points between them. Unfortunately...
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None):
    labels = kmeans.fit_predict(X)

    # plot the input data
    ax = ax or plt.gca()
    ax.axis('equal')
    ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)
    ...
An important observation for k-means is that these cluster models must be circular: k-means has no built-in way of accounting for oblong or elliptical clusters. So, for example, if we take the same data and transform it, the cluster assignments end up becoming muddled:
rng = np.random.RandomState(13)
X_stretched = np.dot(X, rng.randn(2, 2))

kmeans = KMeans(n_clusters=4, random_state=0)
plot_kmeans(kmeans, X_stretched)
By eye, we recognize that these transformed clusters are non-circular, and thus circular clusters would be a poor fit. Nevertheless, k-means is not flexible enough to account for this, and tries to force-fit the data into four circular clusters. This results in a mixing of cluster assignments where the resulting circle...
# from sklearn.mixture import GMM
from sklearn.mixture import GaussianMixture as GMM
gmm = GMM(n_components=4).fit(X)
labels = gmm.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');
But because GMM contains a probabilistic model under the hood, it is also possible to find probabilistic cluster assignments—in Scikit-Learn this is done using the predict_proba method. This returns a matrix of size [n_samples, n_clusters] which measures the probability that any point belongs to the given cluster:
probs = gmm.predict_proba(X)
print(probs[:5].round(3))
We can visualize this uncertainty by, for example, making the size of each point proportional to the certainty of its prediction; looking at the following figure, we can see that it is precisely the points at the boundaries between clusters that reflect this uncertainty of cluster assignment:
size = 50 * probs.max(1) ** 2  # square emphasizes differences
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size);