Now we want to get the beam and residual maps. These are stored in FITS files, which we read into HDU objects with data and header attributes. The get_maps function works for any of the 2D images stored as FITS in the 'output_data' directory.
beamxx, beamyy, residual = [fp.get_maps(fhd_run, obsids=obsids, imtype=imtype) for imtype in ('Beam_XX','Beam_YY','uniform_Residual_I')]
scripts/source-finding.ipynb
EoRImaging/katalogss
bsd-2-clause
To convert the residual maps from Jy/pixel to Jy/beam, we need the map of pixel areas in units of beam.
pix2beam = fp.pixarea_maps(fhd_run, obsids=obsids, map_dir=kgs_out+'area_maps/')
for o in obsids:
    residual[o].data *= pix2beam[o]
Now we're ready to start source finding using the katalogss module.
# clustering parameters
eps_factor = 0.25
min_samples = 1

catalog = {}
for obsid in obsids:
    cmps = pd.DataFrame(comps[obsid])
    cmps = kg.clip_comps(cmps)
    beam = beamxx[obsid].copy()
    beam.data = np.sqrt(np.mean([beamxx[obsid].data**2, beamyy[obsid].data**2], axis=0))
    eps = eps_factor * meta[obsid]['beam_width']
    cmps = kg.cluster_sources(cmps, eps, min_samples)
    catalog[obsid] = kg.catalog_sources(cmps, meta[obsid], residual[obsid], beam)

    cat = catalog[obsid]
    cat.head(10)
    wcs = WCS(residual[obsid].header)
    plt.figure(figsize=(10, 8))
    ax = plt.subplot(111, projection=wcs)
    x, y = wcs.wcs_world2pix(cat.ra, cat.dec, 1)
    ax.scatter(x, y, s=cat.flux, edgecolor='none')
    lon, lat = ax.coords
    lon.set_axislabel('RA', fontsize=16)
    lat.set_axislabel('Dec', fontsize=16)
    ax.set_title(obsid, fontsize=20)

# pickle requires binary mode in Python 3
pickle.dump(catalog, open(kgs_out+'catalogs.p', 'wb'))
We continue analysing the fram heart disease data. First, load the data, using the name fram for the DataFrame variable. Make sure that the column and row headers are in place in the data you loaded. Check out a summary of the variables using the describe method.
# exercise 1
def get_path(filename):
    import sys
    import os
    prog_name = sys.argv[0]
    if os.path.basename(prog_name) == "__main__.py":
        # Running under TMC
        return os.path.join(os.path.dirname(prog_name), "..", "src", filename)
    else:
        return filename

# Put your solution here!
fram = pd.read_csv(get_path('src/fram.txt'), sep='\t')
fram.describe()
data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb
mohanprasath/Course-Work
gpl-3.0
Create function rescale that takes a Series as parameter. It should center the data and normalize it by dividing by 2$\sigma$, where $\sigma$ is the standard deviation. Return the rescaled Series.
# exercise 2
# Put your solution here!
def rescale(serie):
    serie = pd.Series(serie)
    mean = serie.mean()
    std = serie.std()
    s2 = serie.apply(lambda x: (x - mean) / (2 * std))
    return s2
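A quick sanity check of the 2σ scaling above, in plain Python with made-up numbers (no pandas): after rescaling, the mean is 0 and the standard deviation is exactly 0.5. The stdev call here mirrors pandas' ddof=1 default.

```python
import statistics

def rescale_plain(xs):
    # Center, then divide by twice the sample standard deviation,
    # mirroring the pandas rescale function above.
    mean = statistics.mean(xs)
    std = statistics.stdev(xs)   # ddof=1, same default as pandas .std()
    return [(x - mean) / (2 * std) for x in xs]

data = [2.0, 4.0, 6.0, 8.0]      # made-up values
scaled = rescale_plain(data)
print(statistics.mean(scaled))   # centered at 0
print(statistics.stdev(scaled))  # spread reduced to 0.5
```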
Add to the DataFrame the scaled versions of all the continuous variables (using the function rescale). Prefix the original variable name with a lowercase s to get the name of the scaled variable. For instance, AGE -> sAGE.
# exercise 3
# Put your solution here!
fram2 = fram.copy()
cont = ['AGE', 'FRW', 'SBP', 'SBP10', 'DBP', 'CHOL', 'CIG', 'CHD', 'DEATH', 'YRS_DTH']
for col in cont:
    colstr = 's' + col
    fram[colstr] = rescale(fram2[col])
Form a model that predicts systolic blood pressure using weight, gender, and cholesterol level as explanatory variables. Store the fitted model in a variable named fit.
# exercise 4
# Put your solution here!
fit = smf.ols('SBP ~ sFRW + SEX + sCHOL', data=fram).fit()
How much does the inclusion of age increase the explanatory power of the model? Which variables explain the variance of the target variable most? Your solution here: The inclusion of age increases the model's explanatory power only by a small margin; comparing R-squared, Adj. R-squared, AIC, BIC, etc. leads to the conclusion that the difference is not significant, although there is a tiny improvement. According to the coefficients, weight is the most important factor, followed by age. However, it is important to keep in mind how much these variables themselves vary and take that into account as well. Try to add to the model all the interactions with the other variables.
# exercise 6
# Put your solution here!
temp = ['sFRW', 'SEX', 'sCHOL', 'sAGE']
s = ''
for w in temp:
    for w2 in temp:
        if w != w2:
            s += w + ':' + w2 + ' + '
s = 'SBP ~ sFRW + SEX + sCHOL + sAGE + ' + s
s = s[:-3]
fit = smf.ols(s, data=fram).fit()
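The double loop above emits both a:b and b:a for every pair, which the formula parser should collapse into a single interaction term. A compact equivalent using itertools, with the same variable names, generates each pair once:

```python
from itertools import combinations

# Same variable names as in the solution above
terms = ['sFRW', 'SEX', 'sCHOL', 'sAGE']

# Each unordered pair appears exactly once
interactions = ' + '.join(a + ':' + b for a, b in combinations(terms, 2))
formula = 'SBP ~ ' + ' + '.join(terms) + ' + ' + interactions
print(formula)
```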
Then visualize the model as a function of weight for the youngest (sAGE=-1.0), middle-aged (sAGE=0.0), and oldest (sAGE=1.0) women, assuming the background variables to be centered. Remember to account for the changes in the intercept and in the regression coefficient caused by age. Visualize both the data points and the fitted lines.
# exercise 7
# Put your solution here!
p = fit.params
fram.plot.scatter("sFRW", "SBP")
int1 = p.Intercept - p["sAGE"]
int2 = p.Intercept
int3 = p.Intercept + p["sAGE"]
slope1 = p.sFRW - p["sFRW:sAGE"]
slope2 = p.sFRW
slope3 = p.sFRW + p["sFRW:sAGE"]
abline_plot(intercept=int1, slope=slope1, ax=plt.gca(), color="green", label="female youngest")
abline_plot(intercept=int2, slope=slope2, ax=plt.gca(), color="yellow", label="female middle aged")
abline_plot(intercept=int3, slope=slope3, ax=plt.gca(), color="red", label="female oldest")
plt.legend()
How does the dependence of blood pressure on weight change as a person gets older? Your solution here: The dependence weakens as a person gets older, meaning that weight becomes a less significant predictor of blood pressure at higher ages. An even more accurate model: include the background variable sCIG from the data and its interactions. Visualize the model for systolic blood pressure as a function of the most important explanatory variable. Draw separate lines for small (-1.0), average (0.0), and large (1.0) values of sCHOL. The other variables can be assumed to be at their mean values.
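The weakening dependence described above is just the interaction term at work: the effective slope of SBP on weight at a given age is the weight coefficient plus the weight-by-age interaction times sAGE. A toy illustration with hypothetical coefficients (the real values come from fit.params); a negative interaction makes the slope flatter at higher ages:

```python
# Hypothetical coefficients, chosen only to illustrate the sign logic;
# in the notebook these would be p.sFRW and p["sFRW:sAGE"].
b_frw = 1.8        # main effect of (scaled) weight
b_frw_age = -0.5   # weight-by-age interaction

def weight_slope(s_age):
    # Effective regression slope of SBP on sFRW at a given scaled age
    return b_frw + b_frw_age * s_age

print(weight_slope(-1.0))  # youngest: steepest slope
print(weight_slope(0.0))   # middle aged
print(weight_slope(1.0))   # oldest: flattest slope
```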
# exercise 8
# Put your solution here!
temp = ['sFRW', 'SEX', 'sCHOL', 'sAGE', 'sCIG']
s = ''
for w in temp:
    for w2 in temp:
        if w != w2:
            s += w + ':' + w2 + ' + '
s = 'SBP ~ sFRW + SEX + sCHOL + sAGE + sCIG + ' + s
s = s[:-3]
fit = smf.ols(s, data=fram).fit()

# We checked from fit.summary() that sFRW had the highest coefficient value
# of all the variables, so it is the most important explanatory variable.
p = fit.params
fram.plot.scatter("sFRW", "SBP")
int1 = p.Intercept - p["sCHOL"]
int2 = p.Intercept
int3 = p.Intercept + p["sCHOL"]
slope1 = p.sFRW - p["sFRW:sCHOL"]
slope2 = p.sFRW
slope3 = p.sFRW + p["sFRW:sCHOL"]
abline_plot(intercept=int1, slope=slope1, ax=plt.gca(), color="green", label="small cholesterol")
abline_plot(intercept=int2, slope=slope2, ax=plt.gca(), color="yellow", label="average cholesterol")
abline_plot(intercept=int3, slope=slope3, ax=plt.gca(), color="red", label="large cholesterol")
plt.legend()
How do the model and its accuracy look? Your solution here: The model looks reasonable; the coefficients correlate nicely with intuition. It is not entirely clear what is meant by accuracy, but comparing AIC, BIC, R-squared, etc., it is quite similar to the previous model. Logistic regression
def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))
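The logistic function above maps any real score into (0, 1), with logistic(0) = 0.5 and symmetry around that point. A quick framework-free check using math.exp instead of np.exp:

```python
import math

def logistic(x):
    # Same definition as above, with math.exp in place of np.exp
    return 1.0 / (1.0 + math.exp(-x))

print(logistic(0.0))   # -> 0.5
print(logistic(4.0))   # large positive score: close to 1
print(logistic(-4.0))  # large negative score: close to 0
```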
We will continue predicting high blood pressure by adding some continuous background variables, such as age. Recreate the model HIGH_BP ~ sFRW + SEX + SEX:sFRW presented in the introduction. Make sure that you get the same results. Use the name fit for the fitted model. Compute the error rate and store it in the variable error_rate_orig.
# exercise 9
# Put your solution here!
fram["HIGH_BP"] = (fram.SBP >= 140) | (fram.DBP >= 90)
fram.HIGH_BP = fram.HIGH_BP.astype('int', copy=False)
fit = smf.glm(formula="HIGH_BP ~ sFRW + SEX + SEX:sFRW", data=fram,
              family=sm.families.Binomial()).fit()
error_rate_orig = np.mean(((fit.fittedvalues < 0.5) & fram.HIGH_BP) |
                          ((fit.fittedvalues > 0.5) & ~fram.HIGH_BP))
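The error_rate expression above counts a prediction as wrong when the fitted probability falls on the opposite side of 0.5 from the label. The same computation in plain Python, with hypothetical fitted values:

```python
# Hypothetical fitted probabilities and 0/1 labels, only to illustrate
# the 0.5 thresholding used for error_rate_orig above.
fitted = [0.9, 0.2, 0.6, 0.4]
labels = [1,   0,   0,   1]

errors = [(p < 0.5 and y == 1) or (p > 0.5 and y == 0)
          for p, y in zip(fitted, labels)]
error_rate = sum(errors) / len(errors)
print(error_rate)  # -> 0.5 (two of the four predictions are on the wrong side)
```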
Add the sAGE variable and its interactions. Check the prediction accuracy of the model and compare it to the previous model. Store the error rate in the variable error_rate.
# exercise 10
# Put your solution here!
fit = smf.glm(formula="HIGH_BP ~ sFRW + SEX + SEX:sFRW + sAGE * SEX + sAGE:sFRW",
              data=fram, family=sm.families.Binomial()).fit()
error_rate = np.mean(((fit.fittedvalues < 0.5) & fram.HIGH_BP) |
                     ((fit.fittedvalues > 0.5) & ~fram.HIGH_BP))
print('Original error: %0.5f, New error: %0.5f' % (error_rate_orig, error_rate))
Visualize the predicted probability of high blood pressure as a function of weight. Remember to use normalized values (rescale) also for the variables that are not included in the visualization, so that sensible values (the data average) are used for them. Draw two figures with six curves altogether: young, middle-aged, and old women; and young, middle-aged, and old men. Use plt.subplots. (Plotting works in a similar fashion as in the introduction; the factor arguments need, however, to be changed as in the example about visualization of a continuous variable.)
# exercise 11
def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Put your solution here!
p = fit.params
X = np.linspace(-2, 4, 100)
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].scatter(fram.sFRW[fram.SEX == "female"], fram.HIGH_BP[fram.SEX == "female"])
ax[0].set_title("female")
ax[1].scatter(fram.sFRW[fram.SEX == "male"], fram.HIGH_BP[fram.SEX == "male"])
ax[1].set_title("male")
plt.xlabel("Weight")
plt.ylabel("Pr(Has high BP)")
ax[0].plot(X, logistic(X*p.sFRW + p.Intercept - p["sAGE"]), color='green', label="young")
ax[0].plot(X, logistic(X*p.sFRW + p.Intercept), color='yellow', label="middle aged")
ax[0].plot(X, logistic(X*p.sFRW + p.Intercept + p["sAGE"]), color='red', label="old")
ax[0].legend()
male_slope = p.sFRW + p["SEX[T.male]:sFRW"] + p["sAGE:SEX[T.male]"]
ax[1].plot(X, logistic(X*male_slope + p.Intercept - p["sAGE:SEX[T.male]"]), color='green', label="young")
ax[1].plot(X, logistic(X*male_slope + p.Intercept), color='yellow', label="middle aged")
ax[1].plot(X, logistic(X*male_slope + p.Intercept + p["sAGE:SEX[T.male]"]), color='red', label="old")
ax[1].legend()
How do the models for different ages and genders differ from each other? Your solution here: From the graphs it seems that younger females have a lower chance of high BP, and in general females are more affected by weight than males. In contrast, younger males seem to have a higher chance of high BP, but the difference between ages narrows as weight increases. Males also seem to be less affected by weight in general; comparing the graphs also shows that males have less high BP on the heavier side of the graph (blue diamonds). Create here a helper function train_test_split that gets a DataFrame as a parameter and returns a pair of DataFrames: one for training and one for testing. The function should take parameters in the following way: train_test_split(df, train_fraction=0.8). The data should be split randomly into training and testing DataFrames so that a train_fraction fraction of the data goes into the training set. Use the sample method of the DataFrame.
# exercise 12
# Put your solution here!
def train_test_split(df, train_fraction=0.8):
    train = df.sample(frac=train_fraction)
    test = df.drop(train.index)
    return train, test
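The sample-based split above has the same effect as shuffling and cutting at the requested fraction. A plain-list sketch of the same idea (hypothetical data, fixed seed), checking that the two parts partition the input:

```python
import random

def train_test_split_list(items, train_fraction=0.8, seed=0):
    # Shuffle a copy, then cut at the requested fraction -- the same
    # effect as DataFrame.sample(frac=...) plus drop() above.
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split_list(list(range(10)))
print(len(train), len(test))  # -> 8 2
print(sorted(train + test))   # every element appears exactly once
```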
Check the prediction accuracy of your model using cross-validation. Use 100-fold cross-validation with train_fraction=0.8.
# exercise 13
np.random.seed(1)
# Put your solution here!
error_model = []
error_null = []
for i in range(100):
    train, test = train_test_split(fram)
    fit = smf.glm(formula="HIGH_BP ~ sFRW + SEX + SEX:sFRW + sAGE * SEX + sAGE:sFRW",
                  data=train, family=sm.families.Binomial()).fit()
    pred = fit.predict(test, transform=True)
    error_rate = np.mean(((pred < 0.5) & (test.HIGH_BP == 1)) |
                         ((pred > 0.5) & (test.HIGH_BP == 0)))
    error_model.append(error_rate)
    error_null.append((1 - test.HIGH_BP).mean())
print("Mean model error: %0.5f, Mean null error: %0.5f" %
      (pd.Series(error_model).mean(), pd.Series(error_null).mean()))
Predicting coronary heart disease. Let us use the same data again to learn a model for the occurrence of coronary heart disease. We will use logistic regression to predict whether a patient sometimes shows symptoms of coronary heart disease. For this, add to the data a binary variable hasCHD that describes the event (CHD > 0). The binary variable hasCHD can take only two values: 0 or 1. As a sanity check, compute the mean of this variable, which tells the fraction of positive cases.
# exercise 14
# Put your solution here!
fram['hasCHD'] = fram.CHD > 0
fram.hasCHD = fram.hasCHD.astype('int', copy=False)
fram.hasCHD.mean()  # sanity check: fraction of positive cases
Next, form a logistic regression model for the variable hasCHD using the variables sCHOL, sCIG, and sFRW, and their interactions, as explanatory variables. Store the fitted model in the variable fit. Compute the prediction error rate of the model and store it in the variable error_rate.
# exercise 15
# Put your solution here!
temp = ['sFRW', 'sCHOL', 'sCIG']
s = ''
for w in temp:
    for w2 in temp:
        if w != w2:
            s += w + ':' + w2 + ' + '
s = 'hasCHD ~ sFRW + sCHOL + sCIG + ' + s
s = s[:-3]
fit = smf.glm(formula=s, data=fram, family=sm.families.Binomial()).fit()
error_rate = np.mean(((fit.fittedvalues < 0.5) & fram.hasCHD) |
                     ((fit.fittedvalues > 0.5) & ~fram.hasCHD))
Visualize the model using the most important explanatory variable on the x axis. Visualize both the points (with plt.scatter) and the logistic curve (with plt.plot).
# exercise 16
def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Put your solution here!
p = fit.params
X = np.linspace(-1, 3, 100)
plt.scatter(fram["sCIG"], fram["hasCHD"] + np.random.uniform(-0.10, 0.10, len(fram)),
            marker=".")
plt.plot(X, logistic(X*p.sCIG + p.Intercept), color="red")
Is the prediction accuracy of the model good or bad? Can we expect the model to be of practical use? Your solution here: With an error rate around 22%, the model gives reasonable generalizations about the risk factors for CHD, and its predictions are good enough that it could find some use in practice. If a person has cholesterol 200, smokes 17 cigarettes per day, and weighs 100, what is the probability that he/she sometimes shows signs of coronary heart disease? Note that the model expects normalized values. Store the normalized values in a dictionary called point. Store the probability in the variable predicted.
# exercise 17
# Put your solution here!
vals = pd.DataFrame([[200, 17, 100]], columns=['sCHOL', 'sCIG', 'sFRW'])
point = {}
for ind in vals:
    col = ind[1:]   # strip the leading 's' to get the original column name
    temp = fram[col].append(vals[ind])
    point[ind] = rescale(temp).iloc[-1]
predicted = fit.predict(point).iloc[0]
print(predicted)
Image captioning with visual attention. Given an image like the example below, our goal is to generate a caption such as "a surfer riding on a wave". Image source; license: public domain. To accomplish this, you'll use an attention-based model, which lets us see which parts of the image the model focuses on as it generates a caption. The model architecture is similar to the paper Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. This notebook is an end-to-end example. When you run the notebook, it downloads the MS-COCO dataset, caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model. In this example, you will train the model on a relatively small amount of data: the first 30,000 captions, corresponding to about 20,000 images (because there are multiple captions per image in the dataset).
import tensorflow as tf

# You'll generate plots of attention in order to see which parts of an image
# our model focuses on during captioning
import matplotlib.pyplot as plt

# Scikit-learn includes many helpful utilities
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle

import re
import numpy as np
import os
import time
import json
from glob import glob
from PIL import Image
import pickle
site/zh-cn/tutorials/text/image_captioning.ipynb
tensorflow/docs-l10n
apache-2.0
Download and prepare the MS-COCO dataset. You will use the MS-COCO dataset to train the model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically. Caution: large download ahead. The training set you'll use is a 13GB file.
# Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
    annotation_zip = tf.keras.utils.get_file('captions.zip',
                                             cache_subdir=os.path.abspath('.'),
                                             origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
                                             extract=True)
    annotation_file = os.path.dirname(annotation_zip) + '/annotations/captions_train2014.json'
    os.remove(annotation_zip)

# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
    image_zip = tf.keras.utils.get_file('train2014.zip',
                                        cache_subdir=os.path.abspath('.'),
                                        origin='http://images.cocodataset.org/zips/train2014.zip',
                                        extract=True)
    PATH = os.path.dirname(image_zip) + image_folder
    os.remove(image_zip)
else:
    PATH = os.path.abspath('.') + image_folder
Optional: limit the size of the training set. To speed up training for this tutorial, you'll use a subset of 30,000 captions and their corresponding images to train the model. Choosing to use more data would improve the quality of the generated captions.
# Read the json file
with open(annotation_file, 'r') as f:
    annotations = json.load(f)

# Store captions and image names in vectors
all_captions = []
all_img_name_vector = []

for annot in annotations['annotations']:
    caption = '<start> ' + annot['caption'] + ' <end>'
    image_id = annot['image_id']
    full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)

    all_img_name_vector.append(full_coco_image_path)
    all_captions.append(caption)

# Shuffle captions and image_names together
# Set a random state
train_captions, img_name_vector = shuffle(all_captions,
                                          all_img_name_vector,
                                          random_state=1)

# Select the first 30000 captions from the shuffled set
num_examples = 30000
train_captions = train_captions[:num_examples]
img_name_vector = img_name_vector[:num_examples]

len(train_captions), len(all_captions)
Preprocess the images using InceptionV3. Next, you will use InceptionV3 (pretrained on Imagenet) to classify each image. You will extract features from the last convolutional layer. First, you need to convert the images into the format InceptionV3 expects: resize the image to 299px by 299px, and preprocess the images using the preprocess_input method to normalize the image so that it contains pixels in the range of -1 to 1, matching the format of the images used to train InceptionV3.
def load_image(image_path):
    img = tf.io.read_file(image_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (299, 299))
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    return img, image_path
Initialize InceptionV3 and load the pretrained Imagenet weights. Now you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is 8x8x2048. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck. You forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector). After all the images are passed through the network, you pickle the dictionary and save it to disk.
image_model = tf.keras.applications.InceptionV3(include_top=False,
                                                weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output

image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
Caching the features extracted from InceptionV3. You will preprocess each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image, which would exceed the memory limitations of Colab (currently 12GB of memory). Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random-access disk I/O), but that would require more code. The caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you could: install tqdm (!pip install tqdm), import it (from tqdm import tqdm), then change the line "for img, path in image_dataset:" to "for img, path in tqdm(image_dataset):".
# Get unique images
encode_train = sorted(set(img_name_vector))

# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
    load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(16)

for img, path in image_dataset:
    batch_features = image_features_extract_model(img)
    batch_features = tf.reshape(batch_features,
                                (batch_features.shape[0], -1, batch_features.shape[3]))

    for bf, p in zip(batch_features, path):
        path_of_feature = p.numpy().decode("utf-8")
        np.save(path_of_feature, bf.numpy())
Preprocess and tokenize the captions. First, you'll tokenize the captions (for example, by splitting on spaces). This gives us a vocabulary of all of the unique words in the data (for example, "surfing", "football", and so on). Next, you'll limit the vocabulary size to the top 5,000 words (to save memory), replacing all other words with the token "UNK" (unknown). You then create word-to-index and index-to-word mappings. Finally, you pad all sequences to the same length as the longest one.
# Find the maximum length of any caption in our dataset
def calc_max_length(tensor):
    return max(len(t) for t in tensor)

# Choose the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
                                                  oov_token="<unk>",
                                                  filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
train_seqs = tokenizer.texts_to_sequences(train_captions)

tokenizer.word_index['<pad>'] = 0
tokenizer.index_word[0] = '<pad>'

# Create the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)

# Pad each vector to the max_length of the captions
# If you do not provide a max_length value, pad_sequences calculates it automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')

# Calculates the max_length, which is used to store the attention weights
max_length = calc_max_length(train_seqs)
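Independent of Keras, the pipeline above (build a frequency-ordered word index reserving 0 for padding, map captions to integer sequences, pad "post" to the longest) can be sketched in plain Python. This toy version with a hypothetical two-caption corpus captures the spirit, not the Tokenizer's exact details:

```python
from collections import Counter

# Hypothetical two-caption corpus, already wrapped in <start>/<end>
captions = ['<start> a surfer riding a wave <end>',
            '<start> a dog <end>']

# Build a frequency-ordered word index, reserving 0 for padding
counts = Counter(w for c in captions for w in c.split())
word_index = {'<pad>': 0}
for i, (w, _) in enumerate(counts.most_common(), start=1):
    word_index[w] = i

# Map captions to integer sequences, then pad 'post' to the longest
seqs = [[word_index[w] for w in c.split()] for c in captions]
max_len = max(len(s) for s in seqs)
padded = [s + [0] * (max_len - len(s)) for s in seqs]
print(padded)  # second sequence is shorter, so it ends in padding zeros
```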
Split the data into training and testing
# Create training and validation sets using an 80-20 split
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
                                                                    cap_vector,
                                                                    test_size=0.2,
                                                                    random_state=0)

len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
Create a tf.data dataset for training. Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
# Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = top_k + 1
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64

# Load the numpy files
def map_func(img_name, cap):
    img_tensor = np.load(img_name.decode('utf-8') + '.npy')
    return img_tensor, cap

dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))

# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
    map_func, [item1, item2], [tf.float32, tf.int32]),
    num_parallel_calls=tf.data.experimental.AUTOTUNE)

# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
Model. Fun fact: the decoder below is identical to the one in the example for neural machine translation with attention. The model architecture is inspired by the Show, Attend and Tell paper. In this example, you extract the features from the lower convolutional layer of InceptionV3, giving us a vector of shape (8, 8, 2048). You squash that to a shape of (64, 2048). This vector is then passed through the CNN encoder (which consists of a single fully connected layer). The RNN (here a GRU) attends over the image to predict the next word.
class BahdanauAttention(tf.keras.Model):
    def __init__(self, units):
        super(BahdanauAttention, self).__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)

        # hidden shape == (batch_size, hidden_size)
        # hidden_with_time_axis shape == (batch_size, 1, hidden_size)
        hidden_with_time_axis = tf.expand_dims(hidden, 1)

        # score shape == (batch_size, 64, hidden_size)
        score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))

        # attention_weights shape == (batch_size, 64, 1)
        # you get 1 at the last axis because you are applying score to self.V
        attention_weights = tf.nn.softmax(self.V(score), axis=1)

        # context_vector shape after sum == (batch_size, hidden_size)
        context_vector = attention_weights * features
        context_vector = tf.reduce_sum(context_vector, axis=1)

        return context_vector, attention_weights


class CNN_Encoder(tf.keras.Model):
    # Since you have already extracted the features and dumped it using pickle
    # This encoder passes those features through a Fully connected layer
    def __init__(self, embedding_dim):
        super(CNN_Encoder, self).__init__()
        # shape after fc == (batch_size, 64, embedding_dim)
        self.fc = tf.keras.layers.Dense(embedding_dim)

    def call(self, x):
        x = self.fc(x)
        x = tf.nn.relu(x)
        return x


class RNN_Decoder(tf.keras.Model):
    def __init__(self, embedding_dim, units, vocab_size):
        super(RNN_Decoder, self).__init__()
        self.units = units

        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(self.units,
                                       return_sequences=True,
                                       return_state=True,
                                       recurrent_initializer='glorot_uniform')
        self.fc1 = tf.keras.layers.Dense(self.units)
        self.fc2 = tf.keras.layers.Dense(vocab_size)

        self.attention = BahdanauAttention(self.units)

    def call(self, x, features, hidden):
        # defining attention as a separate model
        context_vector, attention_weights = self.attention(features, hidden)

        # x shape after passing through embedding == (batch_size, 1, embedding_dim)
        x = self.embedding(x)

        # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
        x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)

        # passing the concatenated vector to the GRU
        output, state = self.gru(x)

        # shape == (batch_size, max_length, hidden_size)
        x = self.fc1(output)

        # x shape == (batch_size * max_length, hidden_size)
        x = tf.reshape(x, (-1, x.shape[2]))

        # output shape == (batch_size * max_length, vocab)
        x = self.fc2(x)

        return x, state, attention_weights

    def reset_state(self, batch_size):
        return tf.zeros((batch_size, self.units))


encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)

optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')


def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    loss_ = loss_object(real, pred)

    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask

    return tf.reduce_mean(loss_)
Checkpoint
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,
                           decoder=decoder,
                           optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

start_epoch = 0
if ckpt_manager.latest_checkpoint:
    start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
    # restoring the latest checkpoint in checkpoint_path
    ckpt.restore(ckpt_manager.latest_checkpoint)
Training. You extract the features stored in the respective .npy files and then pass those features through the encoder. The encoder output, the hidden state (initialized to 0), and the decoder input (which is the start token) are passed to the decoder. The decoder returns the predictions and the decoder hidden state. The decoder hidden state is then passed back into the model, and the predictions are used to calculate the loss. Use teacher forcing to decide the next input to the decoder; teacher forcing is the technique where the target word is passed as the next input to the decoder. The final step is to calculate the gradients, apply them with the optimizer, and backpropagate.
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []

@tf.function
def train_step(img_tensor, target):
    loss = 0

    # initializing the hidden state for each batch
    # because the captions are not related from image to image
    hidden = decoder.reset_state(batch_size=target.shape[0])

    dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * target.shape[0], 1)

    with tf.GradientTape() as tape:
        features = encoder(img_tensor)

        for i in range(1, target.shape[1]):
            # passing the features through the decoder
            predictions, hidden, _ = decoder(dec_input, features, hidden)

            loss += loss_function(target[:, i], predictions)

            # using teacher forcing
            dec_input = tf.expand_dims(target[:, i], 1)

    total_loss = (loss / int(target.shape[1]))

    trainable_variables = encoder.trainable_variables + decoder.trainable_variables
    gradients = tape.gradient(loss, trainable_variables)
    optimizer.apply_gradients(zip(gradients, trainable_variables))

    return loss, total_loss


EPOCHS = 20

for epoch in range(start_epoch, EPOCHS):
    start = time.time()
    total_loss = 0

    for (batch, (img_tensor, target)) in enumerate(dataset):
        batch_loss, t_loss = train_step(img_tensor, target)
        total_loss += t_loss

        if batch % 100 == 0:
            print('Epoch {} Batch {} Loss {:.4f}'.format(
                epoch + 1, batch, batch_loss.numpy() / int(target.shape[1])))
    # storing the epoch end loss value to plot later
    loss_plot.append(total_loss / num_steps)

    if epoch % 5 == 0:
        ckpt_manager.save()

    print('Epoch {} Loss {:.6f}'.format(epoch + 1, total_loss / num_steps))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))

plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
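The teacher-forcing loop above feeds target[:, i] as the next decoder input regardless of what the model predicted; in effect, the decoder sees the target shifted right by one position. A framework-free illustration with hypothetical token ids:

```python
# Hypothetical token ids for one target caption: <start> a surfer <end>
target = [2, 5, 9, 3]

# Inputs actually fed to the decoder under teacher forcing: at step i,
# the ground-truth token target[i-1], never the model's own prediction.
decoder_inputs = [target[i - 1] for i in range(1, len(target))]
predicted_against = target[1:]   # the tokens the loss is computed against

print(decoder_inputs)     # -> [2, 5, 9]
print(predicted_against)  # -> [5, 9, 3]
```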
site/zh-cn/tutorials/text/image_captioning.ipynb
tensorflow/docs-l10n
apache-2.0
Caption! The evaluate function is similar to the training loop, except you don't use teacher forcing here. The input to the decoder at each time step is its previous prediction along with the hidden state and the encoder output. Stop predicting when the model predicts the end token. And store the attention weights for every time step.
def evaluate(image):
    attention_plot = np.zeros((max_length, attention_features_shape))

    hidden = decoder.reset_state(batch_size=1)

    temp_input = tf.expand_dims(load_image(image)[0], 0)
    img_tensor_val = image_features_extract_model(temp_input)
    img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))

    features = encoder(img_tensor_val)

    dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
    result = []

    for i in range(max_length):
        predictions, hidden, attention_weights = decoder(dec_input, features, hidden)

        attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()

        predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()
        result.append(tokenizer.index_word[predicted_id])

        if tokenizer.index_word[predicted_id] == '<end>':
            return result, attention_plot

        dec_input = tf.expand_dims([predicted_id], 0)

    attention_plot = attention_plot[:len(result), :]
    return result, attention_plot

def plot_attention(image, result, attention_plot):
    temp_image = np.array(Image.open(image))

    fig = plt.figure(figsize=(10, 10))

    len_result = len(result)
    for l in range(len_result):
        temp_att = np.resize(attention_plot[l], (8, 8))
        ax = fig.add_subplot(len_result//2, len_result//2, l+1)
        ax.set_title(result[l])
        img = ax.imshow(temp_image)
        ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())

    plt.tight_layout()
    plt.show()

# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([tokenizer.index_word[i] for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)

print ('Real Caption:', real_caption)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
site/zh-cn/tutorials/text/image_captioning.ipynb
tensorflow/docs-l10n
apache-2.0
Try it on your own images For fun, below we've provided a method you can use to caption your own images with the model we just trained. Keep in mind that it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!).
image_url = 'https://tensorflow.org/images/surf.jpg' image_extension = image_url[-4:] image_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url) result, attention_plot = evaluate(image_path) print ('Prediction Caption:', ' '.join(result)) plot_attention(image_path, result, attention_plot) # opening the image Image.open(image_path)
site/zh-cn/tutorials/text/image_captioning.ipynb
tensorflow/docs-l10n
apache-2.0
The HoloViews options system allows controlling the various attributes of a plot. While different plotting extensions like bokeh, matplotlib and plotly offer different features and the style options may differ, there are a wide array of options and concepts that are shared across the different extensions. Specifically this guide provides an overview on controlling the various aspects of a plot including titles, axes, legends and colorbars. Plots have an overall hierarchy and here we will break down the different components: Plot: Refers to the overall plot which can consist of one or more axes Titles: Using title formatting and providing custom titles Background: Setting the plot background color Font sizes: Controlling the font sizes on a plot Plot hooks: Using custom hooks to modify plots Axes: A set of axes provides scales describing the mapping between data and the space on screen Types of axes: Linear axes Logarithmic axes Datetime axes Categorical axes Axis position: Positioning and hiding axes Inverting axes: Flipping the x-/y-axes and inverting an axis Axis labels: Setting axis labels using dimensions and options Axis ranges: Controlling axes ranges using dimensions, padding and options Axis ticks: Controlling axis tick locations, labels and formatting Customizing the plot Title A plot's title is usually constructed using a formatter which takes the group and label along with the plots dimensions into consideration. The default formatter is: '{label} {group} {dimensions}' where the {label} and {group} are inherited from the objects group and label parameters and dimensions represent the key dimensions in a HoloMap/DynamicMap:
hv.HoloMap({i: hv.Curve([1, 2, 3-i], group='Group', label='Label') for i in range(3)}, 'Value')
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
The title formatter may however be overridden with an explicit title, which may include any combination of the three formatter variables:
hv.Curve([1, 2, 3]).opts(title="Custom Title")
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Background Another option which can be controlled at the level of a plot is the background color which may be set using the bgcolor option:
hv.Curve([1, 2, 3]).opts(bgcolor='lightgray')
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Font sizes Controlling the font sizes of a plot is very common so HoloViews provides a convenient option to set the fontsize. The fontsize accepts a dictionary which allows supplying fontsizes for different components of the plot from the title, to the axis labels, ticks and legends. The full list of plot components that can be customized separately include: ['xlabel', 'ylabel', 'zlabel', 'labels', 'xticks', 'yticks', 'zticks', 'ticks', 'minor_xticks', 'minor_yticks', 'minor_ticks', 'title', 'legend', 'legend_title'] Let's take a simple example customizing the title, the axis labels and the x/y-ticks separately:
hv.Curve([1, 2, 3], label='Title').opts(fontsize={'title': 16, 'labels': 14, 'xticks': 6, 'yticks': 12})
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Plot hooks HoloViews does not expose every single option a plotting extension like matplotlib or bokeh provides, therefore it is sometimes necessary to dig deeper to achieve precisely the customizations one might need. One convenient way of doing so is to use plot hooks to modify the plot object directly. The hooks are applied after HoloViews is done with the plot, allowing for detailed manipulations of the backend specific plot object. The signature of a hook has two arguments, the HoloViews plot object that is rendering the plot and the element being rendered. From there the hook can modify the objects in the plot's handles, which provides convenient access to various components of a plot or simply access the plot.state which corresponds to the plot as a whole, e.g. in this case we define colors for the x- and y-labels of the plot.
def hook(plot, element): print('plot.state: ', plot.state) print('plot.handles: ', sorted(plot.handles.keys())) plot.handles['xaxis'].axis_label_text_color = 'red' plot.handles['yaxis'].axis_label_text_color = 'blue' hv.Curve([1, 2, 3]).opts(hooks=[hook])
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Customizing axes Controlling the axis scales is one of the most common changes to make to a plot, so we will provide a quick overview of the main types of axes and then go into some more detail on how to control the axis labels, ranges, ticks and orientation. Types of axes There are four main types of axes supported across plotting backends: standard linear axes, log axes, datetime axes and categorical axes. In most cases HoloViews automatically detects the appropriate axis type to use based on the type of the data, e.g. numeric values use linear/log axes, date(time) values use datetime axes and string or other object types use categorical axes. Linear axes A linear axis is simply the default; as long as the data is numeric HoloViews will automatically use a linear axis on the plot. Log axes When the data is exponential it is often useful to use log axes, which can be enabled using the independent logx and logy options. This way both semi-log and log-log plots can be achieved:
semilogy = hv.Curve(np.logspace(0, 5), label='Semi-log y axes') loglog = hv.Curve((np.logspace(0, 5), np.logspace(0, 5)), label='Log-log axes') semilogy.opts(logy=True) + loglog.opts(logx=True, logy=True, shared_axes=False)
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Datetime axes All current plotting extensions allow plotting datetime data, if you ensure the dates array is of a valid datetime dtype.
from bokeh.sampledata.stocks import GOOG, AAPL goog_dates = np.array(GOOG['date'], dtype=np.datetime64) aapl_dates = np.array(AAPL['date'], dtype=np.datetime64) goog = hv.Curve((goog_dates, GOOG['adj_close']), 'Date', 'Stock Index', label='Google') aapl = hv.Curve((aapl_dates, AAPL['adj_close']), 'Date', 'Stock Index', label='Apple') (goog * aapl).opts(width=600, legend_position='top_left')
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Categorical axes While the handling of categorical data differs significantly between plotting extensions, the same basic concepts apply. If the data is a string type or other object type it is formatted as a string and each unique category is assigned a tick along the axis. When overlaying elements the categories are combined and overlaid appropriately. Whether an axis is categorical also depends on the Element type, e.g. a HeatMap always has two categorical axes while a Bars element always has a categorical x-axis. As a simple example let us create a set of points with categories along the x- and y-axes and render them on top of a HeatMap of the same data:
points = hv.Points([(chr(i+65), chr(j+65), i*j) for i in range(10) for j in range(10)], vdims='z') heatmap = hv.HeatMap(points) (heatmap * points).opts( opts.HeatMap(toolbar='above', tools=['hover']), opts.Points(tools=['hover'], size=dim('z')*0.3))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
As a more complex example, which does not implicitly assume categorical axes due to the element type, we will create a set of random samples indexed by categories from 'A' to 'E' using the Scatter element and overlay them. Secondly, we compute the mean and standard deviation for each category, displayed using a set of ErrorBars, and finally we overlay these two elements with a Curve representing the mean values. All these elements respect the categorical index, providing us with a view of the distribution of values in each category:
overlay = hv.NdOverlay({group: hv.Scatter(([group]*100, np.random.randn(100)*(5-i)-i)) for i, group in enumerate(['A', 'B', 'C', 'D', 'E'])}) errorbars = hv.ErrorBars([(k, el.reduce(function=np.mean), el.reduce(function=np.std)) for k, el in overlay.items()]) curve = hv.Curve(errorbars) (errorbars * overlay * curve).opts( opts.ErrorBars(line_width=5), opts.Scatter(jitter=0.2, alpha=0.5, size=6, height=400, width=600))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Categorical axes are special in that they support multi-level nesting in some cases. Currently this is only supported for certain element types (BoxWhisker, Violin and Bars) but eventually all chart-like elements will interpret multiple key dimensions as a multi-level categorical hierarchy. To demonstrate this behavior consider the BoxWhisker plot below, which supports two-level nested categories:
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)] boxes = hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)), ['Group', 'Category'], 'Value').sort() boxes.opts(width=600)
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Axis positions Axes may be hidden or moved to a different location using the xaxis and yaxis options, which accept None and 'bare' to hide the axis, as well as the positions 'top'/'bottom' (for the x-axis) and 'left'/'right' (for the y-axis).
np.random.seed(42) ys = np.random.randn(101).cumsum(axis=0) curve = hv.Curve(ys, ('x', 'x-label'), ('y', 'y-label')) (curve.relabel('No axis').opts(xaxis=None, yaxis=None) + curve.relabel('Bare axis').opts(xaxis='bare') + curve.relabel('Moved axis').opts(xaxis='top', yaxis='right'))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Inverting axes Another option to control axes is to invert the x-/y-axes using the invert_axes options, i.e. turn a vertical plot into a horizontal plot. Secondly each individual axis can be flipped left to right or upside down respectively using the invert_xaxis and invert_yaxis options.
bars = hv.Bars([('Australia', 10), ('United States', 14), ('United Kingdom', 7)], 'Country') (bars.relabel('Invert axes').opts(invert_axes=True, width=400) + bars.relabel('Invert x-axis').opts(invert_xaxis=True) + bars.relabel('Invert y-axis').opts(invert_yaxis=True)).opts(shared_axes=False)
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Axis labels Ordinarily axis labels are controlled using the dimension label, however explicit xlabel and ylabel options make it possible to override the label at the plot level. Additionally the labelled option allows specifying which axes should be labelled at all, making it possible to hide axis labels:
(curve.relabel('Dimension labels') + curve.relabel("xlabel='Custom x-label'").opts(xlabel='Custom x-label') + curve.relabel('Unlabelled').opts(labelled=[]))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Axis ranges The ranges of a plot are ordinarily controlled by computing the data range and combining it with the dimension range and soft_range but they may also be padded or explicitly overridden using xlim and ylim options. Dimension ranges data range: The data range is computed by min and max of the dimension values range: Hard override of the data range soft_range: Soft override of the data range Dimension.range Setting the range of a Dimension overrides the data ranges, i.e. here we can see that despite the fact the data extends to x=100 the axis is cut off at 90:
curve.redim(x=hv.Dimension('x', range=(-10, 90)))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Dimension.soft_range Declaring a soft_range on the other hand combines the data range and the supplied range, i.e. it will pick whichever extent is wider. Using the same example as above we can see it uses the -10 value supplied in the soft_range but also extends to 100, which is the upper bound of the actual data:
curve.redim(x=hv.Dimension('x', soft_range=(-10, 90)))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Padding Applying padding to the ranges is an easy way to ensure that the data is not obscured by the margins. The padding is specified by the fraction by which to increase auto-ranged extents to make datapoints more visible around borders. The padding considers the width and height of the plot to keep the visual extent of the padding equal. The padding values can be specified with three levels of detail: A single numeric value (e.g. padding=0.1) A tuple specifying the padding for the x/y(/z) axes respectively (e.g. padding=(0, 0.1)) A tuple of tuples specifying padding for the lower and upper bound respectively (e.g. padding=(0, (0, 0.1)))
(curve.relabel('Pad both axes').opts(padding=0.1) + curve.relabel('Pad y-axis').opts(padding=(0, 0.1)) + curve.relabel('Pad y-axis upper bound').opts(padding=(0, (0, 0.1)))).opts(shared_axes=False)
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
xlim/ylim The data ranges, dimension ranges and padding combine across plots in an overlay to ensure that all the data is contained in the viewport. In some cases it is more convenient to override the ranges with explicit xlim and ylim options which have the highest precedence and will be respected no matter what.
curve.relabel('Explicit xlim/ylim').opts(xlim=(-10, 110), ylim=(-14, 6))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Axis ticks Setting tick locations differs a little bit depending on the plotting extension: interactive backends such as bokeh or plotly dynamically update the ticks, which means fixed tick locations may not be appropriate and the formatters have to be applied in Javascript code. Nevertheless most options to control the ticking are consistent across extensions. Tick locations The number and locations of ticks can be set in three main ways: Number of ticks: Declare the number of desired ticks as an integer List of tick positions: An explicit list defining the positions at which to draw a tick List of tick positions and labels: A list of tuples of the form (position, label)
(curve.relabel('N ticks (xticks=10)').opts(xticks=10) + curve.relabel('Listed ticks (xticks=[0, 1, 2])').opts(xticks=[0, 50, 100]) + curve.relabel("Tick labels (xticks=[(0, 'zero'), ...").opts(xticks=[(0, 'zero'), (50, 'fifty'), (100, 'one hundred')]))
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Lastly each extension will accept the custom Ticker objects the library provides, which can be used to achieve layouts not usually available. Tick formatters Tick formatting works very differently in different backends, however the xformatter and yformatter options try to minimize these differences. Tick formatters may be defined in one of three formats: A classic format string such as '%d' or '%.3f', which may also contain other characters ('$%.2f') A function which will be compiled to JS using flexx (if installed) when using bokeh A bokeh.models.TickFormatter in bokeh and a matplotlib.ticker.Formatter instance in matplotlib Here is a small example demonstrating how to use the string format and function approaches:
def formatter(value): return str(value) + ' days' curve.relabel('Tick formatters').opts(xformatter=formatter, yformatter='$%.2f', width=500)
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Tick orientation Particularly when dealing with categorical axes it is often useful to control the tick rotation. This can be achieved using the xrotation and yrotation options which accept angles in degrees.
bars.opts(xrotation=45)
examples/user_guide/Customizing_Plots.ipynb
basnijholt/holoviews
bsd-3-clause
Define some helper functions
## Return a mask of the elements of v found in a: optimal for numeric arrays
def match( a, v, return_indices = False ) :
    index = np.argsort( a )
    ## Get insertion indices
    sorted_index = np.searchsorted( a, v, sorter = index )
    ## Truncate the indices by the length of a
    index = np.take( index, sorted_index, mode = "clip" )
    mask = a[ index ] == v
    ## return
    if return_indices :
        return mask, index[ mask ]
    return mask
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
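As a quick sanity check on the helper above, here is a self-contained demo (the helper is re-declared locally so the snippet runs on its own):

```python
import numpy as np

# Self-contained copy of the match() helper defined above, so the demo runs standalone
def match(a, v, return_indices=False):
    index = np.argsort(a)
    # insertion indices of v into the sorted view of a
    sorted_index = np.searchsorted(a, v, sorter=index)
    # clip to the length of a
    index = np.take(index, sorted_index, mode="clip")
    mask = a[index] == v
    if return_indices:
        return mask, index[mask]
    return mask

a = np.array([3, 1, 4, 1, 5, 9])
v = np.array([1, 2, 9, 7])
mask = match(a, v)
print(mask)  # -> [ True False  True False]
```

The mask is over `v`: it flags which elements of `v` occur somewhere in `a`, which is exactly how it is used on the author vertex arrays below.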
A handy procedure for converting an $(v_{ij})$ list into a sparse matrix.
## Convert the edgelist into sparse matrix
def to_sparse_coo( u, v, shape, dtype = np.int32 ) :
    ## Create a COOrdinate sparse matrix from the given ij-indices
    assert( len( u ) == len( v ) )
    return sp.coo_matrix( (
            np.ones( len( u ) + len( v ), dtype = dtype ), (
                np.concatenate( ( u, v ) ),
                np.concatenate( ( v, u ) ) ) ),
        shape = shape )

## Remember: when converting COO to CSR/CSC the duplicate coordinate
##  entries are summed!
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Load the DBLP dataset, making a cached copy if required.
## Create cache if necessary tick = tm.time( ) if not os.path.exists( cached_dblp_file ) : ## Load the csv file into a dataframe dblp = pd.read_csv( raw_dblp_file, # nrows = 10000, ## On-the-fly decompression compression = "gzip", header = None, quoting = 0, ## Assign column headers names = [ 'author1', 'author2', 'year', ], encoding = "utf-8" ) ## Finish tock = tm.time( ) print "Raw DBLP read in %.3f sec." % ( tock - tick, ) ## Map author names to ids from sklearn.preprocessing import LabelEncoder le = LabelEncoder( ).fit( np.concatenate( ( dblp["author1"].values, dblp["author2"].values, ) ) ) dblp_author_index = le.classes_ for col in [ 'author1', 'author2', ] : dblp[ col ] = le.transform( dblp[ col ] ) ## Cache dblp.to_pickle( cached_dblp_file ) with open( cached_author_index, "w" ) as out : for label in le.classes_ : out.write( label.strip( ).encode( "utf-8" ) + "\n" ) del dblp, le ## Finish tick = tm.time( ) print "Preprocessing took %.3f sec." % ( tick - tock, ) else : ## Load the database from pickled format dblp = pd.read_pickle( cached_dblp_file ) ## Read the dictionary of authors with open( cached_author_index, "r" ) as author_index : dblp_author_index = [ line.decode( "utf-8" ) for line in author_index ] ## Report tock = tm.time( ) print "DBLP loaded in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Now split the DBLP dataset into two periods: pre- and post-2010. First preprocess the pre-2010 data.
pre2010 = dblp[ dblp.year <= 2010 ].copy( )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Reencode the vertices of the pre-2010 graph in a less wasteful format. Use sklearn's LabelEncoder() to this end.
from sklearn.preprocessing import LabelEncoder le = LabelEncoder( ).fit( np.concatenate( ( pre2010[ "author1" ].values, pre2010[ "author2" ].values ) ) ) pre2010_values = le.classes_ ## Recode the edge data for col in [ 'author1', 'author2', ] : pre2010[ col ] = le.transform( pre2010[ col ] )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Convert the edge list data into a sparse matrix
pre2010_adj = to_sparse_coo( pre2010[ "author1" ].values, pre2010[ "author2" ].values, shape = 2 * [ len( le.classes_ ) ] ) ## Eliminate duplicates by converting them into ones pre2010_adj = pre2010_adj.tocsr( ) pre2010_adj.data = np.ones_like( pre2010_adj.data )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Find the vertices of the pre 2010 period that are in post-2010
post2010 = dblp[ dblp.year > 2010 ] common_vertices = np.intersect1d( pre2010_values, np.union1d( post2010[ "author1" ].values, post2010[ "author2" ].values ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Remove completely new vertices from post 2010 data
post2010 = post2010[ ( match( common_vertices, post2010[ "author1" ].values ) & match( common_vertices, post2010[ "author2" ].values ) ) ] del common_vertices
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Map the post 2010 vertices to pre 2010 vertices and construct the adjacency matrix.
for col in [ 'author1', 'author2', ] : post2010[ col ] = le.transform( post2010[ col ] ) ## The adjacency matrix post2010_adj = sp.coo_matrix( ( np.ones( post2010.shape[0], dtype = np.bool ), ( post2010[ "author1" ].values, post2010[ "author2" ].values ) ), shape = pre2010_adj.shape ).tolil( )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Leave only those edges in the post-2010 dataset which had not existed in the pre-2010 period.
post2010_adj[ pre2010_adj.nonzero( ) ] = 0
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Eliminate duplicate edges and transform into a CSR format
post2010_adj = post2010_adj.tocsr( ) post2010_adj.data = np.ones_like( post2010_adj.data )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Here we have two aligned symmetric adjacency matrices: one for edges that existed before 2010 and one for new edges formed after 2010.
print post2010_adj.__repr__() print pre2010_adj.__repr__( )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
All edges of the post-2010 graph are included and considered positive examples.
positive = np.append( *( c.reshape((-1, 1)) for c in post2010_adj.nonzero( ) ), axis = 1 )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Now a slightly harder part is to generate an adequate number of negative examples, so that the final training sample is reasonably balanced.
## Generate a sample of vertex pairs with no edge in both periods negative = np.random.choice( pre2010_adj.shape[ 0 ], size = ( 2 * positive.shape[0], positive.shape[1] ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
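Note that sampling vertex pairs uniformly at random, as above, may occasionally produce pairs that are in fact edges. A hedged sketch of rejection sampling that filters such collisions — the function name `sample_negative_pairs` is hypothetical, not part of this notebook:

```python
import numpy as np
import scipy.sparse as sp

def sample_negative_pairs(adj, n_samples, rng=None):
    """Sample vertex pairs uniformly, rejecting pairs that are edges in adj.

    `adj` is assumed to be a symmetric scipy.sparse adjacency matrix.
    """
    rng = rng if rng is not None else np.random
    adj = adj.tocsr()
    n = adj.shape[0]
    out = []
    while len(out) < n_samples:
        u, v = rng.randint(n), rng.randint(n)
        # keep only distinct non-adjacent pairs
        if u != v and adj[u, v] == 0:
            out.append((u, v))
    return np.array(out)

# Toy check on a 4-vertex graph with a single edge 0-1
A = sp.csr_matrix((np.ones(2), ([0, 1], [1, 0])), shape=(4, 4))
neg = sample_negative_pairs(A, 10, rng=np.random.RandomState(0))
print(neg.shape)
```

For the DBLP graph this costs one sparse lookup per candidate pair, which is cheap relative to the feature computations below.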
Compile the final training dataset.
E = np.vstack( ( positive, negative ) ) y = np.vstack( ( np.ones( ( positive.shape[ 0 ], 1 ), dtype = np.float ), np.zeros( ( negative.shape[ 0 ], 1 ), dtype = np.float ) ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
So, finally, we got ourselves a training set of edges with a 2:1 negative-to-positive ratio. Feature construction The first pair of features is the degrees of the edge endpoints: for $(i,j) \in V\times V$ $$ \phi^1_{ij} = |N_i|\,\text{ and }\,\phi^2_{ij} = |N_j|\,,$$ where $N_v$ is the set of adjacent vertices of a node $v$.
def phi_degree( edges, A ) : deg = A.sum( axis = 1 ).astype( np.float ) return np.append( deg[ edges[ :, 0 ] ], deg[ edges[ :, 1 ] ], axis = 1 )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
It turns out that at least two edge features can be constructed via a so-called "sandwich" matrix.
def __sparse_sandwich( edges, A, W = None ) : AA = A.dot( A.T ) if W is None else A.dot( W ).dot( A.T ) result = AA[ edges[:,0], edges[:,1] ] del AA ; gc.collect( 0 ) ; gc.collect( 1 ) ; gc.collect( 2 ) return result.reshape(-1, 1)
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
The next feature is the Adamic/Adar score: for $(i,j)\in V\times V$ $$ \phi^3_{ij} = \sum_{v\in N_i \cap N_j } \frac{1}{\log |N_v|}\,.$$ Another feature is the number of neighbours shared by the endpoints: $$\phi^4_{ij} = |N_i\cap N_j|\,.$$ In fact both features are special cases of the same formula: $$ (\phi_{ij}) = A W A'\,, $$ where $W$ is the weight matrix. In the case of common neighbours it is the identity matrix $I$, whereas for Adamic/Adar it is the diagonal matrix of the reciprocals of the degree logarithms: $$ W = \text{diag}\Bigl( \frac{1}{\log |N_i|} \Bigr)_{i\in V}\,.$$
def phi_adamic_adar( edges, A ) : inv_log_deg = 1.0 / np.log( A.sum( axis = 1 ).getA1( ) ) inv_log_deg[ np.isinf( inv_log_deg ) ] = 0 result = __sparse_sandwich( edges, A, sp.diags( inv_log_deg, 0 ) ) del inv_log_deg ; gc.collect( 0 ) ; gc.collect( 1 ) ; gc.collect( 2 ) return result def phi_common_neighbours( edges, A ) : return __sparse_sandwich( edges, A )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
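To see the sandwich-matrix trick in action, here is a small self-contained check on a 4-cycle (the graph and edge pairs are made up for illustration): the entry $(A A')_{ij}$ counts the neighbours shared by $i$ and $j$.

```python
import numpy as np
import scipy.sparse as sp

# 4-cycle 0-1-2-3-0 as a symmetric sparse adjacency matrix
rows = [0, 1, 1, 2, 2, 3, 3, 0]
cols = [1, 0, 2, 1, 3, 2, 0, 3]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

edges = np.array([[0, 2],   # opposite corners: share neighbours 1 and 3
                  [0, 1]])  # adjacent corners: share no neighbour

# Common neighbours via the "sandwich" A @ A', as in __sparse_sandwich above
AA = A.dot(A.T)
common = np.asarray(AA[edges[:, 0], edges[:, 1]]).ravel()
print(common)  # -> [2. 0.]
```

The Adamic/Adar variant only inserts the diagonal weight matrix between the two factors; the indexing step is identical.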
Yet another potential feature is the so-called personalized PageRank. Basically it is the same PageRank score, but with teleportation allowed only to a single node. In particular, the global PageRank is the stationary distribution of the Markov chain with this transition kernel: $$ M = \beta P + (1-\beta) \frac{1}{|V|}\mathbb{1} \mathbb{1}'\,, $$ where $\mathbb{1} = (1)_{v\in V}$ and $ P = \bigl(D_{uu}^{-1} A_{uv} + \frac{1}{|V|} 1_{\delta^+_u=0} \bigr)_{u,v\in V}$ -- the transition probability matrix $u\leadsto v$, which removes the sink (dangling) nodes by connecting them to every other vertex. The normalizing matrix is $$D=\text{diag}\bigl( \delta^+_v + 1_{\delta^+_v=0} \bigr)_{v\in V}\,,$$ where the out-degree $\delta^+_u = \sum_{j\in V} A_{uj}$. The personalized PageRank for some node $w\in V$ is only slightly different: the random walk is still forced to teleport away from a dangling vertex to any other vertex, but the general teleportation probability is altered so that the walk restarts from node $w\in V$. Let $R\in \{0,1\}^{1\times V}$ be such that $R = e_w'$. Then the personalized PageRank is the stationary distribution of a random walk with this transition matrix: $$ M_w = \beta P + (1-\beta) \frac{1}{\|R\|_0}\mathbb{1} R\,, $$ where $\|R\|_0$ denotes the number of nonzero elements in $R$. The stationary distribution is in fact the left eigenvector of the transition kernel with eigenvalue $1$: $\pi = \pi M$, $\pi\in [0,1]^{1\times V}$ and $\pi\mathbb{1} = 1$. It is computed using the power iteration method, which basically finds the eigenvector with the dominating eigenvalue. For aperiodic, irreducible stochastic matrices the Perron-Frobenius theorem states that such an eigenvector exists and that the eigenvalues of $M_w$ lie within $[-1,1]$. The basic iteration, with dangling-vertex elimination, is $$ \pi_1 = \beta \pi_0 D^{-1} A + \Bigl( \beta \pi_0 d' + (1-\beta) \pi_0 \mathbb{1} \Bigr) \frac{1}{\|R\|_0} R\,, $$ where $d = (1_{\delta^+_v=0})_{v\in V}$ is the indicator of the dangling vertices.
def __sparse_pagerank( A, beta = 0.85, one = None, niter = 1000, rel_eps = 1e-6, verbose = True ) :
    ## Initialize the iterations
    one = one if one is not None else np.ones( ( 1, A.shape[ 0 ] ), dtype = np.float )
    one = sp.csr_matrix( one / one.sum( axis = 1 ) )
    ## Get the out-degree
    out = np.asarray( A.sum( axis = 1 ).getA1( ), dtype = np.float )
    ## Obtain the mask of dangling vertices
    dangling = np.where( out == 0.0 )[ 0 ]
    ## Correct the out-degree for sink nodes
    out[ dangling ] = 1.0
    ## Just one iteration: all dangling nodes add to the importance of all vertices.
    pi = np.full( ( one.shape[0], A.shape[0] ), 1.0 / A.shape[ 0 ], dtype = np.float )
    ## If there are no dangling vertices then use simple iterations
    kiter, status = 0, -1
    ## Make a stochastic matrix
    P = sp.diags( 1.0 / out, 0, dtype = np.float ).dot( A ).tocsc( )
    while kiter < niter :
        ## Make a copy of the current ranking estimates
        pi_last = pi.copy( )
        ## Use sparse inplace operations for speed. First the random walk part
        pi *= beta ; pi *= P
        ## Now the teleportation ...
        pi += ( 1 - beta ) * one
        ## ... and the dangling vertices part
        if len( dangling ) > 0 :
            pi += beta * one.multiply( np.sum( pi_last[ :, dangling ], axis = 1 ).reshape( ( -1, 1 ) ) )
        ## Normalize
        pi /= np.sum( pi, axis = 1 )
        if np.sum( np.abs( pi - pi_last ) ) <= one.shape[0] * rel_eps * np.sum( np.abs( pi_last ) ) :
            status = 0
            break
        ## Next iteration
        kiter += 1
        if kiter % 10 == 0 :
            print kiter
    return pi, status, kiter
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
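For intuition, here is a minimal dense re-implementation of the same rooted-PageRank power iteration on a tiny 3-node chain. This is a sketch under stated assumptions, not the sparse routine above, and `personalized_pagerank` is a hypothetical helper name:

```python
import numpy as np

def personalized_pagerank(A, w, beta=0.85, niter=1000, tol=1e-10):
    """Dense power-iteration sketch of rooted PageRank: teleports restart at node w."""
    n = A.shape[0]
    out = A.sum(axis=1)
    dangling = out == 0
    # Row-stochastic transition matrix; dangling rows stay zero and the mass
    # they lose is redirected to the teleport distribution in the loop below.
    P = A / np.where(dangling, 1.0, out)[:, None]
    r = np.zeros(n)
    r[w] = 1.0                      # teleport distribution R = e_w'
    pi = np.full(n, 1.0 / n)
    for _ in range(niter):
        new = beta * (pi @ P) + (beta * pi[dangling].sum() + (1.0 - beta)) * r
        if np.abs(new - pi).sum() < tol:
            pi = new
            break
        pi = new
    return pi

# Tiny chain 0 -> 1 -> 2 (vertex 2 is dangling), rooted at vertex 0
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
pi = personalized_pagerank(A, w=0)
print(pi, pi.sum())
```

Since all teleportation mass restarts at the root, the scores decay along the chain: the root outranks its successors while the vector stays normalized.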
Now the feature extractors themselves: the global pagerank and the personalized (rooted) pagerank.
## The global pagerank score
def phi_gpr( edges, A, verbose = True ) :
    pi, s, k = __sparse_pagerank( A, one = None, verbose = verbose )
    return np.concatenate( ( pi[ :, edges[ :, 0 ] ], pi[ :, edges[ :, 1 ] ] ), axis = 0 ).T

## The personalized (rooted) pagerank score: left unimplemented for now
def phi_ppr( edges, A, verbose = True ) :
    pass
    # result = np.empty( edges.shape, dtype = np.float )
    # return __sparse_sandwich( edges, A )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Computing the features Vertex degrees
tick = tm.time() phi_12 = phi_degree( E, pre2010_adj ) tock = tm.time() print "Vertex degree computed in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Adamic/Adar metric
tick = tm.time() phi_3 = phi_adamic_adar( E, pre2010_adj ) tock = tm.time() print "Adamic/adar computed in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Common neighbours
tick = tm.time() phi_4 = phi_common_neighbours( E, pre2010_adj ) tock = tm.time() print "Common neighbours computed in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Global Pagerank
tick = tm.time() phi_56 = phi_gpr( E, pre2010_adj, verbose = False ) tock = tm.time() print "Global pagerank computed in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Rooted (personalized) pagerank
tick = tm.time()
phi_78 = phi_ppr( E, pre2010_adj, verbose = False )
tock = tm.time()
print "Personalized pagerank computed in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Compute all-pairs shortest paths
# tick = tm.time()
# phi_5 = phi_shortest_paths( E, pre2010_adj )
# tock = tm.time()
# print "Shortest paths computed in %.3f sec." % ( tock - tick, )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Collect all features into a numpy matrix
X = np.hstack( ( phi_12, phi_3, phi_4, phi_56 ) ) #, phi_78 ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Having computed all the features, let's take a subsample so that the classification runs faster.
from sklearn.cross_validation import train_test_split

X_modelling, X_main, y_modelling, y_main = train_test_split( X, y.ravel( ), train_size = 0.20 )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Import scikit-learn's grid search and cross-validation modules.
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import cross_val_score
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
We are going to analyze many classifiers at once.
classifiers = list( )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Logistic Regression

Logistic regression for binary classification solves the following problem on the training dataset $(x_i,t_i)_{i=1}^n \in \mathbb{R}^{1+p}\times \{0,1\}$: $$ \sum_{i=1}^n t_i \log \sigma( \beta'x_i ) + (1-t_i) \log \bigl( 1-\sigma( \beta'x_i ) \bigr) \to \max_{\beta} \,, $$ where $\sigma(z) = \bigl(1+e^{-z}\bigr)^{-1}$. The classification is done using the following rule: $$ \hat{t}(x) = \mathop{\text{argmax}}_{k=0,1}\, \mathbb{P}(T=k|X=x)\,, $$ where $\mathbb{P}(T=1|X=x) = \sigma(\beta'x)$.
from sklearn.linear_model import LogisticRegression

LR_grid = GridSearchCV( LogisticRegression( ), cv = 10, verbose = 1,
                        param_grid = { "C" : np.logspace( -2, 2, num = 5 ) },
                        n_jobs = -1 ).fit( X_modelling, y_modelling )

classifiers.append( ( "Logistic", LR_grid.best_estimator_ ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
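To make the logistic objective above concrete, here is gradient ascent on the Bernoulli log-likelihood by hand, on made-up one-dimensional data (the learning rate and iteration count are arbitrary; scikit-learn's solver used above does this far more robustly):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data: class 1 tends to have larger x.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
t = np.array([0.0, 0.0, 1.0, 1.0])

beta = np.zeros(1)
for _ in range(500):
    p = sigmoid(X.dot(beta))
    beta += 0.1 * X.T.dot(t - p)    # gradient of the log-likelihood is X'(t - p)

probs = sigmoid(X.dot(beta))
print(probs)    # fitted probabilities increase monotonically with x
```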
Linear and Quadratic Discriminant Analysis

It is a widely known fact that simple models sometimes beat more complicated ones in terms of accuracy. Thus let's consider LDA and QDA.
from sklearn.lda import LDA
from sklearn.qda import QDA

classifiers.append( ( "LDA", LDA( ) ) )
classifiers.append( ( "QDA", QDA( ) ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Decision tree classifiers

Let's employ the classification tree model. On its own a decision tree is a volatile classifier, meaning that the addition of new data can dramatically alter its structure, which is why we use boosted trees and random forests instead. These methods learn the intrinsic nonlinear features of the data by iteratively constructing weak classifiers that focus on different aspects of the dataset.

Random Forest
from sklearn.ensemble import RandomForestClassifier

RF_grid = GridSearchCV( RandomForestClassifier( n_estimators = 50 ), cv = 10, verbose = 1,
                        param_grid = { "max_depth" : [ 3, 5, 15, 30 ] },
                        n_jobs = -1 ).fit( X_modelling, y_modelling )

classifiers.append( ( "RandomForest", RF_grid.best_estimator_ ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Boosted tree (AdaBoost)
from sklearn.ensemble import AdaBoostClassifier

classifiers.append( ( "AdaBoost", AdaBoostClassifier( n_estimators = 50 ) ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Simple tree

One does not expect a simple tree to perform as well as the ensemble classifiers.
from sklearn.tree import DecisionTreeClassifier

tree_grid = GridSearchCV( DecisionTreeClassifier( criterion = "gini" ), cv = 10, verbose = 1,
                          param_grid = { "max_depth" : [ 3, 5, 15, 30 ] },
                          n_jobs = -1 ).fit( X_modelling, y_modelling )

classifiers.append( ( "Tree", tree_grid.best_estimator_ ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
$k$-Nearest Neighbours

Another, rather pragmatic, approach to classification follows a simple rule: if the majority of a point's $k$ nearest neighbours belong to class $c$, then this point is likely to come from class $c$ as well. Know them by their friends!
from sklearn.neighbors import KNeighborsClassifier

knn_grid = GridSearchCV( KNeighborsClassifier( ), cv = 10, verbose = 50,
                         param_grid = { "n_neighbors" : [ 2, 3, 5, 15, 30 ] },
                         n_jobs = -1 ).fit( X_modelling, y_modelling )

classifiers.append( ( "k-NN", knn_grid.best_estimator_ ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
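The majority-vote rule can also be written out directly; a toy sketch follows (illustrative data; the `KNeighborsClassifier` above uses efficient neighbour search instead of brute-force distances):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))    # Euclidean distances to x
    nearest = np.argsort(d)[:k]                      # indices of the k closest points
    return np.bincount(y_train[nearest]).argmax()    # majority class among them

X_tr = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_tr = np.array([0, 0, 1, 1])
label = knn_predict(X_tr, y_tr, np.array([0.2, 0.0]))
print(label)    # the three closest points vote 0, 0, 1 -> class 0
```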
Support Vector Machine classification
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Testing

Split the dataset into train and test sets.
X_train, X_test, y_train, y_test = train_test_split( X_main, y_main, train_size = 0.20 )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Subsample the train dataset
subsample = np.random.permutation( X_train.shape[ 0 ] ) # [ : 50000 ]
X_train_subsample, y_train_subsample = X_train[ subsample ], y_train[ subsample ]
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Run tests
results = dict( )
for name, clf in classifiers :
    tick = tm.time( )
    results[ name ] = cross_val_score( clf, X_train_subsample, y_train_subsample, n_jobs = -1, verbose = 1, cv = 10 )
    tock = tm.time( )
    print "k-fold cross-validation for %s took %.3f sec." % ( name, tock - tick, )

k_fold_frame = pd.DataFrame( results )
# k_fold_frame.append( k_fold_frame.apply( np.average ), ignore_index = True )
k_fold_frame.apply( np.average )

## Fit on the training data; the test set is kept aside for evaluation.
fitted_classifiers = [ ( name, clf.fit( X_train_subsample, y_train_subsample ) ) for name, clf in classifiers ]

print np.sum( y_test == 0, dtype = np.float ), np.sum( y_test == 1, dtype = np.float )

fig = plt.figure( figsize = ( 9, 9 ) )
ax = fig.add_subplot( 111 )
ax.set_ylabel( "True positive" ) ; ax.set_xlabel( "False positive" )
for name, clf in fitted_classifiers[ :-1 ] :
    theta = clf.predict_proba( X_test )
    ## Rank the test cases by decreasing probability of the positive class, then accumulate
    ##  the true- and false-positive counts (the response variable is 0-1 coded).
    i = np.argsort( theta[ :, 1 ], axis = 0 )[ ::-1 ]
    tp, fp = np.cumsum( y_test[ i ] == 1 ), np.cumsum( y_test[ i ] == 0 )
    ## Plot the ROC curve
    ax.plot( fp / np.sum( y_test == 0, dtype = np.float ),
             tp / np.sum( y_test == 1, dtype = np.float ), label = name )
    ## The log-loss pairs the class-0 indicator with log P(0) and the class-1 indicator with log P(1).
    A = np.tensordot( np.vstack( ( 1 - y_test, y_test ) ).T, np.log( np.clip( theta, 1e-15, 1 - 1e-15 ) ), ( 0, 0 ) )
    logloss = -np.sum( np.diag( A ) ) / y_test.shape[ 0 ]
    print name, logloss
ax.legend( )
plt.show( )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb
ivannz/study_notes
mit
Similarity metrics: When both Customers rate two movies exactly the same
# Create a dataframe manually to illustrate the examples
ratings = pd.DataFrame(columns=["customer", "movie", "rating"],
                       data=[['Ana', 'movie_1', 1], ['Ana', 'movie_2', 5],
                             ['Bob', 'movie_1', 1], ['Bob', 'movie_2', 5]])
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
# .ix is long deprecated -- use label-based .loc instead
rating_1 = ratings_matrix.loc['Ana']
rating_2 = ratings_matrix.loc['Bob']
ratings_matrix

s_intersection = similarity.calculate_distance(rating_1, rating_2, 'intersection')
s_cosine = similarity.calculate_distance(rating_1, rating_2, 'cosine')
s_pearson = similarity.calculate_distance(rating_1, rating_2, 'pearson')
s_jaccard = similarity.calculate_distance(rating_1, rating_2, 'jaccard')
print("similarity intersection: ", s_intersection)
print("similarity cosine: ", s_cosine)
print("similarity pearson: ", s_pearson)
print("similarity jaccard: ", s_jaccard)
notebooks/notebook-1-similarity-concepts.ipynb
hcorona/recsys-101-workshop
mit
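The `similarity.calculate_distance` helper used above ships with the workshop repository; the cosine, Pearson and Jaccard variants can be sketched on two rating vectors as follows (the function names here are my own, not the helper's API):

```python
import numpy as np

def cosine_sim(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pearson_sim(a, b):
    a_c, b_c = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a_c) * np.linalg.norm(b_c)
    return a_c.dot(b_c) / denom if denom > 0 else 0.0

def jaccard_sim(a, b):
    ra, rb = a > 0, b > 0                 # treat any non-zero rating as "rated"
    union = np.logical_or(ra, rb).sum()
    return np.logical_and(ra, rb).sum() / float(union) if union else 0.0

ana = np.array([1.0, 5.0])    # identical ratings, as in the example above
bob = np.array([1.0, 5.0])
print(cosine_sim(ana, bob), pearson_sim(ana, bob), jaccard_sim(ana, bob))
```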
When two customers rate the same movies very differently
# Create a dataframe manually to illustrate the examples
ratings = pd.DataFrame(columns=["customer", "movie", "rating"],
                       data=[['Ana', 'movie_1', 5], ['Ana', 'movie_2', 1],
                             ['Bob', 'movie_1', 1], ['Bob', 'movie_2', 5]])
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
rating_1 = ratings_matrix.loc['Ana']
rating_2 = ratings_matrix.loc['Bob']
ratings_matrix

s_intersection = similarity.calculate_distance(rating_1, rating_2, 'intersection')
s_cosine = similarity.calculate_distance(rating_1, rating_2, 'cosine')
s_pearson = similarity.calculate_distance(rating_1, rating_2, 'pearson')
s_jaccard = similarity.calculate_distance(rating_1, rating_2, 'jaccard')
print("similarity intersection: ", s_intersection)
print("similarity cosine: ", s_cosine)
print("similarity pearson: ", s_pearson)
print("similarity jaccard: ", s_jaccard)
notebooks/notebook-1-similarity-concepts.ipynb
hcorona/recsys-101-workshop
mit
When two customers rate different movies
# Create a dataframe manually to illustrate the examples
data = [['Ana', 'movie_1', 5], ['Ana', 'movie_2', 1], ['Bob', 'movie_3', 5], ['Bob', 'movie_4', 5]]
ratings = pd.DataFrame(columns=["customer", "movie", "rating"], data=data)
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
rating_1 = ratings_matrix.loc['Ana']
rating_2 = ratings_matrix.loc['Bob']
ratings_matrix

s_intersection = similarity.calculate_distance(rating_1, rating_2, 'intersection')
s_cosine = similarity.calculate_distance(rating_1, rating_2, 'cosine')
s_pearson = similarity.calculate_distance(rating_1, rating_2, 'pearson')
s_jaccard = similarity.calculate_distance(rating_1, rating_2, 'jaccard')
print("similarity intersection: ", s_intersection)
print("similarity cosine: ", s_cosine)
print("similarity pearson: ", s_pearson)
print("similarity jaccard: ", s_jaccard)
notebooks/notebook-1-similarity-concepts.ipynb
hcorona/recsys-101-workshop
mit
Positive people vs. Negative people
# Create a dataframe manually to illustrate the examples
data = [['Ana', 'movie_1', 5], ['Ana', 'movie_2', 4], ['Bob', 'movie_1', 3], ['Bob', 'movie_2', 2]]
ratings = pd.DataFrame(columns=["customer", "movie", "rating"], data=data)
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
rating_1 = ratings_matrix.loc['Ana']
rating_2 = ratings_matrix.loc['Bob']
ratings_matrix

s_intersection = similarity.calculate_distance(rating_1, rating_2, 'intersection')
s_cosine = similarity.calculate_distance(rating_1, rating_2, 'cosine')
s_pearson = similarity.calculate_distance(rating_1, rating_2, 'pearson')
s_jaccard = similarity.calculate_distance(rating_1, rating_2, 'jaccard')
print("similarity intersection: ", s_intersection)
print("similarity cosine: ", s_cosine)
print("similarity pearson: ", s_pearson)
print("similarity jaccard: ", s_jaccard)
notebooks/notebook-1-similarity-concepts.ipynb
hcorona/recsys-101-workshop
mit
People who rate a lot of movies vs. people who don't rate a lot of movies
# Create a dataframe manually to illustrate the examples
data = [['Ana', 'movie_1', 5], ['Ana', 'movie_2', 4], ['Ana', 'movie_3', 4],
        ['Bob', 'movie_1', 3], ['Bob', 'movie_2', 2]]
ratings = pd.DataFrame(columns=["customer", "movie", "rating"], data=data)
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
rating_1 = ratings_matrix.loc['Ana']
rating_2 = ratings_matrix.loc['Bob']
ratings_matrix

s_intersection = similarity.calculate_distance(rating_1, rating_2, 'intersection')
s_cosine = similarity.calculate_distance(rating_1, rating_2, 'cosine')
s_pearson = similarity.calculate_distance(rating_1, rating_2, 'pearson')
s_jaccard = similarity.calculate_distance(rating_1, rating_2, 'jaccard')
print("similarity intersection: ", s_intersection)
print("similarity cosine: ", s_cosine)
print("similarity pearson: ", s_pearson)
print("similarity jaccard: ", s_jaccard)
notebooks/notebook-1-similarity-concepts.ipynb
hcorona/recsys-101-workshop
mit
Map Coloring

This short notebook shows how map coloring can be formulated as a constraint satisfaction problem. In map coloring, the goal is to color the states shown in a given map such that no two bordering states share the same color. As an example, consider the map of Australia that is shown below. Australia has seven different states:
* Western Australia,
* Northern Territory,
* South Australia,
* Queensland,
* New South Wales,
* Victoria, and
* Tasmania.

As Tasmania is an island that does not share a border with any of the other states, it can be colored with any given color.

<img src="australia.png" alt="map of australia" width="600">

The function $\texttt{map_coloring_csp}()$ returns a CSP that encodes the map coloring problem for Australia.
def map_coloring_csp():
    Variables   = [ 'WA', 'NSW', 'V', 'NT', 'SA', 'Q' ]
    Values      = { 'red', 'blue', 'green' }
    Constraints = { 'WA != NT', 'WA != SA',
                    'NT != SA', 'NT != Q',
                    'SA != Q', 'SA != NSW', 'SA != V',
                    'Q != NSW',
                    'NSW != V'
                  }
    return Variables, Values, Constraints

csp = map_coloring_csp()
csp
Python/2 Constraint Solver/Map-Coloring.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
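The notebook goes on to feed this CSP to a proper constraint solver; just to show that the encoding is solvable, a naive brute-force search over all $3^6 = 729$ assignments (evaluating the constraint strings with `eval`) suffices here:

```python
from itertools import product

def brute_force_solve(Variables, Values, Constraints):
    # Try every total assignment; return the first one satisfying all constraints.
    for combo in product(Values, repeat=len(Variables)):
        assignment = dict(zip(Variables, combo))
        if all(eval(c, {}, assignment) for c in Constraints):
            return assignment
    return None

Variables = ['WA', 'NSW', 'V', 'NT', 'SA', 'Q']
Values = {'red', 'blue', 'green'}
Constraints = {'WA != NT', 'WA != SA', 'NT != SA', 'NT != Q',
               'SA != Q', 'SA != NSW', 'SA != V', 'Q != NSW', 'NSW != V'}

solution = brute_force_solve(Variables, Values, Constraints)
print(solution)    # one valid coloring; SA always ends up with a color of its own
```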