markdown | code | path | repo_name | license
|---|---|---|---|---|
Now we want to get the beam and residual maps. These are stored in FITS files, and we read them into HDU objects with data and header attributes.
The get_maps function works for any of the 2D images stored as FITS in the 'output_data' directory. | beamxx, beamyy, residual = [fp.get_maps(fhd_run, obsids=obsids, imtype=imtype)
for imtype in ('Beam_XX','Beam_YY','uniform_Residual_I')] | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
To convert the residual maps from Jy/pixel to Jy/beam, we need the map of pixel areas in units of beam. | pix2beam = fp.pixarea_maps(fhd_run, obsids=obsids, map_dir=kgs_out+'area_maps/')
for o in obsids: residual[o].data*=pix2beam[o] | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
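The conversion itself is just an element-wise multiplication of the two maps; a minimal numpy sketch with made-up values (stand-ins for residual[o].data and pix2beam[o]):

```python
import numpy as np

# Hypothetical 2x2 residual map in Jy/pixel and matching map of
# pixel areas expressed in units of the beam.
residual_jy_per_pix = np.array([[0.2, 0.4],
                                [0.1, 0.3]])
pix2beam = np.array([[0.5, 0.5],
                     [0.5, 0.5]])

# The conversion is a plain element-wise product,
# as in residual[o].data *= pix2beam[o] above.
residual_jy_per_beam = residual_jy_per_pix * pix2beam
```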
Now we're ready to start source finding using the katalogss module. | #clustering parameters
eps_factor = 0.25
min_samples = 1
catalog={}
for obsid in obsids:
cmps = pd.DataFrame(comps[obsid])
cmps = kg.clip_comps(cmps)
beam = beamxx[obsid].copy()
beam.data = np.sqrt(np.mean([beamxx[obsid].data**2, beamyy[obsid].data**2],axis=0))
eps = eps_factor * meta[obsid]['be... | scripts/source-finding.ipynb | EoRImaging/katalogss | bsd-2-clause |
We continue analysing the Framingham heart disease data.
First load the data, using the name fram for the DataFrame variable. Make sure that the column and row headers are in place in the data you loaded. Check out the summary of the variables using the describe method. | # exercise 1
def get_path(filename):
import sys
import os
prog_name = sys.argv[0]
if os.path.basename(prog_name) == "__main__.py": # Running under TMC
return os.path.join(os.path.dirname(prog_name), "..", "src", filename)
else:
return filename
# Put your solution here!
fram = ... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Create a function rescale that takes a Series as a parameter. It should center the data and normalize it by dividing
by 2$\sigma$, where $\sigma$ is the standard deviation. Return the rescaled Series. | # exercise 2
# Put your solution here!
def rescale(serie):
serie = pd.Series(serie)
mean = serie.mean()
std = serie.std()
s2 = serie.apply(lambda x: (x-mean)/(2*std))
return s2 | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
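The apply-with-lambda version works, but pandas arithmetic is already vectorized, so the whole transformation is one expression. An equivalent sketch (renamed here to avoid clashing with the exercise solution):

```python
import pandas as pd

def rescale_vectorized(series):
    """Center a Series and divide by two standard deviations, as in exercise 2."""
    series = pd.Series(series)
    return (series - series.mean()) / (2 * series.std())

# A rescaled column has mean 0 and standard deviation 0.5 by construction.
scaled = rescale_vectorized([1.0, 2.0, 3.0, 4.0])
```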
Add to the DataFrame the scaled versions of all the continuous variables (using the rescale function). Prepend a lowercase letter 's' to the original variable name to get the name of the scaled variable. For instance, AGE -> sAGE. | # exercise 3
# Put your solution here!
fram2 = fram.copy()
cont = ['AGE', 'FRW', 'SBP', 'SBP10', 'DBP', 'CHOL', 'CIG', 'CHD', 'DEATH', 'YRS_DTH']
for col in cont:
colstr = 's' + col
fram[colstr] = rescale(fram2[col])
| data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Form a model that predicts systolic blood pressure using weight, gender, and cholesterol level as explanatory variables. Store the fitted model in variable named fit. | # exercise 4
# Put your solution here!
fit = smf.ols('SBP ~ sFRW + SEX + sCHOL', data=fram).fit() | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
How much does the inclusion of age increase the explanatory power of the model? Which variables explain the variance of the target variable most?
Your solution here
The inclusion of age increases the model's explanatory power by only a small margin; we can compare the R-squared, adjusted R-squared, AIC, BIC, and similar statistics. | # exercise 6
# Put your solution here!
temp = ['sFRW', 'SEX', 'sCHOL', 'sAGE']
s = ''
for w in temp:
for w2 in temp:
if w != w2:
s += w+':'+w2+' '
s+='+ '
s = 'SBP ~ sFRW + SEX + sCHOL + sAGE + ' + s
s = s[0:len(s)-3]
fit = smf.ols(s, data=fram).fit() | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
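The nested loop above emits every interaction twice (both sFRW:SEX and SEX:sFRW); patsy collapses the duplicates, but the formula can be built without them using itertools.combinations. A sketch using the same variable names (the helper name is made up here):

```python
from itertools import combinations

def formula_with_interactions(target, terms):
    """R-style formula with all main effects plus each pairwise interaction once."""
    main = ' + '.join(terms)
    pairs = ' + '.join('%s:%s' % (a, b) for a, b in combinations(terms, 2))
    return '%s ~ %s + %s' % (target, main, pairs)

formula = formula_with_interactions('SBP', ['sFRW', 'SEX', 'sCHOL', 'sAGE'])
```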
Then visualize the model as the function of weight for the youngest (sAGE=-1.0), middle aged (sAGE=0.0), and oldest (sAGE=1.0) women while assuming the background variables to be centered. Remember to consider the changes in the intercept and in the regression coefficient caused by age. Visualize both the data points a... | # exercise 7
# Put your solution here!
p = fit.params
fram.plot.scatter("sFRW", "SBP")
int1 = p.Intercept - p["sAGE"]
int2 = p.Intercept
int3 = p.Intercept + p["sAGE"]
slope1 = p.sFRW - p["sFRW:sAGE"]
slope2 = p.sFRW
slope3 = p.sFRW + p["sFRW:sAGE"]
abline_plot(intercept=int1, slope=slope1, ax=plt.gca(), color="gre... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
How does the dependence of blood pressure on weight change as a person gets older?
Your solution here.
The dependence weakens as a person gets older, meaning that weight becomes a less significant predictor of blood pressure at higher ages.
Even more accurate model
Include the background variable sCIG from the data and it... | # exercise 8
# Put your solution here!
temp = ['sFRW', 'SEX', 'sCHOL', 'sAGE', 'sCIG']
s = ''
for w in temp:
for w2 in temp:
if w != w2:
s += w+':'+w2+' '
s+='+ '
s = 'SBP ~ sFRW + SEX + sCHOL + sAGE + sCIG + ' + s
s = s[0:len(s)-3]
fit = smf.ols(s, data=fram).fit()
#... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
How does the model and its accuracy look?
Your solution here.
The model looks reasonable, and the coefficients agree well with intuition. I am not sure what is meant by accuracy here, but judging by AIC, BIC, R-squared, and similar statistics, it is quite similar to the previous one.
Logistic regression | def logistic(x):
return 1.0 / (1.0 + np.exp(-x)) | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
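A quick sanity check on the link function defined above (restated so the snippet runs on its own): it maps a linear predictor (log-odds) into a probability in (0, 1), with logistic(0) = 0.5 and the symmetry logistic(-x) = 1 - logistic(x).

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Probabilities for three log-odds values, symmetric around 0.
probs = logistic(np.array([-2.0, 0.0, 2.0]))
```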
We will continue predicting high blood pressure by taking in some continuous background variables, such as the age.
Recreate the model HIGH_BP ~ sFRW + SEX + SEX:sFRW presented in the introduction. Make sure that you get the same results. Use the name fit for the fitted model. Compute and store the error rate into variabl...
# Put your solution here!
fram["HIGH_BP"] = (fram.SBP >= 140) | (fram.DBP >= 90)
fram.HIGH_BP = fram.HIGH_BP.astype('int', copy=False)
fit = smf.glm(formula="HIGH_BP ~ sFRW + SEX + SEX:sFRW", data=fram,
family=sm.families.Binomial()).fit()
error_rate_orig = np.mean(((fit.fittedvalues < 0.5) & fram.HIGH_B... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Add the sAGE variable and its interactions. Check the prediction accuracy of the model and compare it to the previous model. Store the prediction accuracy to variable error_rate. | # exercise 10
# Put your solution here!
fit = smf.glm(formula="HIGH_BP ~ sFRW + SEX + SEX:sFRW + sAGE * SEX + sAGE:sFRW", data=fram,
family=sm.families.Binomial()).fit()
error_rate = np.mean(((fit.fittedvalues < 0.5) & fram.HIGH_BP) |
((fit.fittedvalues > 0.5) & ~fram.HIGH_BP))
print('Original error: %0.5f, New error... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Visualize the predicted probability of high blood pressure as the function of weight. Remember to use normalized values (rescale) also for those variables that are not included in the visualization, so that sensible values are used for them (data average). Draw two figures with altogether six curves: young, middle aged... | # exercise 11
def logistic(x):
return 1.0 / (1.0 + np.exp(-x))
# Put your solution here!
p = fit.params
X = np.linspace(-2,4,100)
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].scatter(fram.sFRW[(fram.SEX=="female")],
fram.HIGH_BP[(fram.SEX=="female")])
ax[0].set_title("female")
ax[1].scatter(fram.sFRW[(fram.S... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
How do the models with different ages and genders differ from each other?
Your solution here.
From the graphs it seems that females have a lower chance of high blood pressure when young, and that weight generally affects females more than males.
On the contrary to females, males seem to have a higher chance of high BP whe... | # exercise 12
# Put your solution here!
def train_test_split(df, train_fraction=0.8):
train = df.sample(frac=train_fraction)
test = df.drop(train.index)
return train,test
| data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
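Because the test set is simply whatever rows the sample left behind, the two pieces are disjoint and together cover the whole frame. A quick check of the helper on a toy frame:

```python
import numpy as np
import pandas as pd

def train_test_split(df, train_fraction=0.8):
    # Randomly sample the training rows; the test set is the complement.
    train = df.sample(frac=train_fraction)
    test = df.drop(train.index)
    return train, test

df = pd.DataFrame({'x': np.arange(100)})
train, test = train_test_split(df)
```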
Check the prediction accuracy of your model using cross validation. Use 100-fold cross validation and training_fraction 0.8. | # exercise 13
np.random.seed(1)
# Put your solution here!
error_model=[]
error_null=[]
for i in range(100):
train, test = train_test_split(fram)
fit = smf.glm(formula="HIGH_BP ~ sFRW + SEX + SEX:sFRW + sAGE * SEX + sAGE:sFRW", data=train,
family=sm.families.Binomial()).fit()
pred = fit.predict(test... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Predicting coronary heart disease
Let us use again the same data to learn a model for the occurrence of coronary heart disease. We will use logistic regression to predict whether a patient sometimes shows symptoms of coronary heart disease. For this, add to the data a binary variable hasCHD, that describes the event (C... | # exercise 14
# Put your solution here!
fram['hasCHD'] = fram.CHD > 0
fram.hasCHD = fram.hasCHD.astype('int', copy=False) | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Next, form a logistic regression model for variable hasCHD by using variables sCHOL, sCIG, and sFRW, and their interactions as explanatory variables. Store the fitted model to variable fit. Compute the prediction accuracy of the model, store it to variable error_rate. | # exercise 15
# Put your solution here!
temp = ['sFRW', 'sCHOL', 'sCIG']
s = ''
for w in temp:
for w2 in temp:
if w != w2:
s += w+':'+w2+' '
s+='+ '
s = 'hasCHD ~ sFRW + sCHOL + sCIG + ' + s
s = s[0:len(s)-3]
fit = smf.glm(formula=s, data=fram,
family=sm.families.Bi... | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
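The thresholding used to score these GLMs can be factored into a small helper; a sketch mirroring the fittedvalues comparisons in the exercises (the helper name is made up here):

```python
import numpy as np

def error_rate_at_half(fitted_probs, y):
    """Fraction of 0/1 labels misclassified at a 0.5 probability threshold."""
    y = np.asarray(y, dtype=bool)
    p = np.asarray(fitted_probs, dtype=float)
    # Wrong when a true positive scores below 0.5 or a true negative at/above it.
    return float(np.mean(((p < 0.5) & y) | ((p >= 0.5) & ~y)))

# Two of these four toy predictions land on the wrong side of the threshold.
err = error_rate_at_half([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 1])
```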
Visualize the model using the most important explanatory variable on the x axis. Visualize both the points (with plt.scatter)
and the logistic curve (with plt.plot). | # exercise 16
def logistic(x):
return 1.0 / (1.0 + np.exp(-x))
# Put your solution here!
p = fit.params
X = np.linspace(-1,3,100)
plt.scatter(fram["sCIG"], fram["hasCHD"] + np.random.uniform(-0.10, 0.10, len(fram)), marker = ".")
plt.plot(X, logistic(X*(p.sCIG) + p.Intercept), color="red")
| data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Is the prediction accuracy of the model good or bad? Can we expect to have practical use of the model?
Your solution here.
With an error rate of about 22%, the model gives reasonably good generalizations about the risk factors for CHD. Its predictions are fairly good, so I think it could be used somewhere in p...
# Put your solution here!
vals = pd.DataFrame([[200,17,100]],columns=['sCHOL', 'sCIG', 'sFRW'])
point = {}
for ind in vals:
col = ind[1:]
temp = fram[col].append(vals[ind])
point[ind] = rescale(temp).iloc[-1]
predicted = fit.predict(point).iloc[0]
print(predicted) | data_analysis/uh_data_analysis_with_python/hy-data-analysis-with-python-spring-2020/part08-e01_regression/reference/project_notebook_regression_analysis.ipynb | mohanprasath/Course-Work | gpl-3.0 |
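Appending the raw value to the column and re-running rescale works, but it also shifts the column's mean and standard deviation slightly. A sketch (with hypothetical numbers) that instead scales the new point using the training statistics alone:

```python
import pandas as pd

def scale_like_training(train_values, x):
    """Put a new raw observation on the same 2-sigma scale as the training column."""
    s = pd.Series(train_values)
    return (x - s.mean()) / (2 * s.std())

# Hypothetical training cholesterol values and a new patient's raw value;
# a value equal to the training mean maps to exactly 0.
z = scale_like_training([180.0, 200.0, 220.0], 200.0)
```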
Image captioning with visual attention
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/text/image_captioning">
    <img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />
    View on tensorflow.google.cn</a>
  </td>
  <td>
    <a target="_blank" href="https:/... | import tensorflow as tf
# You'll generate plots of attention in order to see which parts of an image
# our model focuses on during captioning
import matplotlib.pyplot as plt
# Scikit-learn includes many helpful utilities
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import re... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Download and prepare the MS-COCO dataset
You will use the MS-COCO dataset to train the model. This dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.
Caution: large download ahead. You'll use the training set, which is a 13GB file. | # Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origi... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Optional: limit the size of the training set
To speed up training for this tutorial, you'll use a subset of 30,000 captions and their corresponding images to train the model. Choosing to use more data would result in improved captioning quality. | # Read the json file
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# Store captions and image names in vectors
all_captions = []
all_img_name_vector = []
for annot in annotations['annotations']:
caption = '<start> ' + annot['caption'] + ' <end>'
image_id = annot['image_id']
... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Preprocess the images using InceptionV3
Next, you will use InceptionV3 (which is pretrained on Imagenet) to classify each image. You will extract features from the last convolutional layer.
First, you need to convert the images into the format InceptionV3 expects:
Resizing the image to 299 x 299 pixels
Preprocess the images using the preprocess_input method to normalize the image so that it contains pixels in the range of -1 to 1, which matches the format of the images used to train InceptionV3. | def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
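The normalization step described above amounts to a simple affine rescaling; a numpy sketch of the mapping preprocess_input applies for this model family:

```python
import numpy as np

# For InceptionV3, preprocess_input rescales pixel values from [0, 255]
# into [-1, 1] with the affine map x / 127.5 - 1.
def inception_style_preprocess(pixels):
    return pixels / 127.5 - 1.0

scaled = inception_style_preprocess(np.array([0.0, 127.5, 255.0]))
```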
Initialize InceptionV3 and load the pretrained Imagenet weights
Now you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is 8x8x2048. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck.
You forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector).
After all the images are passed through the network, you pickle the dictionary and save it to disk. | image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer) | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Caching the features extracted from InceptionV3
You will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory).
Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random-access disk I/O), but that would require more code.
The caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can:
Install tqdm:
!pip install tqdm
Import tqdm:
from tqdm import tqdm
Change the following line... | # Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(16)
for img,... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Preprocess and tokenize the captions
First, you'll tokenize the captions (for example, by splitting on spaces). This gives us a vocabulary of all of the unique words in the data (for example, "surfing", "football", and so on).
Next, you'll limit the vocabulary size to the top 5,000 words (to save memory). You'll replace all other words with the token "UNK" (unknown).
You then create word-to-index and index-to-word mappings.
Finally, you pad all sequences to be the same length as the longest one. | # Find the maximum length of any caption in our dataset
def calc_max_length(tensor):
return max(len(t) for t in tensor)
# Choose the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<un... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
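The pipeline described here (top-k vocabulary, an unknown token, padding to the longest sequence) can be sketched without TensorFlow in plain Python; the function names below are illustrative, not the Keras API:

```python
from collections import Counter

def build_vocab(captions, top_k):
    """Top-k vocabulary; index 0 is reserved for padding and unknown words."""
    counts = Counter(w for c in captions for w in c.split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common(top_k))}

def texts_to_padded(captions, word_index):
    """Map words to indices (0 when unknown) and right-pad to the longest sequence."""
    seqs = [[word_index.get(w, 0) for w in c.split()] for c in captions]
    max_len = max(len(s) for s in seqs)
    return [s + [0] * (max_len - len(s)) for s in seqs]

vocab = build_vocab(['a dog runs', 'a cat'], top_k=3)
padded = texts_to_padded(['a dog runs', 'a cat'], vocab)
```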
Split the data into training and testing | # Create training and validation sets using an 80-20 split
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
cap_vector,
test_size=0.2,
... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Create a tf.data dataset for training
Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model. | # Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = top_k + 1
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vec... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Model
Fun fact: the decoder below is identical to the one in the example for Neural Machine Translation with Attention.
The model architecture is inspired by the Show, Attend and Tell paper.
In this example, you extract the features from the lower convolutional layer of InceptionV3, giving us a vector of shape (8, 8, 2048).
You squash that to a shape of (64, 2048).
This vector is then passed through the CNN encoder (which consists of a single fully connected layer).
The RNN (here a GRU) attends over the image to predict the next word. | class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape ==... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Checkpoint | checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer = optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_ep... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Training
You extract the features stored in the respective .npy files and then pass those features through the encoder.
The encoder output, hidden state (initialized to 0) and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients and apply them to the optimizer and backpropagate. | # adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = deco... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
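The teacher-forcing step in the list above is easy to isolate from the rest of the training loop; a tiny framework-free sketch with a stand-in decoder step:

```python
# Teacher forcing in isolation: at each step the ground-truth target token,
# not the model's own prediction, becomes the next decoder input.
def decode_with_teacher_forcing(target_tokens, step_fn, start_token=0):
    dec_input, predictions = start_token, []
    for target in target_tokens:
        predictions.append(step_fn(dec_input))
        dec_input = target  # teacher forcing: feed the true token back in
    return predictions

# A toy stand-in "decoder step" that just adds one to its input token id.
outputs = decode_with_teacher_forcing([5, 7, 9], lambda tok: tok + 1)
```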
Caption!
The evaluate function is similar to the training loop, except you don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step. | def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_v... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
Try it on your own images
For fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for strange results!). | image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension,
origin=image_url)
result, attention_plot = evaluate(image_path)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image_p... | site/zh-cn/tutorials/text/image_captioning.ipynb | tensorflow/docs-l10n | apache-2.0 |
The HoloViews options system allows controlling the various attributes of a plot. While different plotting extensions like bokeh, matplotlib and plotly offer different features and the style options may differ, there are a wide array of options and concepts that are shared across the different extensions. Specifically ... | hv.HoloMap({i: hv.Curve([1, 2, 3-i], group='Group', label='Label') for i in range(3)}, 'Value') | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
The title formatter may however be overriden with an explicit title, which may include any combination of the three formatter variables: | hv.Curve([1, 2, 3]).opts(title="Custom Title") | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Background
Another option which can be controlled at the level of a plot is the background color which may be set using the bgcolor option: | hv.Curve([1, 2, 3]).opts(bgcolor='lightgray') | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Font sizes
Controlling the font sizes of a plot is very common so HoloViews provides a convenient option to set the fontsize. The fontsize accepts a dictionary which allows supplying fontsizes for different components of the plot from the title, to the axis labels, ticks and legends. The full list of plot components th... | hv.Curve([1, 2, 3], label='Title').opts(fontsize={'title': 16, 'labels': 14, 'xticks': 6, 'yticks': 12}) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Plot hooks
HoloViews does not expose every single option a plotting extension like matplotlib or bokeh provides, therefore it is sometimes necessary to dig deeper to achieve precisely the customizations one might need. One convenient way of doing so is to use plot hooks to modify the plot object directly. The hooks are... | def hook(plot, element):
print('plot.state: ', plot.state)
print('plot.handles: ', sorted(plot.handles.keys()))
plot.handles['xaxis'].axis_label_text_color = 'red'
plot.handles['yaxis'].axis_label_text_color = 'blue'
hv.Curve([1, 2, 3]).opts(hooks=[hook]) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Customizing axes
Controlling the axis scales is one of the most common changes to make to a plot, so we will provide a quick overview of the three main types of axes and then go into some more detail on how to control the axis labels, ranges, ticks and orientation.
Types of axes
There are four main types of axes suppor... | semilogy = hv.Curve(np.logspace(0, 5), label='Semi-log y axes')
loglog = hv.Curve((np.logspace(0, 5), np.logspace(0, 5)), label='Log-log axes')
semilogy.opts(logy=True) + loglog.opts(logx=True, logy=True, shared_axes=False) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Datetime axes
All current plotting extensions allow plotting datetime data, if you ensure the dates array is of a valid datetime dtype. | from bokeh.sampledata.stocks import GOOG, AAPL
goog_dates = np.array(GOOG['date'], dtype=np.datetime64)
aapl_dates = np.array(AAPL['date'], dtype=np.datetime64)
goog = hv.Curve((goog_dates, GOOG['adj_close']), 'Date', 'Stock Index', label='Google')
aapl = hv.Curve((aapl_dates, AAPL['adj_close']), 'Date', 'Stock Index... | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Categorical axes
While the handling of categorical data differs significantly between plotting extensions, the same basic concepts apply. If the data is a string type or other object type it is formatted as a string and each unique category is assigned a tick along the axis. When overlaying elements the categories are c...
heatmap = hv.HeatMap(points)
(heatmap * points).opts(
opts.HeatMap(toolbar='above', tools=['hover']),
opts.Points(tools=['hover'], size=dim('z')*0.3)) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
As a more complex example which does not implicitly assume categorical axes due to the element type we will create a set of random samples indexed by categories from 'A' to 'E' using the Scatter Element and overlay them. Secondly we compute the mean and standard deviation for each category displayed using a set of Erro... | overlay = hv.NdOverlay({group: hv.Scatter(([group]*100, np.random.randn(100)*(5-i)-i))
for i, group in enumerate(['A', 'B', 'C', 'D', 'E'])})
errorbars = hv.ErrorBars([(k, el.reduce(function=np.mean), el.reduce(function=np.std))
for k, el in overlay.items()])
curve = ... | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Categorical axes are special in that they support multi-level nesting in some cases. Currently this is only supported for certain element types (BoxWhisker, Violin and Bars) but eventually all chart-like elements will interpret multiple key dimensions as a multi-level categorical hierarchy. To demonstrate this behavior... | groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
boxes = hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
['Group', 'Category'], 'Value').sort()
boxes.opts(width=600) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Axis positions
Axes may be hidden or moved to a different location using the xaxis and yaxis options, which accept None, 'right'/'bottom', 'left'/'top' and 'bare' as values. | np.random.seed(42)
ys = np.random.randn(101).cumsum(axis=0)
curve = hv.Curve(ys, ('x', 'x-label'), ('y', 'y-label'))
(curve.relabel('No axis').opts(xaxis=None, yaxis=None) +
curve.relabel('Bare axis').opts(xaxis='bare') +
curve.relabel('Moved axis').opts(xaxis='top', yaxis='right')) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Inverting axes
Another option to control axes is to invert the x-/y-axes using the invert_axes options, i.e. turn a vertical plot into a horizontal plot. Secondly each individual axis can be flipped left to right or upside down respectively using the invert_xaxis and invert_yaxis options. | bars = hv.Bars([('Australia', 10), ('United States', 14), ('United Kingdom', 7)], 'Country')
(bars.relabel('Invert axes').opts(invert_axes=True, width=400) +
bars.relabel('Invert x-axis').opts(invert_xaxis=True) +
bars.relabel('Invert y-axis').opts(invert_yaxis=True)).opts(shared_axes=False) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Axis labels
Ordinarily axis labels are controlled using the dimension label, however explicitly xlabel and ylabel options make it possible to override the label at the plot level. Additionally the labelled option allows specifying which axes should be labelled at all, making it possible to hide axis labels: | (curve.relabel('Dimension labels') +
curve.relabel("xlabel='Custom x-label'").opts(xlabel='Custom x-label') +
curve.relabel('Unlabelled').opts(labelled=[])) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Axis ranges
The ranges of a plot are ordinarily controlled by computing the data range and combining it with the dimension range and soft_range but they may also be padded or explicitly overridden using xlim and ylim options.
Dimension ranges
data range: The data range is computed by min and max of the dimension value... | curve.redim(x=hv.Dimension('x', range=(-10, 90))) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Dimension.soft_range
Declaring a soft_range on the other hand combines the data range and the supplied range, i.e. it will pick whichever extent is wider. Using the same example as above we can see it uses the -10 value supplied in the soft_range but also extends to 100, which is the upper bound of the actual data: | curve.redim(x=hv.Dimension('x', soft_range=(-10, 90)))
Padding
Applying padding to the ranges is an easy way to ensure that the data is not obscured by the margins. The padding is specified by the fraction by which to increase auto-ranged extents to make datapoints more visible around borders. The padding considers the width and height of the plot to keep the visual extent... | (curve.relabel('Pad both axes').opts(padding=0.1) +
curve.relabel('Pad y-axis').opts(padding=(0, 0.1)) +
curve.relabel('Pad y-axis upper bound').opts(padding=(0, (0, 0.1)))).opts(shared_axes=False) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
xlim/ylim
The data ranges, dimension ranges and padding combine across plots in an overlay to ensure that all the data is contained in the viewport. In some cases it is more convenient to override the ranges with explicit xlim and ylim options which have the highest precedence and will be respected no matter what. | curve.relabel('Explicit xlim/ylim').opts(xlim=(-10, 110), ylim=(-14, 6)) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Axis ticks
Setting tick locations differs a little bit depending on the plotting extension; interactive backends such as bokeh or plotly dynamically update the ticks, which means fixed tick locations may not be appropriate, and the formatters have to be applied in JavaScript code. Nevertheless most options to control ... | (curve.relabel('N ticks (xticks=10)').opts(xticks=10) +
curve.relabel('Listed ticks (xticks=[0, 1, 2])').opts(xticks=[0, 50, 100]) +
curve.relabel("Tick labels (xticks=[(0, 'zero'), ...").opts(xticks=[(0, 'zero'), (50, 'fifty'), (100, 'one hundred')])) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Lastly each extension will accept the custom Ticker objects the library provides, which can be used to achieve layouts not usually available.
Tick formatters
Tick formatting works very differently in different backends, however the xformatter and yformatter options try to minimize these differences. Tick formatters may... | def formatter(value):
return str(value) + ' days'
curve.relabel('Tick formatters').opts(xformatter=formatter, yformatter='$%.2f', width=500) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Tick orientation
Particularly when dealing with categorical axes it is often useful to control the tick rotation. This can be achieved using the xrotation and yrotation options which accept angles in degrees. | bars.opts(xrotation=45) | examples/user_guide/Customizing_Plots.ipynb | basnijholt/holoviews | bsd-3-clause |
Define some helper functions | ## Return a mask of elements of b found in a: optimal for numeric arrays
def match( a, v, return_indices = False ) :
index = np.argsort( a )
## Get insertion indices
sorted_index = np.searchsorted( a, v, sorter = index )
## Truncate the indices by the length of a
index = np.take( index, sorted_index, mode = "clip" )... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
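A usage example of the membership test; the helper is restated here (without the return_indices branch, which is truncated above) so the snippet runs on its own:

```python
import numpy as np

def match(a, v):
    """Mask marking which elements of v occur somewhere in a."""
    index = np.argsort(a)
    # Insertion positions of v into the sorted view of a.
    sorted_index = np.searchsorted(a, v, sorter=index)
    # Clip so out-of-range insertion points map to a valid index.
    index = np.take(index, sorted_index, mode="clip")
    # An element of v is present iff the candidate it lands on equals it.
    return a[index] == v

mask = match(np.array([3, 1, 4, 1, 5]), np.array([1, 2, 3, 9]))
```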
A handy procedure for converting an $(v_{ij})$ list into a sparse matrix. | ## Convert the edgelist into sparse matrix
def to_sparse_coo( u, v, shape, dtype = np.int32 ) :
## Create a COOrdinate sparse matrix from the given ij-indices
assert( len( u ) == len( v ) )
return sp.coo_matrix( (
np.ones( len( u ) + len( v ), dtype = dtype ), (
np.concatenate( ( u, v ) ), np.concatenate( ( v,... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
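A dense-matrix sketch of what this construction produces: stacking the $(u, v)$ indices with the mirrored $(v, u)$ indices yields a symmetric adjacency matrix. The toy edge list below is made up:

```python
import numpy as np

# Hypothetical edge list on 3 vertices: edges 0-1, 0-2, 1-2
u = np.array([0, 0, 1])
v = np.array([1, 2, 2])
n = 3
A = np.zeros((n, n), dtype=int)
# Mirror the COO construction above: set both (u, v) and (v, u) entries
A[np.r_[u, v], np.r_[v, u]] = 1
print(A)
```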
Load the DBLP dataset, making a cached copy if required. | ## Create cache if necessary
tick = tm.time( )
if not os.path.exists( cached_dblp_file ) :
## Load the csv file into a dataframe
dblp = pd.read_csv( raw_dblp_file, # nrows = 10000,
## On-the-fly decompression
compression = "gzip", header = None, quoting = 0,
## Assign column headers
names = [ 'author1', 'auth... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Now split the DBLP dataset into two periods: pre- and post-2010. First preprocess the pre-2010 data. | pre2010 = dblp[ dblp.year <= 2010 ].copy( )
Reencode the vertices of the pre-2010 graph in a less wasteful format. Use sklearn's LabelEncoder() to this end. | from sklearn.preprocessing import LabelEncoder
le = LabelEncoder( ).fit( np.concatenate( ( pre2010[ "author1" ].values, pre2010[ "author2" ].values ) ) )
pre2010_values = le.classes_
## Recode the edge data
for col in [ 'author1', 'author2', ] :
pre2010[ col ] = le.transform( pre2010[ col ] ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
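For readers unfamiliar with LabelEncoder, here is a minimal pure-Python sketch of its effect in this cell: mapping arbitrary author ids onto the contiguous range 0..n-1 (the names are hypothetical):

```python
# Minimal sketch of what LabelEncoder does: encode ids as 0..n-1
ids = ['bob', 'ana', 'bob', 'carol']
classes = sorted(set(ids))                      # plays the role of le.classes_
encode = {name: i for i, name in enumerate(classes)}
codes = [encode[name] for name in ids]          # plays the role of le.transform
print(codes)  # [1, 0, 1, 2]
```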
Convert the edge list data into a sparse matrix | pre2010_adj = to_sparse_coo(
pre2010[ "author1" ].values, pre2010[ "author2" ].values,
shape = 2 * [ len( le.classes_ ) ] )
## Eliminate duplicates by converting them into ones
pre2010_adj = pre2010_adj.tocsr( )
pre2010_adj.data = np.ones_like( pre2010_adj.data ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Find the vertices of the pre 2010 period that are in post-2010 | post2010 = dblp[ dblp.year > 2010 ]
common_vertices = np.intersect1d( pre2010_values,
np.union1d( post2010[ "author1" ].values, post2010[ "author2" ].values ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Remove completely new vertices from post 2010 data | post2010 = post2010[ (
match( common_vertices, post2010[ "author1" ].values ) &
match( common_vertices, post2010[ "author2" ].values ) ) ]
del common_vertices | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Map the post 2010 vertices to pre 2010 vertices and construct the adjacency matrix. | for col in [ 'author1', 'author2', ] :
post2010[ col ] = le.transform( post2010[ col ] )
## The adjacency matrix
post2010_adj = sp.coo_matrix( ( np.ones( post2010.shape[0], dtype = np.bool ),
( post2010[ "author1" ].values, post2010[ "author2" ].values )
), shape = pre2010_adj.shape ).tolil( ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Keep only those edges in the post-2010 dataset that did not exist during 2000-2010. | post2010_adj[ pre2010_adj.nonzero( ) ] = 0
Eliminate duplicate edges and transform into a CSR format | post2010_adj = post2010_adj.tocsr( )
post2010_adj.data = np.ones_like( post2010_adj.data ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Here we have two aligned symmetric adjacency matrices: one for edges that existed before 2010 and one for new edges formed after 2010. | print post2010_adj.__repr__()
print pre2010_adj.__repr__( ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
All edges of the post-2010 graph are included and considered to be positive examples | positive = np.append( *( c.reshape((-1, 1)) for c in post2010_adj.nonzero( ) ), axis = 1 )
A slightly harder task is to generate an adequate number of negative examples so that the final training sample is balanced. | ## Generate a sample of vertex pairs with no edge in both periods
negative = np.random.choice( pre2010_adj.shape[ 0 ], size = ( 2 * positive.shape[0], positive.shape[1] ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
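Note that uniform sampling can occasionally produce a pair that is actually an edge, or a self-loop. A hedged sketch of filtering such accidental positives out (the edge set below is a toy, not the real data):

```python
import numpy as np

rng = np.random.default_rng(0)
existing = {(0, 1), (1, 2)}            # hypothetical positive edges
candidates = rng.integers(0, 5, size=(20, 2))
# Drop self-loops and pairs that are actually edges
negatives = [tuple(int(x) for x in p) for p in candidates
             if p[0] != p[1] and tuple(p) not in existing]
print(len(negatives))
```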
Compile the final training dataset. | E = np.vstack( ( positive, negative ) )
y = np.vstack( ( np.ones( ( positive.shape[ 0 ], 1 ), dtype = np.float ),
np.zeros( ( negative.shape[ 0 ], 1 ), dtype = np.float ) ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
So, finally, we got ourselves a training set of edges with a 2:1 negative-to-positive ratio.
Feature construction
The first pair of features is the degree of each edge endpoint: for $(i,j) \in V\times V$
$$ \phi^1_{ij} = |N_i|\,\text{ and }\,\phi^2_{ij} = |N_j|\,,$$
where $N_v$ is the set of adjacent vertices of a node $v... | def phi_degree( edges, A ) :
deg = A.sum( axis = 1 ).astype( np.float )
return np.append( deg[ edges[ :, 0 ] ], deg[ edges[ :, 1 ] ], axis = 1 ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
It turns out that at least two edge features can be constructed via a so-called "sandwich" matrix. | def __sparse_sandwich( edges, A, W = None ) :
AA = A.dot( A.T ) if W is None else A.dot( W ).dot( A.T )
result = AA[ edges[:,0], edges[:,1] ]
del AA ; gc.collect( 0 ) ; gc.collect( 1 ) ; gc.collect( 2 )
return result.reshape(-1, 1) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
The next feature is the Adamic/Adar score: for $(i,j)\in V\times V$
$$ \phi^3_{ij} = \sum_{v\in N_i \cap N_j } \frac{1}{\log |N_v|}\,.$$
Another feature is the number of neighbours shared by the endpoints:
$$\phi^4_{ij} = |N_i\cap N_j|\,.$$
In fact, both features are special cases of the same formula:
$$ (\phi_{ij}) = A... | def phi_adamic_adar( edges, A ) :
inv_log_deg = 1.0 / np.log( A.sum( axis = 1 ).getA1( ) )
inv_log_deg[ np.isinf( inv_log_deg ) ] = 0
result = __sparse_sandwich( edges, A, sp.diags( inv_log_deg, 0 ) )
del inv_log_deg ; gc.collect( 0 ) ; gc.collect( 1 ) ; gc.collect( 2 )
return result
def phi_common... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
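The sandwich formula is easy to verify on a toy graph with dense numpy arrays: with $W = I$, the product $A A'$ counts shared neighbours.

```python
import numpy as np

# Toy undirected graph: edges 0-1, 0-2, 1-2, 1-3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
CN = A @ A.T          # the "sandwich" with W = I counts shared neighbours
print(CN[0, 1], CN[0, 3])  # vertex 2 is shared by (0,1); vertex 1 by (0,3)
```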
Yet another potential feature is the so-called personalized PageRank. Basically it is the same PageRank score, but with the ability to teleport only to a single node.
In particular, the global PageRank is the stationary distribution of the Markov chain with this transition kernel:
$$ M = \beta P + (1-\beta) \frac{1}{... | def __sparse_pagerank( A, beta = 0.85, one = None, niter = 1000, rel_eps = 1e-6, verbose = True ) :
## Initialize the iterations
one = one if one is not None else np.ones( ( 1, A.shape[ 0 ] ), dtype = np.float )
one = sp.csr_matrix( one / one.sum( axis = 1 ) )
## Get the out-degree
out = np.asarray( A.sum( axis = 1 ... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
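The power iteration inside the (truncated) sparse routine above can be sanity-checked against a dense toy version. This is a sketch of the kernel $M = \beta P + (1-\beta)/n$, not the routine itself:

```python
import numpy as np

def pagerank_dense(A, beta=0.85, niter=200):
    """Dense power iteration for PageRank (toy sketch)."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    # Row-normalize; rows with no out-links teleport uniformly
    P = np.divide(A, deg, out=np.full_like(A, 1.0 / n), where=deg > 0)
    pi = np.full(n, 1.0 / n)
    for _ in range(niter):
        pi = beta * (pi @ P) + (1 - beta) / n
    return pi

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
pi = pagerank_dense(A)
print(pi.round(3))
```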
Now the feature extractors themselves: the global PageRank and the personalized (rooted) PageRank. | ## The global pagerank score
def phi_gpr( edges, A, verbose = True ) :
pi, s, k = __sparse_pagerank( A, one = None, verbose = verbose )
return np.concatenate( ( pi[ :, edges[ :, 0 ] ], pi[ :,edges[ :, 1 ] ] ), axis = 0 ).T
def phi_ppr( edges, A ) :
pass #result = np.empty( edges.shape, dtype = np.float )
# retur... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Computing the features
Vertex degrees | tick = tm.time()
phi_12 = phi_degree( E, pre2010_adj )
tock = tm.time()
print "Vertex degree computed in %.3f sec." % ( tock - tick, ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Adamic/Adar metric | tick = tm.time()
phi_3 = phi_adamic_adar( E, pre2010_adj )
tock = tm.time()
print "Adamic/adar computed in %.3f sec." % ( tock - tick, ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Common neighbours | tick = tm.time()
phi_4 = phi_common_neighbours( E, pre2010_adj )
tock = tm.time()
print "Common neighbours computed in %.3f sec." % ( tock - tick, ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Global PageRank | tick = tm.time()
phi_56 = phi_gpr( E, pre2010_adj, verbose = False )
tock = tm.time()
print "Global pagerank computed in %.3f sec." % ( tock - tick, ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Rooted (personalized) PageRank | tick = tm.time()
phi_78 = phi_ppr( E, pre2010_adj, verbose = False )
tock = tm.time()
print "Personalized pagerank computed in %.3f sec." % ( tock - tick, ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Compute all-pairs shortest paths | # tick = tm.time()
# phi_5 = phi_shortest_paths( E, pre2010_adj )
# tock = tm.time()
# print "Shortest paths computed in %.3f sec." % ( tock - tick, ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Collect all features into a numpy matrix | X = np.hstack( ( phi_12, phi_3, phi_4, phi_56 ) ) #, phi_78 ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Having computed all the features, let's make a subsample so that the classification runs faster. | from sklearn.cross_validation import train_test_split
X_modelling, X_main, y_modelling, y_main = train_test_split( X, y.ravel( ), train_size = 0.20 ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Import scikit-learn's grid search and cross-validation modules. | from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import cross_val_score | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
We are going to analyze many classifiers at once. | classifiers = list( ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Logistic Regression
Logistic regression for binary classification solves the following problem on the training dataset $(x_i,t_i)_{i=1}^n \in \mathbb{R}^{1+p}\times \{0,1\}$:
$$ -\sum_{i=1}^n \Bigl( t_i \log \sigma( \beta'x_i ) + (1-t_i) \log \bigl( 1-\sigma( \beta'x_i ) \bigr) \Bigr) \to \min_{\beta} \,, $$
where $\sigma(z) = \... | from sklearn.linear_model import LogisticRegression
LR_grid = GridSearchCV( LogisticRegression( ), cv = 10, verbose = 1,
param_grid = { "C" : np.logspace( -2, 2, num = 5 ) }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "Logistic", LR_grid.best_estimator_ ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
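As a sanity check, the objective being optimized — the negative of the summed log-likelihood — can be evaluated directly. A toy numpy sketch with made-up data:

```python
import numpy as np

def neg_log_likelihood(beta, X, t):
    """Negative logistic log-likelihood, to be minimized over beta."""
    sigma = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return -np.sum(t * np.log(sigma) + (1 - t) * np.log(1 - sigma))

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # first column: intercept
t = np.array([0.0, 0.0, 1.0])
print(neg_log_likelihood(np.zeros(2), X, t))  # 3*log(2) ~ 2.079 at beta = 0
```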
Linear and Quadratic Discriminant Analysis
It is a widely known fact that simple models sometimes beat more complicated ones in terms of accuracy. Thus let's consider LDA and QDA. | from sklearn.lda import LDA
from sklearn.qda import QDA
classifiers.append( ( "LDA", LDA( ) ) )
classifiers.append( ( "QDA", QDA( ) ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Decision tree classifiers
Let's employ the classification tree model. On its own a decision tree is a volatile classifier, meaning that the addition of new data can dramatically alter its structure, which is why we use boosted trees and random forests. These methods learn the intrinsic nonlinear features of... | from sklearn.ensemble import RandomForestClassifier
RF_grid = GridSearchCV( RandomForestClassifier( n_estimators = 50 ), cv = 10, verbose = 1,
param_grid = { "max_depth" : [ 3, 5, 15, 30 ] }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "RandomForest", RF_grid.best_estimator_ ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Boosted tree (AdaBoost) | from sklearn.ensemble import AdaBoostClassifier
classifiers.append( ( "AdaBoost", AdaBoostClassifier( n_estimators = 50 ) ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Simple tree
One does not expect a single tree to perform comparably to ensemble classifiers. | from sklearn.tree import DecisionTreeClassifier
tree_grid = GridSearchCV( DecisionTreeClassifier( criterion = "gini" ), cv = 10, verbose = 1,
param_grid = { "max_depth" : [ 3, 5, 15, 30 ] }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "Tree", tree_grid.best_estimator_ ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
$k$-Nearest Neighbours
Another, rather pragmatic, approach to classification follows a simple rule: if the majority of a point's $k$ nearest neighbours belong to a class $c$, then this point is very likely to come from class $c$ as well. Know them by their friends! | from sklearn.neighbors import KNeighborsClassifier
knn_grid = GridSearchCV( KNeighborsClassifier( ), cv = 10, verbose = 50,
param_grid = { "n_neighbors" : [ 2, 3, 5, 15, 30 ] }, n_jobs = -1
).fit( X_modelling, y_modelling )
classifiers.append( ( "k-NN", knn_grid.best_estimator_ ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
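The majority-vote rule itself fits in a few lines. A toy one-dimensional sketch (not what KNeighborsClassifier does internally, which relies on optimized tree structures):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Toy 1-D majority-vote k-NN illustrating the rule described above."""
    # train: list of (feature, label) pairs
    neighbours = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [(0.1, 0), (0.2, 0), (0.9, 1), (1.0, 1), (1.1, 1)]
print(knn_predict(train, 0.95))  # -> 1: the three nearest points are labelled 1
```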
Support Vector Machine classification | from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Testing
Split the dataset into train and test. | X_train, X_test, y_train, y_test = train_test_split( X_main, y_main, train_size = 0.20 ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Subsample the train dataset | subsample = np.random.permutation( X_train.shape[ 0 ] )#[ : 50000 ]
X_train_subsample, y_train_subsample = X_train[ subsample ], y_train[ subsample ] | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Run tests | results = dict()
for name, clf in classifiers :
tick = tm.time( )
results[name] = cross_val_score( clf, X_train_subsample, y_train_subsample, n_jobs = -1, verbose = 1, cv = 10 )
tock = tm.time( )
print "k-fold crossvalidation for %s took %.3f sec." % ( name, tock - tick, )
k_fold_frame = pd.DataFrame( ... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_project2.ipynb | ivannz/study_notes | mit |
Similarity metrics: when both customers rate two movies exactly the same | # Create a dataframe manually to illustrate the examples
ratings = pd.DataFrame(columns = ["customer", "movie", "rating"],
data=[
['Ana','movie_1',1],
['Ana', 'movie_2', 5],
['Bob','movie_1',1],
... | notebooks/notebook-1-similarity-concepts.ipynb | hcorona/recsys-101-workshop | mit |
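With identical rating vectors (assuming Bob also rates movie_2 with 5, as the truncated cell suggests), cosine similarity attains its maximum:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

ana = np.array([1.0, 5.0])   # Ana's ratings for movie_1, movie_2
bob = np.array([1.0, 5.0])   # Bob rates them identically (assumed)
print(cosine(ana, bob))      # ~ 1.0: maximal similarity
```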
When two customers rate the same movies very differently | # Create a dataframe manually to illustrate the examples
ratings = pd.DataFrame(columns = ["customer", "movie", "rating"],
data=[
['Ana','movie_1',5],
['Ana', 'movie_2', 1],
['Bob','movie_1',1],
... | notebooks/notebook-1-similarity-concepts.ipynb | hcorona/recsys-101-workshop | mit |
When two customers rate different movies | # Create a dataframe manually to illustrate the examples
data=[['Ana','movie_1',5],['Ana', 'movie_2', 1],['Bob','movie_3',5],['Bob', 'movie_4', 5]]
ratings = pd.DataFrame(columns = ["customer", "movie", "rating"], data=data)
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_v... | notebooks/notebook-1-similarity-concepts.ipynb | hcorona/recsys-101-workshop | mit |
Positive people vs. Negative people | # Create a dataframe manually to illustrate the examples
data=[['Ana','movie_1',5],['Ana', 'movie_2', 4],['Bob','movie_1',3],['Bob', 'movie_2', 2]]
ratings = pd.DataFrame(columns = ["customer", "movie", "rating"], data=data)
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_v... | notebooks/notebook-1-similarity-concepts.ipynb | hcorona/recsys-101-workshop | mit |
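Pearson correlation handles exactly this case: mean-centring each customer's ratings cancels out their overall rating "level". A sketch, with a hypothetical third movie added so the correlation is non-trivial:

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation: mean-centring removes a customer's rating level."""
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

ana = np.array([5.0, 4.0, 3.0])   # a generous rater (third movie is made up)
bob = np.array([3.0, 2.0, 1.0])   # the same taste, two stars lower
print(pearson(ana, bob))          # ~ 1.0 despite the different raw scores
```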
People who rate a lot of movies vs. people who don't rate a lot of movies | # Create a dataframe manually to illustrate the examples
data=[['Ana','movie_1',5],['Ana', 'movie_2', 4],['Ana', 'movie_3', 4],['Bob','movie_1',3],['Bob', 'movie_2', 2]]
ratings = pd.DataFrame(columns = ["customer", "movie", "rating"], data=data)
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', v... | notebooks/notebook-1-similarity-concepts.ipynb | hcorona/recsys-101-workshop | mit |
Map Coloring
This short notebook shows how map coloring can be formulated as a constraint satisfaction problem.
In map coloring, the goal is to color the states shown in a given map such that no two bordering states
share the same color. As an example, consider the map of Australia that is shown below. Australia
has... | def map_coloring_csp():
Variables = [ 'WA', 'NSW', 'V', 'NT', 'SA', 'Q' ]
Values = { 'red', 'blue', 'green' }
Constraints = { 'WA != NT', 'WA != SA',
'NT != SA', 'NT != Q',
'SA != Q', 'SA != NSW', 'SA != V',
'Q != NSW', 'NSW != V'
... | Python/2 Constraint Solver/Map-Coloring.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
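One simple way to actually solve this CSP is naive backtracking search. The sketch below is self-contained and is not the course's constraint solver; it hard-codes the border relation implied by the constraints above:

```python
def solve(variables, colours, neighbours, assignment=None):
    """Naive backtracking search for the map-coloring CSP sketched above."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for colour in colours:
        # Try a colour only if no already-coloured neighbour uses it
        if all(assignment.get(n) != colour for n in neighbours[var]):
            result = solve(variables, colours, neighbours,
                           {**assignment, var: colour})
            if result is not None:
                return result
    return None

neighbours = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
              'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
              'Q':  ['NT', 'SA', 'NSW'],
              'NSW': ['SA', 'Q', 'V'], 'V': ['SA', 'NSW']}
solution = solve(list(neighbours), ['red', 'blue', 'green'], neighbours)
print(solution)
```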