Dataset columns (with string-length ranges): markdown (0-1.02M), code (0-832k), output (0-1.02M), license (3-36), path (6-265), repo_name (6-127).
Accuracy, precision and recall. Classification accuracy for each class:
for i,j in enumerate(cm.diagonal()/cm.sum(axis=1)): print("%d: %.4f" % (i,j))
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Precision and recall for each class:
print(classification_report(y_test, pred_nn_fast, labels=labels))
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
Failure analysis. We can also inspect the results in more detail. Let's use the `show_failures()` helper function (defined in `pml_utils.py`) to show the wrongly classified test digits. The helper function is defined as:```show_failures(predictions, y_test, X_test, trueclass=None, predictedclass=None, maxtoshow=10)```whe...
show_failures(pred_nn_fast, y_test, X_test)
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
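For readers without `pml_utils.py` at hand, here is a minimal sketch of what the filtering part of a helper like `show_failures()` might do. This mirrors the signature described above but is not the original implementation; the real helper also plots the selected digits.

```python
import numpy as np

def show_failures_sketch(predictions, y_test, X_test,
                         trueclass=None, predictedclass=None, maxtoshow=10):
    """Return indices of wrongly classified samples, optionally filtered
    by true class and/or predicted class. X_test would be used for
    plotting the digits; it is unused in this sketch."""
    idx = np.where(predictions != y_test)[0]
    if trueclass is not None:
        idx = idx[y_test[idx] == trueclass]
    if predictedclass is not None:
        idx = idx[predictions[idx] == predictedclass]
    return idx[:maxtoshow]

# Tiny synthetic check: two of the four predictions are wrong
y = np.array([0, 1, 0, 1])
pred = np.array([0, 0, 1, 1])
print(show_failures_sketch(pred, y, None))  # indices 1 and 2
```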
We can use `show_failures()` to inspect failures in more detail. For example:* show failures in which the true class was "5":
show_failures(pred_nn_fast, y_test, X_test, trueclass='5')
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
* show failures in which the prediction was "0":
show_failures(pred_nn_fast, y_test, X_test, predictedclass='0')
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
* show failures in which the true class was "0" and the prediction was "2":
show_failures(pred_nn_fast, y_test, X_test, trueclass='0', predictedclass='2')
_____no_output_____
MIT
notebooks/sklearn-mnist-nn.ipynb
CSCfi/machine-learning-scripts
"Sequence classification using Recurrent Neural Networks"> "PyTorch implementation for sequence classification using RNNs"- toc: false- branch: master- badges: true- comments: true- categories: [PyTorch, classification, RNN]- image: images/- hide: false- search_exclude: true- metadata_key1: metadata_value1- metadata_k...
from sequential_tasks import TemporalOrderExp6aSequence as QRSU # Create a data generator. Predefined generator is implemented in file sequential_tasks. example_generator = QRSU.get_predefined_generator( difficulty_level=QRSU.DifficultyLevel.EASY, batch_size=32, ) example_batch = example_generator[1...
The sequence is: BbXcXcbE The class label is: Q
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
We can see that our sequence contains 8 elements, starting with B and ending with E. This sequence belongs to class Q according to the rule defined earlier. Each element is one-hot encoded. Thus, we can represent our first sequence (BbXcXcbE) as a sequence of rows of one-hot encoded vectors (as shown above). Similarly, clas...
import torch import torch.nn as nn # Set the random seed for reproducible results torch.manual_seed(1) class SimpleRNN(nn.Module): def __init__(self, input_size, hidden_size, output_size): # This just calls the base class constructor super().__init__() # Neural network layers assigned as a...
_____no_output_____
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
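The one-hot representation described above can be sketched as follows. The alphabet and the character-to-index mapping here are hypothetical; the real mapping is defined inside `sequential_tasks`.

```python
import numpy as np

# Hypothetical symbol set for the QRSU sequences (B/E delimiters plus
# the inner symbols); the real generator defines its own mapping.
alphabet = ['B', 'E', 'X', 'a', 'b', 'c', 'd']
char_to_idx = {ch: i for i, ch in enumerate(alphabet)}

def one_hot_encode(seq):
    """Encode a string as a (len(seq), len(alphabet)) matrix of one-hot rows."""
    out = np.zeros((len(seq), len(alphabet)), dtype=np.float32)
    for t, ch in enumerate(seq):
        out[t, char_to_idx[ch]] = 1.0
    return out

encoded = one_hot_encode("BbXcXcbE")
print(encoded.shape)        # (8, 7): one row per element
print(encoded.sum(axis=1))  # exactly one 1 per row
```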
3. Defining the Training Loop
def train(model, train_data_gen, criterion, optimizer, device): # Set the model to training mode. This will turn on layers that would # otherwise behave differently during evaluation, such as dropout. model.train() # Store the number of sequences that were classified correctly num_correct = 0 ...
_____no_output_____
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
4. Defining the Testing Loop
def test(model, test_data_gen, criterion, device): # Set the model to evaluation mode. This will turn off layers that would # otherwise behave differently during training, such as dropout. model.eval() # Store the number of sequences that were classified correctly num_correct = 0 # A context m...
_____no_output_____
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
5. Putting it All Together
import matplotlib.pyplot as plt from plot_lib import set_default, plot_state, print_colourbar set_default() def train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=True): # Automatically determine the device that PyTorch should use for computation device = torch.device...
_____no_output_____
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
5. Simple RNN: 10 Epochs. Let's create a simple recurrent network and train it for 10 epochs.
# Setup the training and test data generators difficulty = QRSU.DifficultyLevel.EASY batch_size = 32 train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size) test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size) # Setup the RNN and training settings input_size = train_data_gen.n...
torch.Size([4, 8]) torch.Size([4, 4]) torch.Size([4]) torch.Size([4]) torch.Size([4, 4]) torch.Size([4])
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
6. RNN: Increasing Epoch to 100
# Setup the training and test data generators difficulty = QRSU.DifficultyLevel.EASY batch_size = 32 train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size) test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size) # Setup the RNN and training settings input_size = train_data_gen.n...
[Epoch 100/100] loss: 0.0081, acc: 100.00% - test_loss: 0.0069, test_acc: 100.00%
Apache-2.0
_notebooks/2021-01-07-seq-classification.ipynb
aizardar/blogs
Make RACs from initial structure
import numpy as np import matplotlib.pyplot as plt %matplotlib inline import pickle from collections import defaultdict from molSimplify.Informatics.autocorrelation import* def make_rac(xyz_file, m_depth, l_depth, is_oct): properties = ['electronegativity', 'size', 'polarizability', 'nuclear_charge'] this_mol...
_____no_output_____
MIT
make_racs.ipynb
craigerboi/oer_active_learning
Now we define different RACs with differing feature depths, so that we can perform the grid search in rac_depth_search.ipynb.
mc_depths = [2, 3, 4] lc_depths = [0, 1] oer_desc_data = pickle.load(open("racs_and_desc/oer_desc_data.p", "rb"),) name2oer_desc_and_rac = defaultdict() for mc_d in mc_depths: for lc_d in lc_depths: racs = [] oer_desc_for_ml = [] cat_names_for_ml = [] for name in oer_desc_data: ...
_____no_output_____
MIT
make_racs.ipynb
craigerboi/oer_active_learning
Predicting Boston Housing Prices with Linear Regression. **Author:** [PaddlePaddle](https://github.com/PaddlePaddle) **Date:** 2021.05 **Abstract:** This tutorial demonstrates how to predict Boston housing prices with linear regression. 1. Brief introduction. The classical linear regression model is mainly used to predict datasets in which linear relationships exist. A regression model can be understood as fitting a curve to a set of points; if the fitted curve is a straight line, it is called linear regression, and if it is a quadratic curve, quadratic regression. Linear regression is the simplest regression model. This example briefly shows how to implement Boston housing price prediction with the PaddlePaddle open-source framework. The idea is to assume that the relationship between the house attributes in the uci-housing dataset and the house price can be described by a linear combination of the attributes...
import paddle import numpy as np import os import matplotlib import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import warnings warnings.filterwarnings("ignore") print(paddle.__version__)
2.1.0
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
3. Dataset introduction. This example uses the uci-housing dataset, a classic dataset for linear regression. The dataset contains 7084 values, which can be reshaped into 506 rows of 14 columns. The first 13 columns describe various attributes of a house, and the last column is the median price for that category of house. The attribute columns are summarized here: ![avatar](https://ai-studio-static-online.cdn.bcebos.com/c19602ce74284e3b9a50422f8dc37c0c1c79cf5cd8424994b6a6b073dcb7c057) 3.1 Data processing
# download the data !wget https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data -O housing.data # load the data from the file datafile = './housing.data' housing_data = np.fromfile(datafile, sep=' ') feature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE','DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'] feature...
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
3.2 Data normalization. The figure below shows the value range of each attribute:
sns.boxplot(data=df.iloc[:, 0:13])
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
As the figure above shows, the value ranges of the attributes differ so much that the maxima, minima, and outliers of each attribute cannot even be displayed adequately on a single canvas. We therefore normalize the data. There are at least two reasons for normalization (feature scaling): * Excessively large or small value ranges can cause floating-point overflow or underflow during computation. * Different value ranges give different attributes different importance to the model (at least in the early stages of training), and this implicit assumption is often unjustified; it hampers optimization and greatly lengthens training time.
features_max = housing_data.max(axis=0) features_min = housing_data.min(axis=0) features_avg = housing_data.sum(axis=0) / housing_data.shape[0] BATCH_SIZE = 20 def feature_norm(input): f_size = input.shape output_features = np.zeros(f_size, np.float32) for batch_id in range(f_size[0]): for index in ...
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
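The per-element loops in the `feature_norm` function above can also be written as a vectorized min-max normalization. This sketch uses plain NumPy in place of the notebook's per-batch loop:

```python
import numpy as np

def min_max_normalize(data):
    """Scale each column (feature) to [0, 1] using per-feature min and max."""
    col_min = data.min(axis=0)
    col_max = data.max(axis=0)
    return (data - col_min) / (col_max - col_min)

# Two features with very different ranges, as in the housing data
x = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
scaled = min_max_normalize(x)
print(scaled)  # both columns now span [0, 1]
```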
4. Building the model. Linear regression is simply a fully connected layer from input to output. For the Boston housing dataset, we assume that the relationship between the attributes and the price can be described by a linear combination of the attributes.
class Regressor(paddle.nn.Layer): def __init__(self): super(Regressor, self).__init__() self.fc = paddle.nn.Linear(13, 1,) def forward(self, inputs): pred = self.fc(inputs) return pred
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
Define `draw_train_process`, a method that plots the trend of the loss value during training.
train_nums = [] train_costs = [] def draw_train_process(iters, train_costs): plt.title("training cost", fontsize=24) plt.xlabel("iter", fontsize=14) plt.ylabel("cost", fontsize=14) plt.plot(iters, train_costs, color='red', label='training cost') plt.show()
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
5. Approach 1: model training & prediction with the basic API. 5.1 Model training. The code below trains the model. The loss function used here is the mean squared error (MSE), the most common loss function for linear regression; it measures the difference between the predicted and actual house prices. The loss is optimized with gradient descent.
import paddle.nn.functional as F y_preds = [] labels_list = [] def train(model): print('start training ... ') # switch the model to training mode model.train() EPOCH_NUM = 500 train_num = 0 optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters=model.parameters()) for epoch_id in range(EPOCH_NUM): ...
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
As the figure above shows, the loss decreases as the number of training epochs increases, but because each update computes the loss and adjusts the parameters from only a small batch of samples, the loss curve oscillates. 5.2 Model prediction
# prepare the prediction data INFER_BATCH_SIZE = 100 infer_features_np = np.array([data[:13] for data in test_data]).astype("float32") infer_labels_np = np.array([data[-1] for data in test_data]).astype("float32") infer_features = paddle.to_tensor(infer_features_np) infer_labels = paddle.to_tensor(infer_labels_np) fetch_list = model(infe...
_____no_output_____
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
The figure above shows that the model's predictions are fairly close to the actual values. 6. Approach 2: model training & prediction with the high-level API. You can also train the linear regression model with PaddlePaddle's high-level API, which is more concise and convenient than the low-level API.
import paddle paddle.set_default_dtype("float64") # step 1: define the datasets with the high-level API; no manual data processing is needed, the high-level API handles it for you train_dataset = paddle.text.datasets.UCIHousing(mode='train') eval_dataset = paddle.text.datasets.UCIHousing(mode='test') # step 2: define the model class UCIHousing(paddle.nn.Layer): def __init__(self): super(UCIHousing, self)...
The loss value printed in the log is the current step, and the metric is the average value of previous steps. Epoch 1/5 step 51/51 [==============================] - loss: 624.0728 - 2ms/step Eval begin... step 13/13 [==============================] - loss: 397.2567 - 878us/step Eval samples: 102 Epoc...
Apache-2.0
docs/practices/linear_regression/linear_regression.ipynb
Liu-xiandong/docs
Exercises, Class 02 - Thainá Mariane Souza Silva 816118386. List exercises. 1. Write a program that receives a list of numbers and: - returns the largest element - returns the sum of the elements - returns the number of occurrences of the first element of the list - returns the average of the elements - returns the value closest to ...
import math lista = [int(input("Enter a number: ")) for i in range(5)] maior = max(lista) soma = sum(lista) ocorrencia = lista.count(lista[0]) negativo = sum(i for i in lista if i < 0) media = sum(lista) / len(lista) print("The largest element of the list is: {}".format(maior)) print("The sum of the elements of the list is: {}".fo...
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
2. Write a program that receives two lists and returns True if they are equal or False otherwise. Two lists are equal if they contain the same values in the same order.
# for each x in the input ... split breaks the string according to the chosen separator lista = [input("Enter a value for the list: ") for i in range(3)] lista2 = [input("Enter a value for list 2: ") for i in range(3)] if lista == lista2: print("True") else: print("False")
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
3. Write a program that receives two lists and returns True if they have the same elements or False otherwise. Two lists have the same elements when they consist of the same values, though not necessarily in the same order.
# for each x in the input ... split breaks the string according to the chosen separator lista = [input("Enter a value for the list: ") for i in range(3)] lista2 = [input("Enter a value for list 2: ") for i in range(3)] result = lista if lista == lista2: print("true") else: i=0 for i in ...
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
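A compact way to solve this exercise, instead of the element-by-element loop above, is to compare sorted copies of the two lists; sorting handles duplicates correctly, which a set comparison would not:

```python
def same_elements(a, b):
    """True if the two lists contain the same values, ignoring order."""
    return sorted(a) == sorted(b)

print(same_elements([1, 2, 3], [3, 1, 2]))  # True
print(same_elements([1, 2, 2], [1, 2, 3]))  # False (different multiplicities)
```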
4. Write a program that iterates over a list in the following format: [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]. The list records the number of fouls each team committed in each match. In the list above, in the match between Brasil and Italia, Brasil committed 10 fouls and Italia committed 9. The ...
import operator lista = [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]] dicionario = {"Brasil": 0, "Italia": 0, "Espanha": 0} total_faltas = 0 for item in lista: total_faltas += sum(item[2]) dicionario[item[0]] += item[2][0] dicionario[item[1]] += item[2][1...
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
Dictionary exercises. 5. Write a program that counts the number of vowels in a string and stores the count in a dictionary, where the key is the vowel.
palavra = input("Enter a word: ") vogal = ['a', 'e', 'i', 'o', 'u'] dicionario = {'a': 0, 'e': 0, 'i': 0, 'o': 0, 'u': 0} for letra in palavra: if letra in vogal: dicionario[letra] = dicionario[letra] + 1 print(dicionario)
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
6. Write a program that reads two grades for several students and stores the grades in a dictionary, where the key is the student's name. Input ends when an empty string is read as the name. Write a function that returns a student's average, given the name.
texto = input("Enter the student's name and two grades (name,grade1,grade2), separating records with semicolons: ") texto = texto.split(";") notas = {} count = 0 for n in texto: nota = n.split(",") notas[nota[0]] = {"nota1": nota[1], "nota2": nota[2]} for n in notas: media = (int(notas[n]['nota1']) + int(notas[...
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
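A cleaner sketch of the same idea, with the averaging logic factored into a function as the exercise asks. The input string `raw` and the student names below are made up for illustration:

```python
def average(grades, name):
    """Return the mean of a student's two grades, given the name."""
    g = grades[name]
    return (g["nota1"] + g["nota2"]) / 2

# Build the dictionary from "name,grade1,grade2" records separated by ';'
raw = "Ana,8,6;Bruno,7,9"
grades = {}
for record in raw.split(";"):
    name, n1, n2 = record.split(",")
    grades[name] = {"nota1": float(n1), "nota2": float(n2)}

print(average(grades, "Ana"))    # 7.0
print(average(grades, "Bruno"))  # 8.0
```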
7. A kart track allows 10 laps for each of 6 racers. Write a program that reads all lap times in seconds and stores them in a dictionary, where the key is the racer's name. At the end, report who set the best lap of the race and on which lap, plus the final classification in order (champion first). The champion is...
i = 0 while i < 6: voltas = input("Enter the lap times, e.g. 'Driver': [2,6,8,1] ") dic = dict(x.split() for x in voltas.splitlines()) i += 1 print(dic)
_____no_output_____
MIT
Exercicio02.ipynb
thainamariianr/LingProg
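A possible solution sketch for this exercise. Since the champion rule is cut off above, this version assumes the champion is the driver with the lowest total time; the driver names and lap times are made-up sample data:

```python
times = {  # hypothetical lap times in seconds, keyed by driver
    "Ana":   [52.1, 51.7, 53.0],
    "Bruno": [50.9, 52.4, 51.2],
    "Carla": [51.5, 51.1, 52.8],
}

# Best single lap of the race, and in which lap it happened (1-based)
best_time, best_driver, best_lap = min(
    (t, driver, lap)
    for driver, laps in times.items()
    for lap, t in enumerate(laps, start=1)
)
print(f"Best lap: {best_driver}, lap {best_lap} ({best_time}s)")

# Final classification: lowest total time wins (assumed rule)
ranking = sorted(times, key=lambda d: sum(times[d]))
print("Classification:", ranking)
```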
Self Study 3 In this self study we perform character recognition using SVM classifiers. We use the MNIST dataset, which consists of 70000 handwritten digits 0..9 at a resolution of 28x28 pixels. Stuff we need:
import matplotlib.pyplot as plt import numpy as np import time from sklearn.neural_network import MLPClassifier from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix,accuracy_score from sklearn.datasets import fetch_openml ##couldn't run with the ...
_____no_output_____
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
Now we get the MNIST data. Using the fetch_openml function, this will be downloaded from the web and stored in the directory you specify as data_home (replace my path in the following cell):
from sklearn.datasets import fetch_openml mnist = fetch_openml(name='mnist_784', data_home='/home/starksultana/Documentos/Mestrado_4o ano/2o sem AAU/ML/ML_selfstudy3')
_____no_output_____
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
The data has .data and .target attributes. The following gives us some basic information on the data:
print("Number of datapoints: {}\n".format(mnist.data.shape[0])) print("Number of features: {}\n".format(mnist.data.shape[1])) print("features: ", mnist.data[0].reshape(196,4)) print("List of labels: {}\n".format(np.unique(mnist.target)))
Number of datapoints: 70000 Number of features: 784 features: [[ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [ 0. 0. 0. 0.] [...
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
We can plot individual datapoints as follows:
index = 9 print("Value of datapoint no. {}:\n{}\n".format(index,mnist.data[index])) print("As image:\n") plt.imshow(mnist.data[index].reshape(28,28),cmap=plt.cm.gray_r) #plt.show()
Value of datapoint no. 9: [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0....
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
To make things a little simpler (and faster!), we can extract binary subsets of the data that contain only the data for two selected digits:
digit0='4' digit1='5' mnist_bin_data=mnist.data[np.logical_or(mnist.target==digit0,mnist.target==digit1)] mnist_bin_target=mnist.target[np.logical_or(mnist.target==digit0,mnist.target==digit1)] print("The first datapoint now is: \n") plt.imshow(mnist_bin_data[0].reshape(28,28),cmap=plt.cm.gray_r) plt.show() print(mnist...
The first datapoint now is:
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
**Exercise 1 [SVM]:** Split the mnist_bin data into training and test set. Learn different SVM models by varying the kernel functions (SVM). For each configuration, determine the time it takes to learn the model, and the accuracy on the test data. You can get the current time using:`import time` `now = time.time()`*Cau...
### Exercise 1 ''' Completely dies with 7 and 9, can't make it work :( On the rest of the tasks it performed quite well, with really high accuracies; for example, 1 vs 0 ran with 99% accuracy at a test size of 30%, with 7 misclassifications, in 1.72 s. 6 vs 3 had only 23 misclassifications but ran in 4 t...
_____no_output_____
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
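A sketch of the timing experiment the exercise asks for. To keep it fast and self-contained, it trains on a small synthetic dataset from `make_classification` instead of the MNIST binary subset; substitute `mnist_bin_data`/`mnist_bin_target` to reproduce the numbers discussed above:

```python
import time
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification

# Small synthetic stand-in for the binary MNIST subset
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for kernel in ["linear", "poly", "rbf"]:
    now = time.time()  # start the clock, as suggested in the exercise
    clf = SVC(kernel=kernel)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{kernel:6s} acc={acc:.4f} time={time.time() - now:.2f}s")
```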
--- Exercise 2, 1st approach: try to reshape the data? normalize the pixels?
##exercise 2 from sklearn.preprocessing import StandardScaler now = time.time() #x: np.ndarray = mnist_bin_data #y: np.ndarray = mnist_bin_target print("don't worry i've just started") scaler = StandardScaler() trnX, tstX, trnY, tstY = train_test_split(mnist.data, mnist.target, test_size=0.3,random_state=20) pri...
_____no_output_____
MIT
ML_selfstudy3/MLSelfStudy3-F20.ipynb
Jbarata98/ML_AAU1920
Object classification
from sklearn.linear_model import SGDClassifier from sklearn.model_selection import train_test_split import numpy as np import os import ast from glob import glob import random import traceback from tabulate import tabulate import pickle from sklearn.pipeline import make_pipeline from sklearn.preprocessing import Stan...
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Parameters
new_data=True load_old_params=True save_params=False selected_space=True from google.colab import drive drive.mount('/content/drive')
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Utils functions
def translate(name): translate_dict={"apple":"mela", "ball":"palla", "bell pepper":"peperone", "binder":"raccoglitore", "bowl":"ciotola", "calculator":"calcolatrice", "camera":"fotocamera", ...
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Data
obj_dir = "/content/drive/My Drive/Tesi/Code/Object_classification" #obj_dir = "/Users/marco/Google Drive/Tesi/Code/Object_classification" data_dir = obj_dir+"/Data" model_filename = obj_dir+"/model.pkl" exclusion_list=["binder","camera","cell phone","dry battery"] test_folder=["apple_3", "bell_pepper_1", ...
(download progress percentages, truncated)
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Save input data
if selected_space: new_y_train=[] for i in y_train: new_label=dictionary[i][1] #new_label=new_label.split("-")[0] new_y_train.append(new_label) new_y_test=[] for i in y_test: new_label=dictionary[i][1] #new_label=new_label.split("-")[0] new_y_test....
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Classifier fitting
if load_old_params and False: with open(model_filename, 'rb') as file: clf = pickle.load(file) else: clf = RandomForestClassifier(n_jobs=-1, n_estimators=30) clf.fit(X_train,y_train) print(clf.score(X_test,y_test))
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Saving parameters
if save_params: with open(model_filename, 'wb') as file: pickle.dump(clf, file)
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Score
def classify_prediction(prediction): sure=[] unsure=[] dubious=[] cannot_answer=[] for pred in prediction: o,p=pred values=list(p.values()) keys=list(p.keys()) # sure if values[0]>0.8: sure.append(pred) # unsure elif values...
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
Test
clf.score(X_test,y_test) plt.plot(clf.feature_importances_) clf.feature_importances_ from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix import pandas as pd def classification_report(y_true, y_pred): print(f"Accuracy: {accuracy_score(y_true, y_pred)}.") print(f"...
_____no_output_____
MIT
Test/Object_classification/Object_classification_Random_Forest.ipynb
marcolamartina/LamIra
You are given two strings as input. You want to find out if these **two strings** are **at most one edit away** from each other.An edit is defined as either- **inserting a character**: length increased by 1- **removing a character**: length decreased by 1- **replacing a character**: length doesn't change*this edit dist...
# method 1: brutal force # O(N) # N is the length of the **shorter** string def oneEdit(s1, s2): l1 = len(s1) l2 = len(s2) if (l1 == l2): return checkReplace(s1, s2) elif abs(l1-l2) == 1: return checkInsRem(s1, s2) else: return False def checkReplace(s1, s2): foundDiff ...
_____no_output_____
MIT
notebooks/ch1_arrays_and_strings/1.5 One Away.ipynb
Julyzzzzzz/Practice-on-data-structures-and-algorithms
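Since the code above is truncated, here is a complete two-pointer version of the same idea, merging the insert/remove and replace checks into a single pass (still O(N) in the length of the shorter string):

```python
def one_edit_away(s1, s2):
    """True if s1 and s2 differ by at most one insert, remove, or replace."""
    if abs(len(s1) - len(s2)) > 1:
        return False
    # Make s1 the shorter (or equal-length) string
    if len(s1) > len(s2):
        s1, s2 = s2, s1
    i = j = 0
    found_diff = False
    while i < len(s1) and j < len(s2):
        if s1[i] != s2[j]:
            if found_diff:          # second mismatch: more than one edit
                return False
            found_diff = True
            if len(s1) == len(s2):  # replace: advance both pointers
                i += 1
        else:
            i += 1
        j += 1                      # always advance the longer string
    return True

print(one_edit_away("pale", "ple"))   # True  (one removal)
print(one_edit_away("pale", "bale"))  # True  (one replacement)
print(one_edit_away("pale", "bake"))  # False (two edits needed)
```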
Differentially Methylated Genes - Pairwise
import pandas as pd import anndata import xarray as xr from ALLCools.plot import * from ALLCools.mcds import MCDS from ALLCools.clustering import PairwiseDMG, cluster_enriched_features import pathlib
_____no_output_____
MIT
docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb
mukamel-lab/ALLCools
Parameters
adata_path = '../step_by_step/100kb/adata.with_coords.h5ad' cluster_col = 'L1' # change this to the paths to your MCDS files gene_fraction_dir = 'gene_frac/' obs_dim = 'cell' var_dim = 'gene' # DMG mc_type = 'CHN' top_n = 1000 adj_p_cutoff = 1e-3 delta_rate_cutoff = 0.3 auroc_cutoff = 0.9 random_state = 0 n_jobs = 30
_____no_output_____
MIT
docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb
mukamel-lab/ALLCools
Load
adata = anndata.read_h5ad(adata_path) cell_meta = adata.obs.copy() cell_meta.index.name = obs_dim gene_meta = pd.read_csv(f'{gene_fraction_dir}/GeneMetadata.csv.gz', index_col=0) gene_mcds = MCDS.open(f'{gene_fraction_dir}/*_da_frac.mcds', use_obs=cell_meta.index) gene_mcds
_____no_output_____
MIT
docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb
mukamel-lab/ALLCools
Pairwise DMG
pwdmg = PairwiseDMG(max_cell_per_group=1000, top_n=top_n, adj_p_cutoff=adj_p_cutoff, delta_rate_cutoff=delta_rate_cutoff, auroc_cutoff=auroc_cutoff, random_state=random_state, n_jobs=n_jobs) pwdmg.fit...
_____no_output_____
MIT
docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb
mukamel-lab/ALLCools
Aggregating Cluster DMG. Weighted total AUROC aggregated from the pairwise comparisons. Aggregate Pairwise Comparisons
cluster_dmgs = pwdmg.aggregate_pairwise_dmg(adata, groupby=cluster_col) # save all the DMGs with pd.HDFStore(f'{cluster_col}.ClusterRankedPWDMG.{mc_type}.hdf') as hdf: for cluster, dmgs in cluster_dmgs.items(): hdf[cluster] = dmgs[dmgs > 0.0001]
_____no_output_____
MIT
docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb
mukamel-lab/ALLCools
Import libraries
import os import warnings warnings.filterwarnings('ignore') #Packages related to data importing, manipulation, exploratory data #analysis, data understanding import numpy as np import pandas as pd from pandas import Series, DataFrame from termcolor import colored as cl # text customization #Packages related to data vi...
_____no_output_____
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Importing data. This dataset contains real bank transactions made by European cardholders in 2013; it can be downloaded here: https://www.kaggle.com/mlg-ulb/creditcardfraud
data=pd.read_csv("creditcard.csv")
_____no_output_____
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Checking transactions: we can see that only 0.17% of the transactions are fraudulent.
Total_transactions = len(data) normal = len(data[data.Class == 0]) fraudulent = len(data[data.Class == 1]) fraud_percentage = round(fraudulent/normal*100, 2) print(cl('Total number of Transactions are {}'.format(Total_transactions), attrs = ['bold'])) print(cl('Number of Normal Transactions are {}'.format(normal), attrs...
Total number of Transactions are 284807 Number of Normal Transactions are 284315 Number of fraudulent Transactions are 492 Percentage of fraud Transactions is 0.17
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Feature Scaling
sc = StandardScaler() amount = data['Amount'].values data['Amount'] = sc.fit_transform(amount.reshape(-1, 1))
_____no_output_____
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Dropping columns and other features
data.drop(['Time'], axis=1, inplace=True) data.drop_duplicates(inplace=True) X = data.drop('Class', axis = 1).values y = data['Class'].values
_____no_output_____
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Training the model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1)
_____no_output_____
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Decision Trees
DT = DecisionTreeClassifier(max_depth = 4, criterion = 'entropy') DT.fit(X_train, y_train) dt_yhat = DT.predict(X_test) print('Accuracy score of the Decision Tree model is {}'.format(accuracy_score(y_test, dt_yhat))) print('F1 score of the Decision Tree model is {}'.format(f1_score(y_test, dt_yhat)))
F1 score of the Decision Tree model is 0.7521367521367521
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
K nearest neighbor
n = 7 KNN = KNeighborsClassifier(n_neighbors = n) KNN.fit(X_train, y_train) knn_yhat = KNN.predict(X_test) print('Accuracy score of the K-Nearest Neighbors model is {}'.format(accuracy_score(y_test, knn_yhat))) print('F1 score of the K-Nearest Neighbors model is {}'.format(f1_score(y_test, knn_yhat)))
Accuracy score of the K-Nearest Neighbors model is 0.999288989494457 F1 score of the K-Nearest Neighbors model is 0.7949790794979079
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Logistic Regression
lr = LogisticRegression() lr.fit(X_train, y_train) lr_yhat = lr.predict(X_test) print('Accuracy score of the Logistic Regression model is {}'.format(accuracy_score(y_test, lr_yhat))) print('F1 score of the Logistic Regression model is {}'.format(f1_score(y_test, lr_yhat)))
F1 score of the Logistic Regression model is 0.6666666666666666
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
SVM classifier
svm = SVC() svm.fit(X_train, y_train) svm_yhat = svm.predict(X_test) print('Accuracy score of the Support Vector Machines model is {}'.format(accuracy_score(y_test, svm_yhat))) print('F1 score of the Support Vector Machines model is {}'.format(f1_score(y_test, svm_yhat)))
F1 score of the Support Vector Machines model is 0.7813953488372093
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
Random Forest
rf = RandomForestClassifier(max_depth = 4) rf.fit(X_train, y_train) rf_yhat = rf.predict(X_test) print('Accuracy score of the Random Forest model is {}'.format(accuracy_score(y_test, rf_yhat))) print('F1 score of the Random Forest model is {}'.format(f1_score(y_test, rf_yhat)))
F1 score of the Random Forest model is 0.7397260273972602
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
XGBClassifier
xgb = XGBClassifier(max_depth = 4) xgb.fit(X_train, y_train) xgb_yhat = xgb.predict(X_test) print('Accuracy score of the XGBoost model is {}'.format(accuracy_score(y_test, xgb_yhat))) print('F1 score of the XGBoost model is {}'.format(f1_score(y_test, xgb_yhat)))
F1 score of the XGBoost model is 0.8495575221238937
MIT
Credit Card Fraud Detection.ipynb
mouhamadibrahim/Credit-Card-Fraud-Detection
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) Serving Spark NLP with API: Synapse ML SynapseML Installation
import json import os from google.colab import files license_keys = files.upload() with open(list(license_keys.keys())[0]) as f: license_keys = json.load(f) # Defining license key-value pairs as local variables locals().update(license_keys) # Adding license key-value pairs to environment variables os.environ.up...
_____no_output_____
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
Imports and Spark Session
import pandas as pd import pyspark import sparknlp import sparknlp_jsl from pyspark.sql import SparkSession from pyspark.ml import Pipeline, PipelineModel import pyspark.sql.functions as F from pyspark.sql.types import * from sparknlp.base import * from sparknlp.annotator import * from sparknlp_jsl.annotator import * f...
_____no_output_____
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
Preparing a pipeline with Entity Resolution
# Annotator that transforms a text column from dataframe into an Annotation ready for NLP document_assembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document") # Sentence Detector DL annotator, processes various sentences per line sentenceDetectorDL = SentenceDetectorDLModel.pretrained(...
sentence_detector_dl_healthcare download started this may take some time. Approximate size to download 367.3 KB [OK!] embeddings_clinical download started this may take some time. Approximate size to download 1.6 GB [OK!] ner_clinical download started this may take some time. Approximate size to download 13.9 MB [OK!] ...
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
Adding a clinical note as a text example
clinical_note = """A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis, and ob...
_____no_output_____
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
Creating a JSON payload with the clinical note. Since SynapseML runs a web service that accepts HTTP calls in JSON format, we wrap the clinical note in a JSON object.
data_json = {"text": clinical_note }
_____no_output_____
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
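The payload the service will receive is just the dict serialized with the standard library; a minimal sketch (the note text here is a placeholder, not the full clinical note above):

```python
import json

data_json = {"text": "A 28-year-old female with a history of gestational diabetes mellitus ..."}

# Serialize to the JSON string that would travel in the HTTP request body
payload = json.dumps(data_json)

# Round-trip check: decoding recovers the original dict
assert json.loads(payload) == data_json
```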
Running a Synapse server
serving_input = spark.readStream.server() \ .address("localhost", 9999, "benchmark_api") \ .option("name", "benchmark_api") \ .load() \ .parseRequest("benchmark_api", data.schema) serving_output = resolver_p_model.transform(serving_input) \ .makeReply("icd10cm_code") server = serving_output.writeS...
_____no_output_____
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
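A client would then call the served pipeline over HTTP. This sketch only *builds* the request without sending it (the host, port, and path mirror the `.address("localhost", 9999, "benchmark_api")` call above; the exact URL layout is an assumption):

```python
import json
from urllib.request import Request

data_json = {"text": "Example clinical note ..."}

# Build (but do not send) a POST to the Synapse serving endpoint;
# path "benchmark_api" is assumed from the serving_input definition
req = Request(
    "http://localhost:9999/benchmark_api",
    data=json.dumps(data_json).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url, req.get_method())
```

Sending it with `urllib.request.urlopen(req)` (or `requests.post`) against a running server would return the reply built by `makeReply("icd10cm_code")`.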
Checking Results
for i in range (0, len(response_list[0].json())): print(response_list[0].json()[i]['result'])
O2441 O2411 E11 K8520 B15 E669 Z6841 R35 R631 R630 R111 J988 E11 G600 K130 R52 M6283 R4689 O046 E785 E872 E639 H5330 R799 R829 E785 A832 G600 J988
Apache-2.0
tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb
iamvarol/spark-nlp-workshop
5.1 - Introduction to convnetsThis notebook contains the code sample found in Chapter 5, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: i...
from keras import layers from keras import models model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64...
_____no_output_____
MIT
.ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb
zhangdongwl/deep-learning-with-python-notebooks
Let's display the architecture of our convnet so far:
model.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 26, 26, 32) 320 __________________________________...
MIT
.ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb
zhangdongwl/deep-learning-with-python-notebooks
You can see above that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width and height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to the `Conv2D` layers (e.g. 32 or 64).The ne...
model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10, activation='softmax'))
_____no_output_____
MIT
.ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb
zhangdongwl/deep-learning-with-python-notebooks
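The shrinking spatial dimensions in the summary follow directly from 'valid' convolutions and 2×2 pooling; a quick sketch of the size arithmetic (assuming stride 1 and no padding, as in the model above):

```python
def conv_out(size, kernel):
    # 'valid' convolution with stride 1: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size, window):
    # non-overlapping pooling: output = floor(input / window)
    return size // window

s = 28
s = conv_out(s, 3)   # 26
s = pool_out(s, 2)   # 13
s = conv_out(s, 3)   # 11
s = pool_out(s, 2)   # 5
s = conv_out(s, 3)   # 3
print(s)  # → 3, matching the final (3, 3, 64) feature map
```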
We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network looks like:
model.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 26, 26, 32) 320 __________________________________...
MIT
.ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb
zhangdongwl/deep-learning-with-python-notebooks
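The softmax activation on the final layer turns the 10 raw outputs into a probability distribution over the classes; a minimal NumPy sketch (the logits are made up):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0, 0.0, 0.5, -0.5, 1.5, 0.2, -2.0])
p = softmax(logits)
print(p.sum())          # probabilities sum to 1
print(int(p.argmax()))  # → 0: the largest logit wins
```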
As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers.Now, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter 2.
from keras.datasets import mnist from keras.utils import to_categorical (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28, 28, 1)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28, 28, 1)) test_i...
Epoch 1/5 60000/60000 [==============================] - 10s 173us/step - loss: 0.1661 - accuracy: 0.9479 Epoch 2/5 60000/60000 [==============================] - 10s 165us/step - loss: 0.0454 - accuracy: 0.9857 Epoch 3/5 60000/60000 [==============================] - 10s 164us/step - loss: 0.0314 - accuracy: 0.9900 Ep...
MIT
.ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb
zhangdongwl/deep-learning-with-python-notebooks
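The `Flatten` layer does no learning at all — it is a reshape, and 3 × 3 × 64 = 576; a one-line demonstration:

```python
import numpy as np

# A dummy feature map with the convnet's final spatial shape
feature_map = np.zeros((3, 3, 64))
flat = feature_map.reshape(-1)
print(flat.shape)  # → (576,)
```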
Let's evaluate the model on the test data:
test_loss, test_acc = model.evaluate(test_images, test_labels) test_acc
_____no_output_____
MIT
.ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb
zhangdongwl/deep-learning-with-python-notebooks
A short video on how bagging works https://www.youtube.com/watch?v=2Mg8QD0F1dQ
def bootstrap(X,Y, n=None): #Bootstrap function if n == None: n = len(X) resample_i = np.floor(np.random.rand(n)*len(X)).astype(int) X_resample = X[resample_i] Y_resample = Y[resample_i] return X_resample, Y_resample def bagging(n_sample,n_bag): #Perform bagging procedure. Bootstrap and o...
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
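The bootstrap step above can be sketched self-contained; this version uses NumPy's `default_rng` integer sampling instead of the notebook's `np.floor(np.random.rand(n)*len(X))` indexing, but the behavior (sampling n rows uniformly with replacement, keeping X and Y aligned) is the same:

```python
import numpy as np

def bootstrap(X, Y, n=None, seed=None):
    """Sample n (default: len(X)) index positions uniformly with replacement."""
    rng = np.random.default_rng(seed)
    if n is None:
        n = len(X)
    idx = rng.integers(0, len(X), size=n)
    return X[idx], Y[idx]

X = np.arange(10).reshape(10, 1)
Y = np.arange(10)
Xb, Yb = bootstrap(X, Y, seed=0)
print(Xb.shape, Yb.shape)  # resamples keep the original sizes
```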
First, we need to perform the splitting procedure as we did in the CNN notebook to get train and test sets. Now let's perform bagging with an ensemble of 50 models, each trained on 3800 bootstrapped samples from X_train.
bagModel = bagging(3800,50)
Model fitting on the 1th bootstrapped set (28, 300, 1) Epoch 1/20 3800/3800 [==============================] - 201s 53ms/step - loss: 0.1491 - acc: 0.4853 - mean_squared_error: 0.1491 Epoch 2/20 3800/3800 [==============================] - 7s 2ms/step - loss: 0.1416 - acc: 0.5047 - mean_squared_error: 0.1416 Epoch 3/20...
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
Bagging different numbers of models in an ensemble to test how accuracy changes
bagModel2 = bagging(3800,10) bagged_predict = predict(bagModel) bagged_predict.keys() Accuracy = baggedAccuracy(bagged_predict,Y_test) print("Bagged Accuracy(50 models): %.2f%% "%Accuracy)
Bagged Accuracy(50 models): 62.48%
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
Improved accuracy! Variance reduction helps!
bagModel2.keys() bag_pred2 = predict(bagModel2) Accuracy2= baggedAccuracy(bag_pred2,Y_test) print("Bagged Accuracy(10 models): %.2f%% "%Accuracy2)
Bagged Accuracy(10 models): 62.29%
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
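The improvement is the classic bagging effect: averaging B independent estimates divides the variance by roughly B. A quick simulation with synthetic noise (real ensemble members are correlated, so the reduction in practice — as in the 62.29% vs 62.48% numbers above — is much smaller than 1/B):

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_mean_var(n_models, trials=20000):
    # variance of the mean of n_models independent unit-variance estimates
    est = rng.normal(0.0, 1.0, size=(trials, n_models)).mean(axis=1)
    return est.var()

v1, v10, v50 = ensemble_mean_var(1), ensemble_mean_var(10), ensemble_mean_var(50)
print(v1, v10, v50)  # roughly 1, 1/10, 1/50
```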
The results for the 10-model and 50-model ensembles are only slightly different
n_sample = int(len(X_train)*0.6) n_sample model_num_list = [10,20,30,40,50] def accuracy_bag(n_sample,model_num_list): model_bags = [] accuracy_bags = [] for i in model_num_list: print('Bagging {} models'.format(i)) bagmodel = bagging(n_sample,i) bag_pred = predict(bagmodel) ...
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
I tried to get the accuracy of ensembles from 1 to 50 models but my machine broke down overnight. I guess this is where GCP becomes handy. Next up: perform bagging for the LSTM model. You can see from the accuracy plot: 3/5 of the bagging accuracies are better than the single-model accuracy without bagging (Single model accu...
accuracybags = accuracy_bag(n_sample,model_num_list) accuracybags accuracybags_array = np.asarray(accuracybags) from matplotlib import pyplot as plt plt.figure(figsize=(10,10),dpi=80) plt.scatter(model_num_list,accuracybags) plt.xlabel('Ensamble model quantity',fontsize=20) plt.ylabel('Bagging accuracy',fontsize=16)
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
The accuracy doesn't show an ascending trend as the ensemble contains more models, which is odd. I then ran the model on 3800 samples out of the 4522 observations in the train set instead of 2639 samples, with the same train/test split.
n_sample = 3800 accuracybag2 = accuracy_bag(n_sample,[20,30]) accuracybag3 = accuracy_bag(n_sample,[40,50]) accuracybag4 = accuracy_bag(n_sample,[10]) accuracybag2 accuracybag3
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
I did the bagging separately because I was afraid of my machine breaking down.
accuracy_3800sample = accuracybag4 +accuracybag2 + accuracybag3 accuracy_2640sample = accuracybags accuracy_3800sample
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
Combine the accuracy results for 2640 and 3800 samples for bagging with (10, 20, 30, 40, 50) models
plt.figure(figsize=(10,10),dpi=80) scatter_3800sample = plt.scatter([10,20,30,40,50],accuracy_3800sample,color = 'Blue') scatter_2640sample = plt.scatter([10,20,30,40,50],accuracy_2640sample,color = 'Green') plt.xlabel('Bagging Ensamble Model Quantity',fontsize=20) plt.ylabel('Bagging Accuracy(%)',fontsize=16) CNN_accu...
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
Another way to combine the ensemble is majority voting: convert each model's predictions to hard class labels and take the most common vote for each test sample.
from collections import Counter bagModel.keys() lists = [] for i in range(30): model_number = "model%s" % (i+1) pred_list = conversion(prediction[model_number]) lists.append(pred_list) Ytest_list=conversion(Y_test) pred_list = [] for i in range(1522): for j in range(30): new_list = [] ne...
_____no_output_____
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
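The `Counter`-based combination in the cell above amounts to majority voting; a minimal self-contained sketch (the labels are illustrative):

```python
from collections import Counter

def majority_vote(votes):
    # most_common(1) returns [(label, count)] for the top label
    return Counter(votes).most_common(1)[0][0]

# 30 models voting on one test sample
votes = ["neg"] * 12 + ["pos"] * 18
print(majority_vote(votes))  # → pos
```

Hard voting discards each model's confidence, which is one reason it can underperform averaging the predicted probabilities.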
The result is worse!
from keras.layers import Dropout, Convolution2D, MaxPooling2D top_words = 1000 max_words = 150 filters = 32 #filter = 1 x KERNEL input_shape = (X_train.shape[1:]) print(input_shape) # create the model model = Sequential() model.add(Convolution2D(16, kernel_size=3, activation='elu', padding='same', ...
(28, 300, 1) _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_305 (Conv2D) (None, 28, 300, 16) 160 ___________________________________________...
MIT
examples/CNN Bagging.ipynb
sarahalamdari/DIRECT_capstone
A basic machine learning problem: image classification ```{admonition} Can a machine (function) tell the difference? Mathematically, a gray-scale image can be taken as a matrix in $R^{n_0\times n_0}$. The next figure shows different results from: human vision an...
from IPython.display import HTML HTML('<iframe id="kaltura_player" src="https://cdnapisec.kaltura.com/p/2356971/sp/235697100/embedIframeJs/uiconf_id/41416911/partner_id/2356971?iframeembed=true&playerId=kaltura_player&entry_id=1_b5pq3bnx&flashvars[streamerType]=auto&amp;flashvars[localizationCode]=en&amp;flashvars[lead...
/anaconda3/lib/python3.7/site-packages/IPython/core/display.py:689: UserWarning: Consider using IPython.display.IFrame instead warnings.warn("Consider using IPython.display.IFrame instead")
MIT
ch01/Untitled.ipynb
liuzhengqi1996/math452
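The point that a gray-scale image is just a matrix of pixel intensities can be made concrete with NumPy (the 4×4 "image" below is made up):

```python
import numpy as np

# A tiny 4x4 "image": each entry is a pixel intensity in [0, 255]
img = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
], dtype=np.uint8)

print(img.shape)  # (4, 4): a matrix in R^{4x4}
print(img.astype(np.float32).max() / 255)  # → 1.0 after normalizing to [0, 1]
```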
Calculating Pagerank on Wikidata
import numpy as np import pandas as pd import os %env MY=/Users/pedroszekely/data/wikidata-20200504 %env WD=/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504
env: MY=/Users/pedroszekely/data/wikidata-20200504 env: WD=/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
We need to filter the wikidata edge file to remove all edges where `node2` is a literal. We can do this by running `ifexists` to keep edges where `node2` also appears in `node1`.This takes 2-3 hours on a laptop.
!time gzcat "$WD/wikidata_edges_20200504.tsv.gz" \ | kgtk ifexists --filter-on "$WD/wikidata_edges_20200504.tsv.gz" --input-keys node2 --filter-keys node1 \ | gzip > "$MY/wikidata-item-edges.tsv.gz" !gzcat $MY/wikidata-item-edges.tsv.gz | wc
460763981 3225347876 32869769062
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
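The semantics of that `ifexists` call — keep edges whose `node2` also appears as some edge's `node1`, i.e. `node2` is an entity rather than a literal — can be sketched in pandas on a toy edge list (kgtk streams the real 460M-edge file; this in-memory version would not scale):

```python
import pandas as pd

edges = pd.DataFrame({
    "node1": ["Q8",      "Q8",      "Q331769"],
    "label": ["P31",     "P1082",   "P31"],
    "node2": ["Q331769", "8000000", "Q9415"],
})

# Keep edges where node2 also appears as a node1 in the same file
item_edges = edges[edges["node2"].isin(edges["node1"])]
print(item_edges)  # only the Q8 -P31-> Q331769 edge survives
```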
We have 460 million edges that connect items to other items, let's make sure this is what we want before spending a lot of time computing pagerank
!gzcat $MY/wikidata-item-edges.tsv.gz | head
id node1 label node2 rank node2;magnitude node2;unit node2;date node2;item node2;lower node2;upper node2;latitude node2;longitude node2;precision node2;calendar node2;entity-type Q8-P31-1 Q8 P31 Q331769 normal Q331769 item Q8-P31-2 Q8 P31 Q60539479 normal Q60539479 item Q8-P31-3 Q8 P31 Q9415 normal ...
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
Let's do a sanity check to make sure that we have the edges that we want. We can do this by counting how many edges there are of each `entity-type`. Good news: we only have items and properties.
!time gzcat $MY/wikidata-item-edges.tsv.gz | kgtk unique $MY/wikidata-item-edges.tsv.gz --column 'node2;entity-type'
node1 label node2 item count 460737401 property count 26579 gzcat: error writing to output: Broken pipe gzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-item-edges.tsv.gz: uncompress failed real 21m44.450s user 21m29.078s sys 0m7.958s
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
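`kgtk unique` on a column is the streaming analogue of a grouped count; the same tally in pandas on a toy frame:

```python
import pandas as pd

edges = pd.DataFrame({"entity-type": ["item", "item", "property", "item"]})
counts = edges["entity-type"].value_counts()
print(counts.to_dict())  # → {'item': 3, 'property': 1}
```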
We only need `node1`, `label` and `node2`, so let's remove the other columns
!time gzcat $MY/wikidata-item-edges.tsv.gz | kgtk remove-columns -c 'id,rank,node2;magnitude,node2;unit,node2;date,node2;item,node2;lower,node2;upper,node2;latitude,node2;longitude,node2;precision,node2;calendar,node2;entity-type' \ | gzip > $MY/wikidata-item-edges-only.tsv.gz !gzcat $MY/wikidata-item-edges-only.tsv....
_____no_output_____
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
The `kgtk graph-statistics` command will compute pagerank. It will run out of memory on a laptop with 16GB of memory.
!time kgtk graph_statistics --directed --degrees --pagerank --log $MY/log.txt -i $MY/wikidata-item-edges-only.tsv > $MY/wikidata-pagerank-degrees.tsv
/bin/sh: line 1: 89795 Killed: 9 kgtk graph-statistics --directed --degrees --pagerank --log $MY/log.txt -i $MY/wikidata-item-edges-only.tsv > $MY/wikidata-pagerank-degrees.tsv real 32m57.832s user 19m47.624s sys 8m58.352s
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
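Under the hood, pagerank is a power iteration over the link structure. A toy dense-matrix sketch on a three-node graph — the real kgtk run uses graph-tool and is nothing like this in scale, but the fixed point is the same quantity:

```python
import numpy as np

def pagerank(edges, damping=0.85, iters=100):
    """Power iteration on an edge list of (src, dst) node ids."""
    nodes = sorted({n for e in edges for n in e})
    idx = {n: i for i, n in enumerate(nodes)}
    N = len(nodes)
    M = np.zeros((N, N))           # column-stochastic transition matrix
    out = np.zeros(N)
    for s, d in edges:
        M[idx[d], idx[s]] += 1.0
        out[idx[s]] += 1
    for j in range(N):
        M[:, j] = M[:, j] / out[j] if out[j] else 1.0 / N  # dangling: jump anywhere
    r = np.full(N, 1.0 / N)
    for _ in range(iters):
        r = (1 - damping) / N + damping * M @ r
    return dict(zip(nodes, r))

ranks = pagerank([("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")])
print({k: round(v, 3) for k, v in ranks.items()})  # C, with two in-links, ranks highest
```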
We ran it on a server with 256GB of memory. It used 50GB and produced the following files:
!exa -l "$WD"/*sorted* !gzcat "$WD/wikidata-pagerank-only-sorted.tsv.gz" | head
node1 property node2 id Q13442814 vertex_pagerank 0.02422254325848587 Q13442814-vertex_pagerank-881612 Q1860 vertex_pagerank 0.00842243515354162 Q1860-vertex_pagerank-140 Q5 vertex_pagerank 0.0073505352600377934 Q5-vertex_pagerank-188 Q5633421 vertex_pagerank 0.005898322426631837 Q5633421-vertex_pagerank-101732 Q215024...
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
Oh, the `graph_statistics` command is not using standard column naming: it uses `property` instead of `label`. This will be fixed; for now, let's rename the columns.
!kgtk rename-col -i "$WD/wikidata-pagerank-only-sorted.tsv.gz" --mode NONE --output-columns node1 label node2 id | gzip > $MY/wikidata-pagerank-only-sorted.tsv.gz !gzcat $MY/wikidata-pagerank-only-sorted.tsv.gz | head
node1 label node2 id Q13442814 vertex_pagerank 0.02422254325848587 Q13442814-vertex_pagerank-881612 Q1860 vertex_pagerank 0.00842243515354162 Q1860-vertex_pagerank-140 Q5 vertex_pagerank 0.0073505352600377934 Q5-vertex_pagerank-188 Q5633421 vertex_pagerank 0.005898322426631837 Q5633421-vertex_pagerank-101732 Q21502402 ...
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
Let's add the entity labels as columns so that we can read what is what. To do that, we concatenate the pagerank file with the labels file, and then ask kgtk to lift the labels into new columns.
!time kgtk cat -i "$MY/wikidata_labels.tsv" $MY/pagerank.tsv | gzip > $MY/pagerank-and-labels.tsv.gz !time kgtk lift -i $MY/pagerank-and-labels.tsv.gz | gzip > "$WD/wikidata-pagerank-en.tsv.gz"
real 32m37.811s user 11m5.594s sys 10m30.283s
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk
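Conceptually, `kgtk lift` is a left join keyed on the node id; a pandas sketch with made-up rows mirroring the lifted output shown below in the next cell's style (`node1;label` is kgtk's lifted-column naming):

```python
import pandas as pd

pagerank = pd.DataFrame({
    "node1": ["Q13442814", "Q1860"],
    "label": ["vertex_pagerank", "vertex_pagerank"],
    "node2": [0.0242, 0.0084],
})
labels = pd.DataFrame({
    "node1": ["Q13442814", "Q1860"],
    "node1;label": ["'scholarly article'@en", "'English'@en"],
})

# Left join: attach the human-readable label for each node1
lifted = pagerank.merge(labels, on="node1", how="left")
print(lifted[["node1", "node1;label"]].to_dict("records"))
```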
Now we can look at the labels. Here are the top 20 pagerank items in Wikidata:
!gzcat "$WD/wikidata-pagerank-en.tsv.gz" | head -20
node1 label node2 id node1;label label;label node2;label Q13442814 vertex_pagerank 0.02422254325848587 Q13442814-vertex_pagerank-881612 'scholarly article'@en Q1860 vertex_pagerank 0.00842243515354162 Q1860-vertex_pagerank-140 'English'@en Q5 vertex_pagerank 0.0073505352600377934 Q5-vertex_pagerank-188 'human'@en ...
MIT
examples/Example4 - Wikidata Pagerank.ipynb
robuso/kgtk