Now, let's re-run the training loop using the compiled training step:
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        loss_value = train_step(x_batch_train, y_batch_train)
        # ...
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Much faster, isn't it?

Low-level handling of losses tracked by the model

Layers and models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values is available via the attribute model.losses at the end of the forward pass. If you want to use these loss components, you should sum them and add them to the main loss in your training step. Consider this layer, which creates an activity regularization loss:
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(1e-2 * tf.reduce_sum(inputs))
        return inputs
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Let's build a really simple model that uses it:
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outp...
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Our training step should now look like this:
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
        # Add any extra losses created during the forward pass.
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_weights)
    ...
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Summary

You now know how to use the built-in training loops and how to write your own from scratch. To conclude, here is a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits.

End-to-end example: a GAN training loop from scratch

You may be familiar with Generative Adversarial Networks (GANs). By learning the latent distribution of a training dataset of images (the "latent space" of the images), a GAN can generate new images that look almost real. A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network). The GAN training loop looks like this: 1) Train the discriminator. Sample a batch of random...
discriminator = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.GlobalMaxPooling...
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Next, let's create a generator network that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits):
latent_dim = 128

generator = keras.Sequential(
    [
        keras.Input(shape=(latent_dim,)),
        # We want to generate 128 coefficients to reshape into a 7x7x128 map
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(alpha=0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(128, (4...
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Here's the key part: the training loop. As you can see, it is quite straightforward. The training step function takes only 17 lines.
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)

# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(r...
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Let's train our GAN by repeatedly calling train_step on batches of images. Since our discriminator and generator are convnets, you will want to run this code on a GPU.
import os

# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf....
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
1. Implement the K-means algorithm

In this step you will implement the functions that make up the K-means algorithm, one by one. It is important to read and understand the documentation of each function, especially the expected dimensions of the output data.

1.1 Initialize the centroids

The first step of the algorithm consists of initializ...
def calculate_initial_centers(dataset, k):
    """
    Initializes the starting centroids arbitrarily

    Arguments:
        dataset -- Data set - [m,n]
        k -- Desired number of centroids

    Returns:
        centroids -- List of the computed centroids - [k,n]
    """
    #### CODE ...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
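As a hint for the exercise above (this is a sketch, not the notebook's graded solution), one common way to pick arbitrary initial centroids is to sample k distinct rows of the dataset:

```python
import numpy as np

def calculate_initial_centers(dataset, k):
    # Pick k distinct row indices at random and use those rows as centroids -> [k,n]
    indices = np.random.choice(dataset.shape[0], size=k, replace=False)
    return dataset[indices]

# Tiny usage example
data = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
centroids = calculate_initial_centers(data, 2)
print(centroids.shape)  # (2, 2)
```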
1.2 Define the clusters

In the second step of the algorithm, each data point is assigned to a group according to the computed centroids.

1.2.1 Distance function

Write the Euclidean distance function between two points (a, b), defined by the equation: $$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ \dots + (a_n-b_n)^{2}} $$
import math

def euclidean_distance(a, b):
    """
    Computes the Euclidean distance between points a and b

    Arguments:
        a -- A point in space - [1,n]
        b -- A point in space - [1,n]

    Returns:
        distance -- Euclidean distance between the points
    """
    #### CODE HERE ####
    #s = 0
    ...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
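A possible completion of the distance function above, written directly from the equation (a sketch using NumPy rather than the math module):

```python
import numpy as np

def euclidean_distance(a, b):
    # Square root of the sum of squared coordinate differences
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

print(euclidean_distance([0, 0], [3, 4]))  # 5.0
```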
1.2.2 Find the nearest centroid

Using the distance function you wrote above, complete the function below to find the centroid nearest to an arbitrary point. Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
def nearest_centroid(a, centroids):
    """
    Computes the index of the centroid nearest to point a

    Arguments:
        a -- A point in space - [1,n]
        centroids -- List of centroids - [k,n]

    Returns:
        nearest_index -- Index of the nearest centroid
    """
    #### CODE HERE ####
    ...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
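Following the hint, np.argmin over the list of distances gives the index of the closest centroid; a self-contained sketch (not the graded solution):

```python
import numpy as np

def euclidean_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def nearest_centroid(a, centroids):
    # Index of the centroid with the smallest distance to point a
    distances = [euclidean_distance(a, c) for c in centroids]
    return int(np.argmin(distances))

print(nearest_centroid([0, 0], np.array([[5.0, 5.0], [1.0, 0.0]])))  # 1
```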
1.2.3 Find the nearest centroid for each point in the dataset

Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for every point in the dataset.
def all_nearest_centroids(dataset, centroids):
    """
    Computes the index of the nearest centroid for every point in the dataset

    Arguments:
        dataset -- Data set - [m,n]
        centroids -- List of centroids - [k,n]

    Returns:
        nearest_indexes -- Indices of the nearest centroids...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
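The step above amounts to applying the nearest-centroid lookup to every row of the dataset; a minimal sketch (assuming the helper functions from the previous steps):

```python
import numpy as np

def euclidean_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def nearest_centroid(a, centroids):
    return int(np.argmin([euclidean_distance(a, c) for c in centroids]))

def all_nearest_centroids(dataset, centroids):
    # Apply nearest_centroid to every row -> array of m indices
    return np.array([nearest_centroid(point, centroids) for point in dataset])

data = np.array([[0.0, 0.0], [9.0, 9.0], [1.0, 1.0]])
cents = np.array([[0.0, 0.0], [10.0, 10.0]])
print(all_nearest_centroids(data, cents))  # [0 1 0]
```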
1.3 Evaluation metric

After forming the clusters, how do we know whether the result is any good? For that, we need an evaluation metric. The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the points of a cluster and its centroid. This metric is known...
def inertia(dataset, centroids, nearest_indexes):
    """
    Sum of squared distances from each sample to its nearest cluster center.

    Arguments:
        dataset -- Data set - [m,n]
        centroids -- List of centroids - [k,n]
        nearest_indexes -- Indices of the nearest centroids -...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
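The inertia metric described above can be computed in one vectorized step: subtract from each point its assigned centroid and sum the squared differences (a sketch, not the notebook's official solution):

```python
import numpy as np

def inertia(dataset, centroids, nearest_indexes):
    # Sum of squared distances between each point and its assigned centroid
    diffs = dataset - centroids[nearest_indexes]
    return float(np.sum(diffs ** 2))

# Two points, both assigned to the single centroid at (1, 0)
data = np.array([[0.0, 0.0], [2.0, 0.0]])
cents = np.array([[1.0, 0.0]])
idx = np.array([0, 0])
print(inertia(data, cents, idx))  # 2.0
```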
Check the result of the algorithm below!
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)
print("Inertia = ", kmeans.inertia_)

plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1],
            marker='^', c='red', s=100)
plt.show()
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
2.2 Compare with the Scikit-Learn algorithm

Use scikit-learn's implementation of K-means on the same dataset. Show the inertia value and the clusters produced by the model. You can reuse the structure of the previous code cell. Hint: https://scikit-learn.org/stable/modules/generated/sk...
#### CODE HERE ####
# source: https://stackabuse.com/k-means-clustering-with-scikit-learn/
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

plt.scatter(dataset[:,0], dataset[:,1], label='True Position')
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)
print("Inertia = ", kmeans.inertia_)
#print...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
4. Real dataset

Exercises

1 - Apply the K-means algorithm you developed to the iris dataset [1]. Show the results obtained using at least two cluster evaluation metrics [2].

[1] http://archive.ics.uci.edu/ml/datasets/iris
[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-...
#### CODE HERE ####
from sklearn import metrics

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
data = pd.read_csv(url, header=None)

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
metrics.homogeneity_score(labels_true, labels_pred)
metrics.completeness_score(labels_true...
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the pas...
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis=1)

# Show the new dataset with 'Survived' removed
display(data.head())
display(outcomes.head())
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive...
def predictions_1(data):
    """ Model with one feature:
        - Predict a passenger survived if they are female. """
    predictions = []
    for _, passenger in data.iterrows():
        # Remove the 'pass' statement below
        # and write your prediction conditions here
        if passenger['...
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
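One way the condition in predictions_1 might be completed, per the rule stated above (female -> survived, otherwise not); a hedged sketch on a tiny made-up DataFrame, not the notebook's graded solution:

```python
import pandas as pd

def predictions_1(data):
    """Model with one feature: predict survival (1) if the passenger is female."""
    predictions = []
    for _, passenger in data.iterrows():
        predictions.append(1 if passenger['Sex'] == 'female' else 0)
    return pd.Series(predictions)

sample = pd.DataFrame({'Sex': ['female', 'male', 'female']})
print(list(predictions_1(sample)))  # [1, 0, 1]
```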
Answer: Predictions have an accuracy of 78.68% under the assumption that all female passengers survived and the remaining passengers did not. Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can ...
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger t...
def predictions_2(data):
    """ Model with two features:
        - Predict a passenger survived if they are female.
        - Predict a passenger survived if they are male and younger than 10. """
    predictions = []
    for _, passenger in data.iterrows():
        # Remove the 'pass' statement...
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
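The two-feature rule above (female, or male younger than 10, -> survived) could be completed along these lines; a sketch on a small made-up DataFrame, not the graded solution:

```python
import pandas as pd

def predictions_2(data):
    """Predict survival if female, or if male and younger than 10."""
    predictions = []
    for _, passenger in data.iterrows():
        if passenger['Sex'] == 'female' or passenger['Age'] < 10:
            predictions.append(1)
        else:
            predictions.append(0)
    return pd.Series(predictions)

sample = pd.DataFrame({'Sex': ['male', 'male', 'female'], 'Age': [8, 30, 25]})
print(list(predictions_2(sample)))  # [1, 0, 1]
```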
Answer: Predictions have an accuracy of 79.35% under the assumption that all female passengers and all male passengers younger than 10 survived. Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: F...
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Pclass == 3"])
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using th...
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
    predictions = []
    for _, passenger in data.iterrows():
        # Remove the 'pass' statement below
        # and write your prediction conditions here
        if passenger['Sex']...
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
You can define such a function at any point in your notebook. Note that you need to use self inside your function definition rather than the word browser.
def login(self, data):
    self.get_url(data['url'])
    if self.is_available('name=credential_0', 1):
        self.kb_type('name=credential_0', data['username'])
        self.kb_type('name=credential_1', data['password'])
        self.submit_btn('Login')
    assert self.is_available("Logout")
    return self.get_eleme...
notebooks/howto_dynamically_add_functions_to_browser.ipynb
ldiary/marigoso
mit
Once defined, you can call the register_function method of test object to attach the function to the browser object.
test.register_function("browser", [login])
notebooks/howto_dynamically_add_functions_to_browser.ipynb
ldiary/marigoso
mit
You can then confirm that the login is now a bound method of browser and can be used right away just like any other methods bound to browser.
browser.login
notebooks/howto_dynamically_add_functions_to_browser.ipynb
ldiary/marigoso
mit
You can re-execute the same cell over and over as many times as you want. Simply put your cursor in the cell again, edit at will, and type Shift-Enter to execute. IPython can execute shell commands, which should be prefixed with !. For example, in the next cell, try issuing several system commands in-place with Ctrl-...
!ls
!ls -la
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
In a cell, you can type anything from a single python expression to an arbitrarily long amount of code (although for reasons of readability, you are recommended to limit this to a few dozen lines):
def f(x):
    """My function
    x : parameter"""
    return x + 1

print("f(3) = ", f(3))
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
User interface When you start a new notebook server with ipython notebook, your browser should open into the Dashboard, a page listing all notebooks available in the current directory as well as letting you create new notebooks. In this page, you can also drag and drop existing .py files over the file list to import t...
dict??
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
Tab completion and tooltips The notebook uses the same underlying machinery for tab completion that IPython uses at the terminal, but displays the information differently. When you complete with the Tab key, IPython shows a drop-down list with all available completions. If you type more characters while this list is open,...
# Position your cursor after the ( and hit the Tab key:
list()
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
Display of complex objects As the 'tour' notebook shows, the IPython notebook has fairly sophisticated display capabilities. In addition to the examples there, you can study the display_protocol notebook in this same examples folder, to learn how to customize arbitrary objects (in your own code or external libraries)...
%matplotlib inline
# an alternative is the following code:
# %pylab inline
# which also performs many imports from the numpy and matplotlib libraries and is very handy,
# though one should remember that global imports are evil
import matplotlib.pyplot as plt

plt.plot([1, 2, 4, 8, 16])
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
Other handy features IPython allows you to upload files to the server and modify any text files on the server. Revisit the Dashboard to check these features. Also, from the Dashboard you can start shell sessions, which is useful when some server-side configuration is needed. However, running IPython locally is a very popular opti...
%quickref
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
(Optional) Experimenting with Feature Extraction This exercise is meant to give you an opportunity to explore the sliding window computations and how their parameters affect feature extraction. There aren't any right or wrong answers -- it's just a chance to experiment! We've provided you with some images and kernels y...
from learntools.computer_vision.visiontools import edge, blur, bottom_sobel, emboss, sharpen, circle

image_dir = '../input/computer-vision-resources/'
circle_64 = tf.expand_dims(circle([64, 64], val=1.0, r_shrink=4), axis=-1)
kaggle_k = visiontools.read_image(image_dir + str('k.jpg'), channels=1)
car = visiontools.rea...
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
To choose one to experiment with, just enter its name in the appropriate place below. Then, set the parameters for the window computation. Try out some different combinations and see what they do!
# YOUR CODE HERE: choose an image
image = circle_64

# YOUR CODE HERE: choose a kernel
kernel = bottom_sobel

visiontools.show_extraction(
    image, kernel,
    # YOUR CODE HERE: set parameters
    conv_stride=1,
    conv_padding='valid',
    pool_size=2,
    pool_stride=2,
    pool_padding='same',
    subplot_s...
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
The Receptive Field Trace back all the connections from some neuron and eventually you reach the input image. All of the input pixels a neuron is connected to is that neuron's receptive field. The receptive field just tells you which parts of the input image a neuron receives information from. As we've seen, if your fi...
# View the solution (Run this code cell to receive credit!)
q_1.check()

# Lines below will give you a hint
#_COMMENT_IF(PROD)_
q_1.hint()
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
So why stack layers like this? Three (3, 3) kernels have 27 parameters, while one (7, 7) kernel has 49, though they both create the same receptive field. This stacking-layers trick is one of the ways convnets are able to create large receptive fields without increasing the number of parameters too much. You'll see how ...
import pandas as pd

# Load the time series as a Pandas dataframe
machinelearning = pd.read_csv(
    '../input/computer-vision-resources/machinelearning.csv',
    parse_dates=['Week'],
    index_col='Week',
)

machinelearning.plot();
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
What about the kernels? Images are two-dimensional and so our kernels were 2D arrays. A time-series is one-dimensional, so what should the kernel be? A 1D array! Here are some kernels sometimes used on time-series data:
detrend = tf.constant([-1, 1], dtype=tf.float32)

average = tf.constant([0.2, 0.2, 0.2, 0.2, 0.2], dtype=tf.float32)

# Spencer's 15-point moving average (symmetric weights summing to 320)
spencer = tf.constant([-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3], dtype=tf.float32) / 320
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
Convolution on a sequence works just like convolution on an image. The difference is just that a sliding window on a sequence only has one direction to travel -- left to right -- instead of the two directions on an image. And just like before, the features picked out depend on the pattern of numbers in the kernel. Can ...
# UNCOMMENT ONE
kernel = detrend
# kernel = average
# kernel = spencer

# Reformat for TensorFlow
ts_data = machinelearning.to_numpy()
ts_data = tf.expand_dims(ts_data, axis=0)
ts_data = tf.cast(ts_data, dtype=tf.float32)
kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1))

ts_filter = tf.nn.conv1d(
    input=ts_dat...
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
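The sliding-window idea above can be illustrated without TensorFlow. Note that conv layers actually compute cross-correlation, while np.convolve flips the kernel internally, so we flip it back first; correlating a series with the detrend kernel [-1, 1] then amounts to taking successive differences (a sketch, assuming a made-up series):

```python
import numpy as np

def conv1d_valid(series, kernel):
    # Cross-correlation (what conv layers compute): flip the kernel before
    # calling np.convolve, since np.convolve flips it internally.
    return np.convolve(series, kernel[::-1], mode='valid')

series = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
detrend = np.array([-1.0, 1.0])
print(conv1d_valid(series, detrend))  # successive differences: [1. 2. 3. 4.]
```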
We followed the simulation mentioned in the paper for model selection in linear regression. There are two 150 $\times$ 21 design matrices used as input data. The first set was generated independently with $\rho = 0$ from $N$(0, 1) and the second set was autocorrelated, AR(1) with $\rho$ = 0.7. In addition to the 21 ori...
url1 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/x.quad.0.txt'
url2 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/x.quad.70.txt'
url3 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h0_0.rs35.txt'
url4 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h1_0.rs35.txt'
url5 = 'http://www4.stat.ncsu.edu/~boos/v...
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Fast FSR
res_fsr1 = simulation(x1_matrix, [y1_h0, y1_h1, y1_h2, y1_h3, y1_h4], fastfsr.fsr_fast)
res_fsr1

res_fsr2 = simulation(x2_matrix, [y2_h0, y2_h1, y2_h2, y2_h3, y2_h4], fastfsr.fsr_fast)
res_fsr2
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
In addition to Fast FSR, we also ran BIC and LASSO on the data sets and compared the results of the three models. In the original paper, the authors used the R package leaps for best subset selection. However, no corresponding package or function exists in Python, so we also implemented a regression...
res_bic1 = simulation(x1_matrix, [y1_h0, y1_h1, y1_h2, y1_h3, y1_h4], fastfsr.bic_sim)
res_bic1

res_bic2 = simulation(x2_matrix, [y2_h0, y2_h1, y2_h2, y2_h3, y2_h4], fastfsr.bic_sim)
res_bic2
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
LASSO
res_lasso1 = simulation(x1_matrix, [y1_h0, y1_h1, y1_h2, y1_h3, y1_h4], fastfsr.lasso_fit)
res_lasso1

res_lasso2 = simulation(x2_matrix, [y2_h0, y2_h1, y2_h2, y2_h3, y2_h4], fastfsr.lasso_fit)
res_lasso2
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Model Comparison False Selection Rate
xlabel = [1, 2, 3, 4, 5]
xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4']

fig, ax = plt.subplots(figsize=(6, 5))
fig.autofmt_xdate()
plt.plot(xlabel, res_bic1['fsr_mr'], marker='o', markersize=4, color='blue', linestyle='solid', label='BIC')
plt.plot(xlabel, res_fsr1['fsr_mr'], marker='o...
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Correct Selection Rate
xlabel = [1, 2, 3, 4, 5]
xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4']

fig, ax = plt.subplots(figsize=(6, 5))
fig.autofmt_xdate()
plt.plot(xlabel, res_bic1['csr'], marker='o', markersize=4, color='blue', linestyle='solid', label='BIC')
plt.plot(xlabel, res_fsr1['csr'], marker='o', mar...
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Average Model Size
xlabel = [1, 2, 3, 4, 5]
xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4']

fig, ax = plt.subplots(figsize=(6, 5))
fig.autofmt_xdate()
plt.plot(xlabel, res_bic1['size'], marker='o', markersize=4, color='blue', linestyle='solid', label='BIC')
plt.plot(xlabel, res_fsr1['size'], marker='o', m...
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
End-to-end example with the BigQuery TensorFlow reader <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/bigquery"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.co...
try:
    # Use the Colab's preinstalled TensorFlow 2.x
    %tensorflow_version 2.x
except:
    pass

!pip install fastavro
!pip install tensorflow-io==0.9.0
!pip install google-cloud-bigquery-storage
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Authenticate.
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Set the project ID.
PROJECT_ID = "<YOUR PROJECT>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
%env GCLOUD_PROJECT=$PROJECT_ID
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Import the Python libraries and define constants.
from __future__ import absolute_import, division, print_function, unicode_literals

import os
from six.moves import urllib
import tempfile

import numpy as np
import pandas as pd
import tensorflow as tf

from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError

LOCATION = 'us'

# Storage ...
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Importing census data into BigQuery

Define helper methods to load the data into BigQuery.
def create_bigquery_dataset_if_necessary(dataset_id):
    # Construct a full Dataset object to send to the API.
    client = bigquery.Client(project=PROJECT_ID)
    dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id))
    dataset.location = LOCATION
    try:
        dataset = client.create_dataset(d...
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Load the census data into BigQuery.
load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID)
load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Verify the imported data. TODO: Replace <YOUR PROJECT> with your PROJECT_ID. Note: --use_bqstorage_api fetches the data using the BigQuery Storage API; check that you have permission to use it and that it is enabled for your project (https://cloud.google.com/bigquery/docs/reference/storage/#enabling_the_api).
%%bigquery --use_bqstorage_api
SELECT * FROM `<YOUR PROJECT>.census_dataset.census_training_table` LIMIT 5
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Loading census data into a TensorFlow DataSet using the BigQuery reader

Read the census data from BigQuery and convert it into a TensorFlow DataSet.
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession

def transofrom_row(row_dict):
    # Trim all string tensors
    trimmed_dict = {
        column: (tf.strings.strip(...
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Defining feature columns
def get_categorical_feature_values(column):
    query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
    client = bigquery.Client(project=PROJECT_ID)
    dataset_ref = client.dataset(DATASET_ID)
    job_config = bigquery.QueryJobConfig()
    query_job = client.query(query, ...
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Building and training the model

Build the model.
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
    [
        feature_layer,
        Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'),
        Dense(75, activation=tf.nn.relu),
        Dense(50, activation=tf.nn.relu),
        Dense(25, activation=tf.nn.relu),
        Dense(1, activation=tf.nn.sigmoid)
    ])
...
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Train the model.
model.fit(training_ds, epochs=5)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluating the model

Evaluate the model.
loss, accuracy = model.evaluate(eval_ds)
print("Accuracy", accuracy)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluate a few random samples.
sample_x = {
    'age': np.array([56, 36]),
    'workclass': np.array(['Local-gov', 'Private']),
    'education': np.array(['Bachelors', 'Bachelors']),
    'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']),
    'occupation': np.array(['Tech-support', 'Other-service']),
    'relationship': n...
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Review Before we start playing with the actual implementations let us review a couple of things about RL. Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning differs from standard supervised ...
%psource PassiveTDAgent
rl.ipynb
grantvk/aima-python
mit
The Agent Program can be obtained by creating an instance of the class with the appropriate parameters. Because of the call method, the object that is created behaves like a callable and returns an appropriate action, as most Agent Programs do. To instantiate the object we need a policy (pi) and an MDP whose utili...
from mdp import sequential_decision_environment
sequential_decision_environment
rl.ipynb
grantvk/aima-python
mit
Active Reinforcement Learning Unlike Passive Reinforcement Learning in Active Reinforcement Learning we are not bound by a policy pi and we need to select our actions. In other words the agent needs to learn an optimal policy. The fundamental tradeoff the agent needs to face is that of exploration vs. exploitation. QL...
%psource QLearningAgent
rl.ipynb
grantvk/aima-python
mit
1. GIS data selection First, run the cell below to browse to the directory your input CSV file is located in and select the input file. A sample file is located at .\gep-onsset\test_data.
import tkinter as tk
from tkinter import filedialog, messagebox
from openpyxl import load_workbook

root = tk.Tk()
root.withdraw()
root.attributes("-topmost", True)
messagebox.showinfo('OnSSET', 'Open the input file with extracted GIS data')
input_file = filedialog.askopenfilename()
onsseter = SettlementProcessor(input...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
2. Modelling period and target electrification rate Next, define the modelling period and the electrification rate to be achieved by the end of the analysis. Further down you will also define an intermediate year and target (in the Levers section).
start_year = 2018
end_year = 2030
electrification_rate_target = 1  # E.g. 1 for 100% electrification rate or 0.80 for 80% electrification rate
Generator.ipynb
KTH-dESA/PyOnSSET
mit
3. Levers Next, define the values of the levers. These are the 6 levers that are available on the GEP Explorer. Contrary to the GEP Explorer, where each lever has two or three pre-defined values, here they can take any value. Lever 1: Population growth For the first lever, enter the expected population in the coun...
end_year_pop = 26858618
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Lever 2: Electricity demand target For the second lever, enter the target tier (level of electricity access) for urban and rural households respectively. This can take a value between 1 (lowest level of electricity access) and 5 (highest level of electricity access), as in ESMAP's Multi-Tier Framework for Measuring Elect...
urban_target_tier = 5
rural_target_tier = 3
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Lever 3: Intermediate electrification rate target For the third lever, enter the intermediate target year and target electrification rate for that year.
intermediate_year = 2025
intermediate_electrification_target = 0.63  # E.g. for a target electrification rate of 75%, enter 0.75
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Lever 4: Grid generating cost of electricity This lever examines different average costs of generating electricity by the power-plants connected to the national grid. This cost is one of the factors that affect the LCoE of connecting to the grid (together with extension of the grid lines etc.), and may affect the split...
grid_generation_cost = 0.046622 ### This is the grid cost electricity USD/kWh as expected in the end year of the analysis
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Lever 5: PV system cost adjustment This lever reflects the role of PV system costs on electrification results. All PV based systems will be adjusted by a factor to simulate a higher or lower cost of PV systems (compared to the baseline values entered below). A value lower than 1 means lower investment costs for PV syst...
pv_adjustment_factor = 1
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Lever 6: Prioritization algorithm This lever reflects the prioritization approach in order to achieve the electrification rate specified (in Lever 3) in intermediate target year of the analysis. There are currently two options available: Baseline: Prioritizes grid densification first (ramp up in already electrified clu...
prioritization = 1  # Select 1, 2 or 3. 1 = baseline, 2 = intensification, 3 = easily accessible settlements
auto_intensification = 2  # Buffer distance (km) for automatic intensification if choosing prioritization 1
annual_new_grid_connections_limit = 109  # This is the maximum amoun...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
4. Enter country specific data In addition to the levers above, the user can customize a large number of variables describing the social - economic - technological environment in the selected country. Note! Most input values shall represent future estimates for the variable, i.e. they describe future and NOT current values. a...
pop_start_year = 18620000  ### Write the population in the base year (e.g. 2018)
urban_ratio_start_year = 0.17  ### Write the urban population ratio in the base year (e.g. 2018)
urban_ratio_end_year = 0.20  ### Write the urban population ratio in the end year (e.g. 2030)
num_people_per...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
b. Technology specifications & costs

The cell below contains all the information that is used to calculate the levelized costs for all the technologies, including grid. These default values should be updated to reflect the most accurate values in the country. There are currently 7 potential technologies to include in t...
diesel_techs = 0  ### 0 = diesel NOT included, 1 = diesel included
grid_power_plants_capital_cost = 2000  ### The cost in USD/kW for capacity upgrades of the grid
grid_losses = 0.1  ### The fraction of electricity lost in transmission and distribution (e.g. 0.1 = 10%)
base_to_pea...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
The cells below contain additional technology specifications
coordinate_units = 1000  # 1000 if coordinates are in m, 1 if coordinates are in km
discount_rate = 0.08  # E.g. 0.08 means a discount rate of 8%

# Transmission and distribution costs
hv_line_capacity = 69  # kV
hv_line_cost = 53000  # USD/km
mv_line_cost = 7000  # USD/kW
mv_line_capacity = 50  # kV
mv_line_max_length = 50  # km
mv...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
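The capital and operating cost inputs above are combined with the discount rate into a levelized cost of electricity (LCoE) for each technology. As a rough, hypothetical sketch of that calculation (all cost and generation numbers below are illustrative assumptions, not OnSSET defaults), the LCoE is the ratio of discounted lifetime costs to discounted lifetime generation:

```python
# Hypothetical LCoE sketch: LCoE = sum(cost_t/(1+r)^t) / sum(energy_t/(1+r)^t)
# All numbers below are illustrative assumptions, not OnSSET defaults.
discount_rate = 0.08    # matches the value entered above
lifetime = 20           # years (assumed)
capital_cost = 2500.0   # USD, paid upfront in year 0 (assumed)
annual_om_cost = 50.0   # USD/year operation & maintenance (assumed)
annual_energy = 1500.0  # kWh/year generated (assumed)

discounted_costs = capital_cost + sum(
    annual_om_cost / (1 + discount_rate) ** t for t in range(1, lifetime + 1))
discounted_energy = sum(
    annual_energy / (1 + discount_rate) ** t for t in range(1, lifetime + 1))

lcoe = discounted_costs / discounted_energy  # USD/kWh
```

A higher discount rate shrinks the value of future generation faster than it shrinks future costs of a capital-heavy system, which is why the same inputs can favor different technologies under different rates.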
5. GIS data import and processing

OnSSET is a GIS-based tool and its proper function depends heavily on the diligent preparation and calibration of the necessary geospatial data. Documentation on GIS processing with regard to OnSSET can be found <a href="http://onsset-manual.readthedocs.io/en/latest/data_acquisition.htm...
yearsofanalysis = [intermediate_year, end_year]

onsseter.condition_df()
onsseter.grid_penalties()
onsseter.calc_wind_cfs()
onsseter.calibrate_pop_and_urban(pop_start_year, end_year_pop, end_year_pop, urban_ratio_start_year,
                                urban_ratio_end_year, start_year, end_year, intermediate_year...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Calibration of currently electrified settlements

The model calibrates which settlements are likely to be electrified in the start year, to match the national statistical values defined above. A settlement is considered to be electrified if it meets all of the following conditions:

- Has more night-time lights than the ...
min_night_lights = 0  ### 0 indicates no night light, while any number above refers to the night-lights intensity
min_pop = 0  ### Settlement population above which we can assume that it could be electrified
max_service_transformer_distance = 2  ### Distance in km from the existing grid network below which we...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
The figure below shows the results of the calibration. Settlements in blue are considered to be (at least partly) electrified already in the start year of the analysis, while settlements in yellow are yet to be electrified. Re-running the calibration step with different initial values may change the map below.
from matplotlib import pyplot as plt

colors = ['#73B2FF', '#EDD100', '#EDA800', '#1F6600', '#98E600', '#70A800', '#1FA800']
plt.figure(figsize=(9, 9))
plt.plot(onsseter.df.loc[onsseter.df[SET_ELEC_CURRENT] == 0, SET_X_DEG],
         onsseter.df.loc[onsseter.df[SET_ELEC_CURRENT] == 0, SET_Y_DEG], 'y,')
plt.plot(onsseter.df.loc[onsseter.df[...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
In some cases it can be of interest to filter out clusters with very low populations, e.g. to increase computational speed or to remove false positives in the data. Setting the pop_threshold variable below to a value larger than 0 will filter out all settlements below that threshold from the analysis.
pop_threshold = 0  # If you wish to remove low density population cells, enter a threshold above 0
onsseter.df = onsseter.df.loc[onsseter.df[SET_POP] > pop_threshold]
Generator.ipynb
KTH-dESA/PyOnSSET
mit
6. Define the demand

This piece of code defines the target electricity demand in the region/country. Residential electricity demand is defined as kWh/household/year, while all other demands are defined as kWh/capita/year. Note that at the moment, all productive uses demands are set to 0 by default.
# Define the annual household electricity targets to choose from
tier_1 = 38.7  # 38.7 refers to kWh/household/year.
tier_2 = 219
tier_3 = 803
tier_4 = 2117
tier_5 = 2993

onsseter.prepare_wtf_tier_columns(num_people_per_hh_rural, num_people_per_hh_urban,
                                  tier_1, tier_2, tier_3, tier_4, tier_5)
onsseter.df[SET_EDU_DE...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
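To see how a household tier translates into a settlement-level demand, consider the minimal sketch below. The population and household size are made-up illustration values, and the actual conversion is handled internally by the onsseter methods; this only shows the arithmetic behind the kWh/household/year unit.

```python
# Hypothetical sketch: converting a household tier target (kWh/household/year)
# into a settlement-level annual demand. Population and household size are
# made-up illustration values.
tier_3 = 803                  # kWh/household/year, as defined above
num_people_per_hh_rural = 5   # assumed household size
settlement_population = 1200  # assumed settlement population

households = settlement_population / num_people_per_hh_rural
annual_demand_kwh = households * tier_3
```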
7. Start a scenario run, which calculates and compares technology costs for every settlement in the country

Based on the previous calculation, this piece of code identifies the LCoE that every off-grid technology can provide for each single populated settlement of the selected country. The cell then takes all the current...
onsseter.current_mv_line_dist()

for year in yearsofanalysis:
    end_year_pop = 1
    eleclimit = eleclimits[year]
    time_step = time_steps[year]

    grid_cap_gen_limit = time_step * annual_grid_cap_gen_limit * 1000
    grid_connect_limit = time_step * annual_new_grid_connections_limit * 1000

    onsse...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
8. Results, Summaries and Visualization

With all the calculations and grid-extensions complete, this block gets the final results on which technology was chosen for each point, how much capacity needs to be installed and what it will cost. Then the summaries, plots and maps are generated.
elements = []
for year in yearsofanalysis:
    elements.append("Population{}".format(year))
    elements.append("NewConnections{}".format(year))
    elements.append("Capacity{}".format(year))
    elements.append("Investment{}".format(year))

techs = ["Grid", "SA_Diesel", "SA_PV", "MG_Diesel", "MG_PV", "MG_Wind", "MG_Hy...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
9. Exporting results

This code generates three csv files:
- one containing all the results for the scenario created
- one containing the summary for the scenario created
- one containing some of the key input variables of the scenario

Before we proceed, please write the scenario_name in the first cell below. Then mo...
scenario_name = "scenario_5"

list1 = [('Start_year', start_year, '', '', ''),
         ('End_year', end_year, '', '', ''),
         ('End year electrification rate target', electrification_rate_target, '', '', ''),
         ('Intermediate target year', intermediate_year, '', '', ''),
         ('Intermediate electrification rate tar...
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Multi-task recommenders <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/multitask"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.r...
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets

import os
import pprint
import tempfile

from typing import Dict, Text

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

import tensorflow_recommenders as tfrs
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
A multi-task model

There are two critical parts to multi-task recommenders:

1. They optimize for two or more objectives, and so have two or more losses.
2. They share variables between the tasks, allowing for transfer learning.

In this tutorial, we will define our models as before, but instead of having a single task, we ...
class MovielensModel(tfrs.models.Model):

    def __init__(self, rating_weight: float, retrieval_weight: float) -> None:
        # We take the loss weights in the constructor: this allows us to instantiate
        # several model objects with different loss weights.
        super().__init__()

        embedding_dimension = 32

        # Us...
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
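The weighting logic at the heart of the model can be sketched without TensorFlow: the two task losses are combined as a weighted sum, so zeroing one weight recovers a single-task model. This is a minimal stand-in on plain floats, not the actual compute_loss implementation, and the loss values are made up.

```python
# Minimal stand-in for the weighted loss combination; the real model does this
# inside compute_loss on tensors rather than plain floats.
def combined_loss(rating_loss, retrieval_loss, rating_weight, retrieval_weight):
    # Zeroing one weight recovers the corresponding single-task model.
    return rating_weight * rating_loss + retrieval_weight * retrieval_loss

rating_only = combined_loss(0.8, 5.0, rating_weight=1.0, retrieval_weight=0.0)
joint = combined_loss(0.8, 5.0, rating_weight=1.0, retrieval_weight=1.0)
```

Because the weights enter only through this sum, sweeping them trades off gradient signal between the two heads while the shared embeddings are updated by both.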
Rating-specialized model

Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings.
model = MovielensModel(rating_weight=1.0, retrieval_weight=0.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()

model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)

print(f"...
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its top-100 accuracy is almost 4 times worse than that of a model trained solely to predict watches.

Retrieval-specialized model

Let's now try a model that focuses on retrieval only.
model = MovielensModel(rating_weight=0.0, retrieval_weight=1.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)

print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
prin...
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings.

Joint model

Let's now train a model that assigns positive weights to both tasks.
model = MovielensModel(rating_weight=1.0, retrieval_weight=1.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)

print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
prin...
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
The result is a model that performs roughly as well on both tasks as each specialized model.

Making predictions

We can use the trained multitask model to get trained user and movie embeddings, as well as the predicted rating:
trained_movie_embeddings, trained_user_embeddings, predicted_rating = model({
    "user_id": np.array(["42"]),
    "movie_title": np.array(["Dances with Wolves (1990)"])
})

print("Predicted rating:")
print(predicted_rating)
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
from urllib.request import urlretrieve
from os.path import isfile, isdir
import zipfile

dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'

if not isfile(dataset_filename):
    urlretrieve('http://mattmahoney.net/dc/text8.zip', dataset_filename)

if not isdir(da...
embeddings/Skip-Gram_word2vec.ipynb
postBG/DL_project
mit
Subsampling

Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ i...
from collections import Counter

import numpy as np

word_counts = Counter(int_words)
word_counts.most_common(3)

threshold = 1e-5
total_counts = len(int_words)
frequencies = {word: count / total_counts for word, count in word_counts.items()}
# Iterate over unique words rather than the full corpus
drop_prob = {word: 1 - np.sqrt(threshold / frequencies[word]) for word in word_counts}
drop_prob[...
embeddings/Skip-Gram_word2vec.ipynb
postBG/DL_project
mit
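The subsampling formula can be checked on a toy corpus. The sketch below uses made-up words and a deliberately large threshold (the notebook uses 1e-5 on the full text8 corpus, which is far larger): frequent words get a high drop probability, rare words a low or even negative one, meaning they are never dropped.

```python
from collections import Counter

import numpy as np

# Toy corpus: "the" is very frequent, "sat" is rare. The threshold is chosen
# far larger than the notebook's 1e-5 because this corpus is tiny.
words = ["the"] * 90 + ["cat"] * 9 + ["sat"]
threshold = 0.05

counts = Counter(words)
total = len(words)
freqs = {w: c / total for w, c in counts.items()}
# P(drop w) = 1 - sqrt(t / f(w)): frequent words are dropped more often.
p_drop = {w: 1 - np.sqrt(threshold / freqs[w]) for w in counts}
```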
Negative sampling

For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from th...
# Number of negative labels to sample
n_sampled = 100

with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1))  # create softmax weight matrix here
    softmax_b = tf.Variable(tf.zeros([n_vocab]))  # create softmax biases here

    # Calculate the loss using ...
embeddings/Skip-Gram_word2vec.ipynb
postBG/DL_project
mit
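The sampling idea itself can be sketched without TensorFlow: for each training example, only the one true output word plus a small random set of "negative" word indices get updated, instead of the whole vocabulary. This is a conceptual sketch with made-up sizes, not the sampled-softmax loss implementation itself.

```python
import numpy as np

# Conceptual sketch of negative sampling: per example, update only the one
# true output word plus a small random sample of "negative" words.
rng = np.random.default_rng(0)
n_vocab = 10_000   # vocabulary size (assumed for illustration)
n_sampled = 100    # number of negatives, as in the cell above
target = 42        # index of the true output word (made up)

negatives = rng.choice(n_vocab, size=n_sampled, replace=False)
negatives = negatives[negatives != target]  # exclude the true word if drawn
```

With 100 negatives instead of 10,000 output weights per example, each step touches roughly 1% of the output layer, which is where the speedup comes from.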
Structuring Code

Don't repeat yourself (and others) to keep your programs small.

Structure code into functions and files. You can import functions and data from files (so-called modules):

```python
# File module/example.py
x = 'Example data'

def example_function():
    pass

if __name__ == '__main__':
    whatever()

# File in ...
```
mass = -1
assert mass > 0, 'Mass cannot be negative!'
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Make sure your code does what it is supposed to do by testing it with simple examples where you know what to expect.

Save those tests (e.g. in the `if __name__ == '__main__'` block of each module) to keep your code correct while changing parts.

⚠ Automate testing

```python
import unittest

def square(x):
    return x*x

class Test...
```
%timeit [i**2 for i in range(100000)]
%timeit np.arange(100000)**2
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
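The two %timeit lines above compare a Python list comprehension with a vectorized NumPy expression. Before trusting any speed comparison it is worth confirming both versions compute identical values, as in this small sketch:

```python
import numpy as np

# Same computation as the two %timeit lines above; the vectorized
# expression should produce exactly the same values as the loop.
n = 100_000
looped = np.array([i ** 2 for i in range(n)])
vectorized = np.arange(n) ** 2
```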
Real-life Example

We tried to speed up the simulation of an N-body problem describing cells interacting in pairs via the spherical potential $U(r) = -(r - r_{min})(r - r_{max})^2$, where $r = |\vec x_j - \vec x_i|$. The resulting forces can be calculated as

```python
def forces(t, X, N):
    """Calculate forces from neigh...
```
import pstats

p = pstats.Stats('nbodystats')
p.strip_dirs().sort_stats('cumulative').print_stats(10);
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Vectorize, 7x

While the initial function was already using numpy and vectors, it still involves a for-loop that can be vectorized ... well, actually tensorized:

```python
def forces(t, X, N):
    r = X[N] - np.tile(X, (k, 1, 1)).transpose(1, 0, 2)
    norm_r = np.minimum(np.linalg.norm(r, axis=2), r_max)
    norm_F = 2*(r_...
```
p = pstats.Stats('theanostats')
p.strip_dirs().sort_stats('cumulative').print_stats(10);
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
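The 'nbodystats' and 'theanostats' files read above were produced beforehand. As a hypothetical sketch, here is one way such a stats object can be created and inspected in-process with cProfile and pstats; the profiled function is made up for illustration.

```python
import cProfile
import io
import pstats

# Hypothetical example of producing and reading profiling stats; the profiled
# function below is made up, unlike the precomputed 'nbodystats' file above.
def work():
    return sum(i * i for i in range(10_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.strip_dirs().sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Writing the report to a string (or dumping it to a file with `dump_stats`) lets you inspect the hottest functions the same way the cells above do.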
Let's grab some text

To start with, we need some text from which we'll try to extract named entities using various methods and libraries. There are several ways of doing this, e.g.:

1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable
2. load a text from one of t...
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-eng2:1"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
With this information, we can query a CTS API and get some information about this text. For example, we can "discover" its canonical text structure, essential information for being able to cite this text.
# We set up a resolver which communicates with an API available in Leipzig
resolver = HttpCTSResolver(CTS("http://cts.dh.uni-leipzig.de/api/cts/"))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
But we can also query the same API and get back the text of a specific text section, for example the entire book 1. To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
# We require some metadata information
textMetadata = resolver.getMetadata("urn:cts:latinLit:phi0448.phi001.perseus-eng2")

# Texts in CTS Metadata have one interesting property: their citation scheme.
# Citations are embedded objects that carry information about how a text can be quoted and what depth it has
print([citati...
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
The text that we have just fetched by using a programming interface (API) can also be viewed in the browser, or even imported as an iframe into this notebook!
from IPython.display import IFrame

IFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-eng2/1', width=1000, height=350)
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
NER with CLTK (= Classical Language ToolKit)

The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation). The current implementation (as of version 0.1.47) uses a lookup-based method. For each token in a text, the tagger checks whether that token ...
%%time
tagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
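The lookup-based approach described above can be sketched in a few lines: each token is checked against a set of known proper names. The entity list below is invented for illustration and is not CLTK's actual name data; the output mirrors the (token,) / (token, 'Entity') tuple shape that the tagger produces.

```python
# Toy lookup-based tagger; the entity list is invented for illustration and is
# not CLTK's actual name data.
known_entities = {"Caesar", "Gallia", "Helvetii", "Rhenus"}

def tag_ner_lookup(tokens):
    # Tokens found in the lookup list get an "Entity" tag appended.
    return [(tok, "Entity") if tok in known_entities else (tok,)
            for tok in tokens]

tagged = tag_ner_lookup("Gallia est omnis divisa".split())
```

A pure lookup is fast but brittle: inflected forms ("Galliam", "Caesaris") miss unless every surface form is in the list, which is one reason statistical taggers like Stanford NER (used below) can do better.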
NER with NLTK (= Natural Language ToolKit)
stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
ner_tagger = StanfordNERTagger(stanford_model_english)
tagged_text_nltk = ner_tagger.tag(de_bello_gallico_book1.split(" "))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Let's have a look at the output
tagged_text_nltk[:20] # Wrap up
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0