Now, let's re-run our training loop using the compiled training step:
```python
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        loss_value = train_step(x_batch_train, y_batch_train)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * batch_size))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        test_step(x_batch_val, y_batch_val)

    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    print("Time taken: %.2fs" % (time.time() - start_time))
```
site/zh-cn/guide/keras/writing_a_training_loop_from_scratch.ipynb
tensorflow/docs-l10n
apache-2.0
Much faster, isn't it?

Low-level handling of losses tracked by the model

Layers and models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values is available via the property model.losses at the end of the forward pass. If you want to use these loss components, you should sum them and add them to the main loss in your training step.

Consider this layer, which creates an activity regularization loss:
```python
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(1e-2 * tf.reduce_sum(inputs))
        return inputs
```
Let's build a really simple model that uses it:
```python
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)

model = keras.Model(inputs=inputs, outputs=outputs)
```
Here's what our training step should look like now:
```python
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
        # Add any extra losses created during the forward pass.
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value
```
Summary

Now you know how to use built-in training loops and how to write your own from scratch.

To wrap up, here is a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits.

End-to-end example: a GAN training loop from scratch

You may be familiar with Generative Adversarial Networks (GANs). By learning the latent distribution of a training dataset of images (the "latent space" of the images), a GAN can generate new images that look almost real.

A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network).

A GAN training loop looks like this:

1) Train the discriminator.
- Sample a batch of random points in the latent space.
- Turn the points into fake images via the "generator" model.
- Get a batch of real images and combine them with the generated images.
- Train the "discriminator" model to classify generated vs. real images.

2) Train the generator.
- Sample random points in the latent space.
- Turn the points into fake images via the "generator" network.
- Train the "generator" model to "fool" the discriminator and classify the fake images as real.

For a much more detailed overview of how GANs work, see Deep Learning with Python.

Let's implement this training loop. First, create the discriminator meant to classify fake vs. real digits:
```python
discriminator = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.GlobalMaxPooling2D(),
        layers.Dense(1),
    ],
    name="discriminator",
)
discriminator.summary()
```
Then let's create a generator network, which turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits):
```python
latent_dim = 128

generator = keras.Sequential(
    [
        keras.Input(shape=(latent_dim,)),
        # We want to generate 128 coefficients to reshape into a 7x7x128 map
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(alpha=0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
    ],
    name="generator",
)
```
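It is worth tracing how the generator reaches the (28, 28, 1) output. With padding="same", a Conv2DTranspose with stride s maps a spatial size n to n * s, and the final stride-1 Conv2D keeps the size unchanged. A tiny arithmetic sketch (not part of the original guide) makes the 7 → 14 → 28 progression explicit:

```python
# Sketch of the generator's spatial-size arithmetic, assuming padding="same".
def transpose_conv_out(n, stride):
    """Output size of a 'same'-padded transposed convolution with the given stride."""
    return n * stride

size = 7                             # after Reshape((7, 7, 128))
size = transpose_conv_out(size, 2)   # first Conv2DTranspose -> 14
size = transpose_conv_out(size, 2)   # second Conv2DTranspose -> 28
print(size)                          # the final stride-1 Conv2D keeps 28 -> (28, 28, 1)
```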
Here's the key part: the training loop. As you can see, it is quite straightforward. The training step function is only 17 lines long.
```python
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)

# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)


@tf.function
def train_step(real_images):
    # Sample random points in the latent space
    random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
    # Decode them to fake images
    generated_images = generator(random_latent_vectors)
    # Combine them with real images
    combined_images = tf.concat([generated_images, real_images], axis=0)

    # Assemble labels discriminating real from fake images
    labels = tf.concat(
        [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
    )
    # Add random noise to the labels - important trick!
    labels += 0.05 * tf.random.uniform(labels.shape)

    # Train the discriminator
    with tf.GradientTape() as tape:
        predictions = discriminator(combined_images)
        d_loss = loss_fn(labels, predictions)
    grads = tape.gradient(d_loss, discriminator.trainable_weights)
    d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))

    # Sample random points in the latent space
    random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
    # Assemble labels that say "all real images"
    misleading_labels = tf.zeros((batch_size, 1))

    # Train the generator (note that we should *not* update the weights
    # of the discriminator)!
    with tf.GradientTape() as tape:
        predictions = discriminator(generator(random_latent_vectors))
        g_loss = loss_fn(misleading_labels, predictions)
    grads = tape.gradient(g_loss, generator.trainable_weights)
    g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
    return d_loss, g_loss, generated_images
```
Let's train our GAN by repeatedly calling train_step on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU.
```python
import os

# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)

epochs = 1  # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"

for epoch in range(epochs):
    print("\nStart epoch", epoch)

    for step, real_images in enumerate(dataset):
        # Train the discriminator & generator on one batch of real images.
        d_loss, g_loss, generated_images = train_step(real_images)

        # Logging.
        if step % 200 == 0:
            # Print metrics
            print("discriminator loss at step %d: %.2f" % (step, d_loss))
            print("adversarial loss at step %d: %.2f" % (step, g_loss))

            # Save one generated image
            img = tf.keras.preprocessing.image.array_to_img(
                generated_images[0] * 255.0, scale=False
            )
            img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))

        # To limit execution time we stop after 10 steps.
        # Remove the lines below to actually train the model!
        if step > 10:
            break
```
1. Implement the K-means algorithm

In this step you will implement, one by one, the functions that make up the K-means algorithm. It is important to read and understand each function's documentation, especially the expected dimensions of the output data.

1.1 Initialize the centroids

The first step of the algorithm is to initialize the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can greatly reduce the time to convergence. To initialize the centroids you can use prior knowledge about the data, even without knowing the number of groups or their distribution.

Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
```python
import numpy as np

def calculate_initial_centers(dataset, k):
    """
    Randomly initializes the starting centroids

    Arguments:
    dataset -- the data - [m,n]
    k -- number of desired centroids

    Returns:
    centroids -- list of computed centroids - [k,n]
    """
    #### CODE HERE ####
    # Sample each coordinate uniformly between that feature's min and max.
    # Unlike a hard-coded x/y version, this works for any number of
    # dimensions, matching the [m,n] contract in the docstring.
    mins = dataset.min(axis=0)
    maxs = dataset.max(axis=0)
    centroids = np.random.uniform(mins, maxs, size=(k, dataset.shape[1]))
    ### END OF CODE ###
    return centroids
```
2019/09-clustering/cl_Helio.ipynb
InsightLab/data-science-cookbook
mit
1.2 Define the clusters

In the second step of the algorithm, each data point is assigned to a group according to the computed centroids.

1.2.1 Distance function

Implement the Euclidean distance between two points (a, b), defined by the equation:

$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$

$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
```python
import math

def euclidean_distance(a, b):
    """
    Computes the Euclidean distance between points a and b

    Arguments:
    a -- a point in space - [1,n]
    b -- a point in space - [1,n]

    Returns:
    distance -- the Euclidean distance between the points
    """
    #### CODE HERE ####
    # Vectorized form; equivalent to summing the squared coordinate
    # differences in a loop and taking the square root.
    distance = math.sqrt(sum((a - b) ** 2))
    ### END OF CODE ###
    return distance
```
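As a quick sanity check of the formula (an illustration added here, not part of the original notebook), the classic 3-4-5 right triangle works nicely. The plain-Python variant below expresses the same sum without NumPy arrays:

```python
import math

# Plain-Python version of the Euclidean distance formula above,
# summing squared coordinate differences over the two points.
def euclidean_distance_py(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(euclidean_distance_py([0, 0], [3, 4]))  # 5.0
```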
1.2.2 Find the nearest centroid

Using the distance function you just coded, complete the function below to compute the centroid closest to an arbitrary point.

Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
```python
def nearest_centroid(a, centroids):
    """
    Computes the index of the centroid nearest to point a

    Arguments:
    a -- a point in space - [1,n]
    centroids -- list of centroids - [k,n]

    Returns:
    nearest_index -- index of the nearest centroid
    """
    #### CODE HERE ####
    nearest_index = 0
    dist = float("inf")
    for i in range(len(centroids)):
        d = euclidean_distance(a, centroids[i])
        if d < dist:
            nearest_index = i
            dist = d
    ### END OF CODE ###
    return nearest_index
```
1.2.3 Find the nearest centroid for every point in the dataset

Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for every point in the dataset.
```python
def all_nearest_centroids(dataset, centroids):
    """
    Computes the index of the nearest centroid for every point in the dataset

    Arguments:
    dataset -- the data - [m,n]
    centroids -- list of centroids - [k,n]

    Returns:
    nearest_indexes -- indexes of the nearest centroids - [m,1]
    """
    #### CODE HERE ####
    nearest_indexes = np.array(
        [nearest_centroid(point, centroids) for point in dataset]
    )
    ### END OF CODE ###
    return nearest_indexes
```
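These building blocks cover initialization and assignment; the remaining step of K-means (not shown in this excerpt) recomputes each centroid as the mean of its assigned points and repeats until the assignments stabilize. A minimal pure-Python sketch of that loop, written here for illustration and independent of the notebook's own helpers:

```python
import math

# Illustrative sketch of the full K-means iteration (assignment + update),
# not the notebook's own code. Points and centroids are plain lists.
def nearest_index(point, centroids):
    dists = [math.dist(point, c) for c in centroids]
    return dists.index(min(dists))

def kmeans(points, centroids, max_iters=100):
    for _ in range(max_iters):
        assignments = [nearest_index(p, centroids) for p in points]
        new_centroids = []
        for j in range(len(centroids)):
            cluster = [p for p, a in zip(points, assignments) if a == j]
            if cluster:
                dim = len(cluster[0])
                new_centroids.append(
                    [sum(p[d] for p in cluster) / len(cluster) for d in range(dim)]
                )
            else:
                # Keep an empty cluster's centroid where it was
                new_centroids.append(centroids[j])
        if new_centroids == centroids:
            break  # assignments (and centroids) have converged
        centroids = new_centroids
    return centroids, assignments

points = [[0, 0], [0, 1], [10, 10], [10, 11]]
centroids, labels = kmeans(points, [[0, 0], [10, 10]])
print(centroids)  # [[0.0, 0.5], [10.0, 10.5]]
print(labels)     # [0, 0, 1, 1]
```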
1.3 Evaluation metric

After forming the clusters, how do we know whether the result is any good? For that we need an evaluation metric. The K-means algorithm aims to pick centroids that minimize the sum of squared distances between the points of a cluster and its centroid. This metric is known as inertia.

$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$

Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it has some drawbacks:

- Inertia assumes that clusters are convex and isotropic, which is not always the case. It responds poorly to elongated clusters or manifolds with irregular shapes.
- Inertia is not a normalized metric: we only know that lower values are better and zero is optimal. But in very high-dimensional spaces, Euclidean distances tend to become inflated (an instance of the so-called "curse of dimensionality"). Running a dimensionality-reduction algorithm such as PCA can alleviate this problem and speed up the computations.

Source: https://scikit-learn.org/stable/modules/clustering.html

So that we can evaluate our clusters, implement the inertia metric below; you can use the Euclidean distance function built earlier.

$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
```python
def inertia(dataset, centroids, nearest_indexes):
    """
    Sum of squared distances from each sample to its nearest cluster center.

    Arguments:
    dataset -- the data - [m,n]
    centroids -- list of centroids - [k,n]
    nearest_indexes -- indexes of the nearest centroids - [m,1]

    Returns:
    inertia -- total sum of squared distances between the points of a
               cluster and their centroid
    """
    #### CODE HERE ####
    # Compare each point against its own assigned centroid (the original
    # draft measured every point against a single fixed centroid).
    # Sanity check: with dataset [[1,2,3],[3,6,5],[4,5,6]] and the single
    # centroid [2,3,4], the inertia should be 26.
    inertia = sum(
        euclidean_distance(dataset[i], centroids[nearest_indexes[i]]) ** 2
        for i in range(len(dataset))
    )
    ### END OF CODE ###
    return inertia
```
Check the result of the algorithm below!
```python
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)

print("Inertia = ", kmeans.inertia_)

plt.scatter(dataset[:, 0], dataset[:, 1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            marker='^', c='red', s=100)
plt.show()
```
2.2 Compare with the Scikit-Learn algorithm

Use scikit-learn's K-means implementation on the same dataset. Show the inertia value and the clusters produced by the model. You can reuse the structure of the previous code cell.

Hint: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans
```python
#### CODE HERE ####
# source: https://stackabuse.com/k-means-clustering-with-scikit-learn/
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

plt.scatter(dataset[:, 0], dataset[:, 1], label='True Position')

kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)

print("Inertia = ", kmeans.inertia_)
# print(kmeans.cluster_centers_)
# print(kmeans.labels_)

plt.scatter(dataset[:, 0], dataset[:, 1], c=kmeans.labels_, cmap='Set3')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            marker='^', color='black', s=100)
```
4. Real dataset

Exercises

1 - Apply the K-means algorithm you developed to the Iris dataset [1]. Report the results using at least two cluster-evaluation metrics [2].

[1] http://archive.ics.uci.edu/ml/datasets/iris
[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation

Hint: you can use the completeness and homogeneity metrics.

2 - Try to improve the result from the previous question using a data-mining technique, and explain the difference you obtain. Hint: you can try normalizing the data [3].

[3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html

3 - What number of clusters (K) did you pick in the previous question? Implement the Elbow Method without using a library and find the most suitable value of K. Then use that value in the K-means algorithm.

4 - Using the results of the previous question, recompute the metrics and comment on the results. Was there an improvement? Explain.
```python
#### CODE HERE ####
import pandas as pd
from sklearn import metrics

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
data = pd.read_csv(url, header=None)

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
metrics.homogeneity_score(labels_true, labels_pred)
metrics.completeness_score(labels_true, labels_pred)

#2

#3

#4
```
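Exercise 3 asks for the Elbow Method without a library. One way to sketch the K-picking step (an illustration, not the notebook's answer): compute the inertia for a range of K values, then pick the K at the sharpest bend of the curve, e.g. where the drop in inertia slows down the most (the largest second difference). The inertia values below are made up purely for illustration; in the exercise they would come from running your own K-means and inertia functions for each K.

```python
# Elbow-picking sketch: given inertias for increasing K, return the K at
# the sharpest bend (largest second difference of the inertia curve).
def elbow_k(inertias, ks):
    # second difference at index i: (I[i-1] - I[i]) - (I[i] - I[i+1])
    bends = [
        (inertias[i - 1] - inertias[i]) - (inertias[i] - inertias[i + 1])
        for i in range(1, len(inertias) - 1)
    ]
    return ks[1 + bends.index(max(bends))]

ks = [1, 2, 3, 4, 5, 6]
inertias = [680.0, 152.0, 78.0, 57.0, 46.0, 39.0]  # fabricated example curve
print(elbow_k(inertias, ks))  # 2 for this made-up curve
```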
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:

- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin: Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)

Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.

Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
```python
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis=1)

# Show the new dataset with 'Survived' removed
display(data.head())
display(outcomes.head())
```
titanic_survival_exploration/titanic_survival_exploration.ipynb
timzhangau/ml_nano
mit
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
```python
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """

    predictions = []
    for _, passenger in data.iterrows():
        # Predict 1 ("survived") for female passengers, 0 otherwise
        if passenger['Sex'] == 'female':
            prediction = 1
        else:
            prediction = 0
        predictions.append(prediction)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_1(data)
```
Answer: Predictions have an accuracy of 78.68% under the assumption that all female passengers survived and the rest did not.

Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age.
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
```python
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """

    predictions = []
    for _, passenger in data.iterrows():
        # Females survive; males survive only if younger than 10
        if passenger['Sex'] == 'female':
            prediction = 1
        else:
            if passenger['Age'] < 10:
                prediction = 1
            else:
                prediction = 0
        predictions.append(prediction)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_2(data)
```
Answer: Predictions have an accuracy of 79.35% under the assumption that all female passengers and all male passengers younger than 10 survived. Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over simply using the feature Sex alone.

Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. Pclass, Sex, Age, SibSp, and Parch are some suggested features to try. Use the survival_stats function below to examine various survival statistics.

Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Pclass == 3"])
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
```python
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """

    predictions = []
    for _, passenger in data.iterrows():
        # Females survive unless they are in 3rd class and older than 20;
        # males survive only if younger than 10
        if passenger['Sex'] == 'female':
            if passenger['Pclass'] == 3 and passenger['Age'] > 20:
                prediction = 0
            else:
                prediction = 1
        else:
            if passenger['Age'] < 10:
                prediction = 1
            else:
                prediction = 0
        predictions.append(prediction)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_3(data)
```
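A rule set like this can be sanity-checked by scoring it against known outcomes. The toy rows below are fabricated purely for illustration (the real notebook scores the full dataset with its own accuracy helper):

```python
# Toy illustration of scoring the rule set: female survives unless 3rd
# class and over 20; male survives only if under 10. Rows are made up.
def predict(passenger):
    if passenger['Sex'] == 'female':
        if passenger['Pclass'] == 3 and passenger['Age'] > 20:
            return 0
        return 1
    return 1 if passenger['Age'] < 10 else 0

toy = [
    ({'Sex': 'female', 'Pclass': 1, 'Age': 30}, 1),
    ({'Sex': 'female', 'Pclass': 3, 'Age': 40}, 0),
    ({'Sex': 'male',   'Pclass': 3, 'Age': 5},  1),
    ({'Sex': 'male',   'Pclass': 2, 'Age': 35}, 0),
]
accuracy = sum(predict(p) == y for p, y in toy) / len(toy)
print(accuracy)  # 1.0 on this toy set
```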
You can define such a function at any point in your notebook. Note that inside your function definition you need to use self rather than the word browser.
```python
def login(self, data):
    self.get_url(data['url'])
    if self.is_available('name=credential_0', 1):
        self.kb_type('name=credential_0', data['username'])
        self.kb_type('name=credential_1', data['password'])
        self.submit_btn('Login')
    assert self.is_available("Logout")
    return self.get_element("id=logged_in_user").text
```
notebooks/howto_dynamically_add_functions_to_browser.ipynb
ldiary/marigoso
mit
Once defined, you can call the register_function method of the test object to attach the function to the browser object.
test.register_function("browser", [login])
You can then confirm that login is now a bound method of browser and can be used right away, just like any other method bound to browser.
browser.login
You can re-execute the same cell over and over as many times as you want. Simply put your cursor in the cell again, edit at will, and type Shift-Enter to execute.

IPython can execute shell commands, which should be prefixed with !. For example, in the next cell, try issuing several system commands in-place with Ctrl-Enter, such as pwd and then ls:
```
!ls
!ls -la
```
howto/00-abc-ipython.ipynb
vkuznet/rep
apache-2.0
In a cell, you can type anything from a single python expression to an arbitrarily long amount of code (although for reasons of readability, you are recommended to limit this to a few dozen lines):
```python
def f(x):
    """My function
    x : parameter"""
    return x + 1

print("f(3) = ", f(3))
```
User interface

When you start a new notebook server with ipython notebook, your browser should open into the Dashboard, a page listing all notebooks available in the current directory as well as letting you create new notebooks. On this page, you can also drag and drop existing .py files over the file list to import them as notebooks (see the manual for further details on how these files are interpreted).

Once you open an existing notebook (like this one) or create a new one, you are in the main notebook interface, which consists of a main editing area (where these cells are contained) as well as a collapsible left panel, a permanent header area at the top, and a pager that rises from the bottom when needed and can be collapsed again.

Main editing area

Here, you can move with the arrow keys or using the scroll bars. The cursor enters code cells immediately, but only selects text (markdown) cells without entering them; to enter a text cell, use Enter, and Shift-Enter to exit it again (just like executing a code cell).

Header bar

The header area at the top allows you to rename an existing notebook and open up a short help tooltip. This area also shows a Busy mark on the right whenever the kernel is busy executing code.

Left panel

This panel contains a number of panes that can be collapsed vertically by clicking on their title bar, and the whole panel can also be collapsed by clicking on the vertical divider (note that you cannot drag the divider; for now you can only click on it).

- The Notebook section contains actions that pertain to the whole notebook, such as downloading the current notebook either in its original format or as a .py script, and printing/exporting it to different markup languages.
- The Cell section lets you manipulate individual cells; the names should be fairly self-explanatory.
- The Kernel section lets you signal the kernel executing your code. Interrupt does the equivalent of hitting Ctrl-C at a terminal, and Restart fully kills the kernel process and starts a fresh one. Obviously this means that all your previous variables are destroyed, but it also makes it easy to get a fresh kernel in which to re-execute a notebook.
- The Help section contains links to the documentation of some projects closely related to IPython as well as the minimal keybindings you need to know.

You should use Esc-h (or click the QuickHelp button at the top) and learn some of the other keybindings, as it will make your workflow much more fluid and efficient. Note that there are two modes, Command mode and Edit mode (details in the reference: press Esc-h).

The pager at the bottom

Whenever IPython needs to display additional information, such as when you type somefunction? in a cell, the notebook opens a pane at the bottom where this information is shown. You can keep this pager pane open for reference (it doesn't block input in the main area) or dismiss it by clicking on its divider bar. Try it by executing the following cell:
dict??
Tab completion and tooltips

The notebook uses the same underlying machinery for tab completion that IPython uses at the terminal, but displays the information differently. When you complete with the Tab key, IPython shows a drop-down list with all available completions. If you type more characters while this list is open, IPython automatically eliminates from the list the options that don't match the new characters; once there is only one option left you can hit Tab once more (or Enter) to complete.

In addition, if you hit Tab inside open parentheses, IPython will search for the docstring of the last object to the left of the parens and will display it in a tooltip. For example, type list(<TAB> and you will see the docstring for the builtin list constructor:
```python
# Position your cursor after the ( and hit the Tab key:
list()
```
Display of complex objects

As the 'tour' notebook shows, the IPython notebook has fairly sophisticated display capabilities. In addition to the examples there, you can study the display_protocol notebook in this same examples folder, to learn how to customize arbitrary objects (in your own code or external libraries) to display in the notebook in any way you want, including graphical forms or mathematical expressions.

Plotting support

To turn on inline plotting, you can use the %matplotlib magic; after executing it, plots will not be shown in a new window (as done usually) but inside the notebook:
```python
%matplotlib inline
# an alternative is the following code:
# %pylab inline
# which also does many imports from the numpy and matplotlib libraries and is very handy,
# though one shall remember that global imports are evil

import matplotlib.pyplot as plt
plt.plot([1, 2, 4, 8, 16])
```
Other handy features

IPython lets you upload files to the server and modify any text files on the server; revisit the Dashboard to check out these features. Also, from the Dashboard you can start shell sessions, which is useful when some server-side configuration is needed. Running IPython locally is a very popular option and doesn't require much experience, but you get the greatest benefit from running IPython on a server. Note that apart from installing IPython and opening network access, you should also protect access to it.

IPython quick reference

A short summary of the special commands available will be shown after executing this cell:
%quickref
(Optional) Experimenting with Feature Extraction This exercise is meant to give you an opportunity to explore the sliding window computations and how their parameters affect feature extraction. There aren't any right or wrong answers -- it's just a chance to experiment! We've provided you with some images and kernels you can use. Run this cell to see them.
```python
from learntools.computer_vision.visiontools import edge, blur, bottom_sobel, emboss, sharpen, circle

image_dir = '../input/computer-vision-resources/'
circle_64 = tf.expand_dims(circle([64, 64], val=1.0, r_shrink=4), axis=-1)
kaggle_k = visiontools.read_image(image_dir + str('k.jpg'), channels=1)
car = visiontools.read_image(image_dir + str('car_illus.jpg'), channels=1)
car = tf.image.resize(car, size=[200, 200])
images = [(circle_64, "circle_64"), (kaggle_k, "kaggle_k"), (car, "car")]

plt.figure(figsize=(14, 4))
for i, (img, title) in enumerate(images):
    plt.subplot(1, len(images), i+1)
    plt.imshow(tf.squeeze(img))
    plt.axis('off')
    plt.title(title)
plt.show();

kernels = [(edge, "edge"), (blur, "blur"), (bottom_sobel, "bottom_sobel"),
           (emboss, "emboss"), (sharpen, "sharpen")]
plt.figure(figsize=(14, 4))
for i, (krn, title) in enumerate(kernels):
    plt.subplot(1, len(kernels), i+1)
    visiontools.show_kernel(krn, digits=2, text_size=20)
    plt.title(title)
plt.show()
```
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
To choose one to experiment with, just enter its name in the appropriate place below. Then, set the parameters for the window computation. Try out some different combinations and see what they do!
# YOUR CODE HERE: choose an image image = circle_64 # YOUR CODE HERE: choose a kernel kernel = bottom_sobel visiontools.show_extraction( image, kernel, # YOUR CODE HERE: set parameters conv_stride=1, conv_padding='valid', pool_size=2, pool_stride=2, pool_padding='same', subplot_shape=(1, 4), figsize=(14, 6), )
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
The Receptive Field Trace back all the connections from some neuron and eventually you reach the input image. All of the input pixels a neuron is connected to make up that neuron's receptive field. The receptive field just tells you which parts of the input image a neuron receives information from. As we've seen, if your first layer is a convolution with $3 \times 3$ kernels, then each neuron in that layer gets input from a $3 \times 3$ patch of pixels (except maybe at the border). What happens if you add another convolutional layer with $3 \times 3$ kernels? Consider this next illustration: <figure> <img src="https://i.imgur.com/HmwQm2S.png" alt="Illustration of the receptive field of two stacked convolutions." width=250> </figure> Now trace back the connections from the neuron at top and you can see that it's connected to a $5 \times 5$ patch of pixels in the input (the bottom layer): each neuron in the $3 \times 3$ patch in the middle layer is connected to a $3 \times 3$ input patch, but they overlap in a $5 \times 5$ patch. So that neuron at top has a $5 \times 5$ receptive field. 1) Growing the Receptive Field Now, if you added a third convolutional layer with a (3, 3) kernel, what receptive field would its neurons have? Run the cell below for an answer. (Or see a hint first!)
# View the solution (Run this code cell to receive credit!) q_1.check() # Lines below will give you a hint #_COMMENT_IF(PROD)_ q_1.hint()
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
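The receptive-field arithmetic above can be checked with a short helper (a sketch for illustration, not part of the graded exercise): with stride-1 convolutions, each extra $k \times k$ layer adds $k - 1$ pixels to the receptive field.

```python
def receptive_field(kernel_sizes):
    """Receptive field of a stack of stride-1 convolutions."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1  # each layer extends the field by (k - 1)
    return rf

print(receptive_field([3, 3]))     # two 3x3 layers -> 5
print(receptive_field([3, 3, 3]))  # three 3x3 layers -> 7
```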
So why stack layers like this? Three (3, 3) kernels have 27 parameters, while one (7, 7) kernel has 49, though they both create the same receptive field. This stacking-layers trick is one of the ways convnets are able to create large receptive fields without increasing the number of parameters too much. You'll see how to do this yourself in the next lesson! (Optional) One-Dimensional Convolution We've seen how convolutional networks can learn to extract features from (two-dimensional) images. It turns out that convnets can also learn to extract features from things like time-series (one-dimensional) and video (three-dimensional). In this (optional) exercise, we'll see what convolution looks like on a time-series. The time series we'll use is from Google Trends. It measures the popularity of the search term "machine learning" for weeks from January 25, 2015 to January 15, 2020.
import pandas as pd # Load the time series as a Pandas dataframe machinelearning = pd.read_csv( '../input/computer-vision-resources/machinelearning.csv', parse_dates=['Week'], index_col='Week', ) machinelearning.plot();
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
What about the kernels? Images are two-dimensional and so our kernels were 2D arrays. A time-series is one-dimensional, so what should the kernel be? A 1D array! Here are some kernels sometimes used on time-series data:
detrend = tf.constant([-1, 1], dtype=tf.float32) average = tf.constant([0.2, 0.2, 0.2, 0.2, 0.2], dtype=tf.float32) spencer = tf.constant([-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 32, 3, -5, -6, -3], dtype=tf.float32) / 320
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
Convolution on a sequence works just like convolution on an image. The difference is just that a sliding window on a sequence has only one direction to travel -- left to right -- instead of the two directions on an image. And just like before, the features picked out depend on the pattern of numbers in the kernel. Can you guess what kind of features these kernels extract? Uncomment one of the kernels below and run the cell to see!
# UNCOMMENT ONE kernel = detrend # kernel = average # kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); #%%RM_IF(PROD)%% # UNCOMMENT ONE kernel = detrend # kernel = average # kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); #%%RM_IF(PROD)%% # UNCOMMENT ONE # kernel = detrend kernel = average # kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot(); #%%RM_IF(PROD)%% # UNCOMMENT ONE # kernel = detrend # kernel = average kernel = spencer # Reformat for TensorFlow ts_data = machinelearning.to_numpy() ts_data = tf.expand_dims(ts_data, axis=0) ts_data = tf.cast(ts_data, dtype=tf.float32) kern = tf.reshape(kernel, shape=(*kernel.shape, 1, 1)) ts_filter = tf.nn.conv1d( input=ts_data, filters=kern, stride=1, padding='VALID', ) # Format as Pandas Series machinelearning_filtered = pd.Series(tf.squeeze(ts_filter).numpy()) machinelearning_filtered.plot();
notebooks/computer_vision/raw/ex4.ipynb
Kaggle/learntools
apache-2.0
We followed the simulation described in the paper for model selection in linear regression. There are two 150 $\times$ 21 design matrices used as input data. The first set was generated independently with $\rho = 0$ from $N(0, 1)$ and the second set was autocorrelated, AR(1) with $\rho = 0.7$. In addition to the 21 original variables, the squares of the 21 variables and their pairwise interactions were also added, so the total number of variables is $k_T = 252$. The added variables all have true coefficients equal to 0. The responses were generated from 5 models with the following labels: H0: all $\beta$s $= 0$; H1: $\beta_7=1$, $\beta_{14}=1$; H2: $\beta_6=9$, $\beta_7=4$, $\beta_8=1$, $\beta_{13}=9$, $\beta_{14}=4$, $\beta_{15}=1$; H3: $\beta_5=25$, $\beta_6=16$, $\beta_7=9$, $\beta_8=4$, $\beta_9=1$, $\beta_{12}=25$, $\beta_{13}=16$, $\beta_{14}=9$, $\beta_{15}=4$, $\beta_{16}=1$; H4: $\beta_4=49$, $\beta_5=36$, $\beta_6=25$, $\beta_7=16$, $\beta_8=9$, $\beta_9=4$, $\beta_{10}=1$, $\beta_{11}=49$, $\beta_{12}=36$, $\beta_{13}=25$, $\beta_{14}=16$, $\beta_{15}=9$, $\beta_{16}=4$, $\beta_{17}=1$. The coefficients were standardized to achieve a theoretical $R^2 = 0.35$. With the known "true model", we can test the accuracy of the methods on this simulated data set. The authors posted their simulated data sets publicly online and we used the same data to check our results. We used several measures to compare the performance of Fast FSR, BIC, and LASSO: (1) size: average model size. (2) ME: average model error $= (1/n)|\hat{Y}- \mu|^2$. (3) FSR: average false selection rate = average of (number of unimportant variables selected)$/$(1 + number of total variables selected); a lower value is preferred. (4) MSE: average error mean square of the chosen model; a value close to $\sigma^2$ suggests the selection method is well tuned. (5) CSR: the average proportion of correctly chosen variables; a higher value is preferred. (6) JAC: Jaccard's measure, which combines FSR and CSR in a particular way; a higher value is preferred. The Monte Carlo standard errors of the above measures were calculated as well.
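As an illustration of how the per-fit measures are computed for a single selected model, here is a minimal sketch (hypothetical helper name; the simulation code below computes the averaged versions over 100 replicates):

```python
def selection_measures(selected, true_vars):
    """Per-fit FSR and CSR for one selected model.

    selected:  indices of variables chosen by the selection method
    true_vars: indices of variables with nonzero true coefficients
    """
    correct = len(set(selected) & set(true_vars))
    false = len(selected) - correct
    fsr = false / (1 + len(selected))  # false selection rate
    csr = correct / len(true_vars) if true_vars else 1.0  # correct selection rate
    return fsr, csr

# e.g. the truth is {6, 13}; the method picked {6, 13, 40}
fsr, csr = selection_measures([6, 13, 40], [6, 13])
```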
url1 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/x.quad.0.txt'
url2 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/x.quad.70.txt'
url3 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h0_0.rs35.txt'
url4 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h1_0.rs35.txt'
url5 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h2_0.rs35.txt'
url6 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h3_0.rs35.txt'
url7 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h4_0.rs35.txt'
url8 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h0_70.rs35.txt'
url9 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h1_70.rs35.txt'
url10 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h2_70.rs35.txt'
url11 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h3_70.rs35.txt'
url12 = 'http://www4.stat.ncsu.edu/~boos/var.select/sim/h4_70.rs35.txt'

# 0 - 20: independent standard normal
# 21 - 41: squares
# 42 - 251: pairwise interactions
x1 = pd.read_csv(url1, header = None, delim_whitespace = True)
x2 = pd.read_csv(url2, header = None, delim_whitespace = True)

# response
y1_h0 = pd.read_csv(url3, delim_whitespace = True)
y1_h1 = pd.read_csv(url4, delim_whitespace = True)
y1_h2 = pd.read_csv(url5, delim_whitespace = True)
y1_h3 = pd.read_csv(url6, delim_whitespace = True)
y1_h4 = pd.read_csv(url7, delim_whitespace = True)
y2_h0 = pd.read_csv(url8, delim_whitespace = True)
y2_h1 = pd.read_csv(url9, delim_whitespace = True)
y2_h2 = pd.read_csv(url10, delim_whitespace = True)
y2_h3 = pd.read_csv(url11, delim_whitespace = True)
y2_h4 = pd.read_csv(url12, delim_whitespace = True)

# simulation
# .ix is deprecated; .iloc[:, 0:21] selects the same 21 original variables
# (the label-based .ix[:, 0:20] slice was inclusive of column 20)
x1_matrix = x1.iloc[:, 0:21]
x2_matrix = x2.iloc[:, 0:21]
hbeta = np.zeros((5, 14)) # true betas for five models
hbeta[0,:] = np.repeat(-1, 14)
hbeta[1,:] = np.concatenate([np.array([6, 13]), np.repeat(-1, 12)])
hbeta[2,:] = np.concatenate([np.array([5, 6, 7, 12, 13, 14]), np.repeat(-1, 8)])
hbeta[3,:] = np.concatenate([np.array([4, 5, 6, 7, 8, 11, 12, 13, 14, 15]), np.repeat(-1, 4)])
hbeta[4,:] = np.array([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])

def simulation(x, y, model):
    """function that can perform simulation for all three models"""
    res = np.zeros((5, 14))
    me_out = np.zeros((100, 4))
    me_out1_21 = np.zeros((100, 4, 5))
    me_out2_21 = np.zeros((100, 4, 5))
    jac = np.zeros(100)
    data = y
    for m in range(5):
        hdata = data[m]
        hdata_y = np.array(hdata.iloc[:, 2]).reshape([100, 150])
        mu = hdata.iloc[0:150, 1] # true means (.ix[0:149, 1] was inclusive)
        beta = hbeta[m,:]
        for i in tqdm(range(100)):
            out_bic = model(x, hdata_y[i,:])
            size = out_bic['size'] # number of fitted x's
            correct = len(np.intersect1d(out_bic['index'], beta)) # number correct
            false = size - correct # number false
            me_out[i, 0] = np.mean((out_bic['fitted'] - mu)**2) # model error
            me_out[i, 1] = size
            me_out[i, 2] = false
            me_out[i, 3] = out_bic['residual']/out_bic['df_residual']
        me_out1_21[:,:,1] = np.round(me_out, 4)
        model_size = sum(beta != -1) # true model size
        for i in range(100):
            if me_out[i, 2] + model_size > 0:
                jac[i] = (me_out[i, 1] - me_out[i, 2])/(me_out[i, 2] + model_size)
            else:
                jac[i] = 1
        res[m, 0] = 0 # rho
        res[m, 1] = m # model h0, h1
        res[m, 2] = np.mean(me_out[:,1]) # model size
        res[m, 3] = np.std(me_out[:,1]/10) # se of the mean
        res[m, 4] = np.mean(me_out[:,0]) # me
        res[m, 5] = np.std(me_out[:,0])/10
        res[m, 6] = np.mean(me_out[:,2]/(1 + me_out[:,1])) # fsr
        res[m, 7] = np.std(me_out[:,2]/(1 + me_out[:,1]))/10
        res[m, 8] = np.mean(me_out[:,3]) # mse of selected model
        res[m, 9] = np.std(me_out[:,3])/10
        if model_size > 0:
            csr = (me_out[:,1] - me_out[:,2])/model_size
        else:
            csr = np.ones(100)
        res[m, 10] = np.mean(csr)
        res[m, 11] = np.std(csr)/10
        res[m, 12] = np.mean(jac)
        res[m, 13] = np.std(jac)/10
    return np.round(pd.DataFrame({'rho': res[:,0], 'H': res[:,1], 'size': res[:,2], 'me': res[:,4],
                                  'fsr_mr': res[:,6], 'mse': res[:,8], 'csr': res[:,10], 'jac': res[:,12],
                                  'se_size': res[:,3], 'se_me': res[:,5], 'se_fsr': res[:,7],
                                  'se_mse': res[:,9], 'se_csr': res[:,11], 'se_jac': res[:,13]}), 3)
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Fast FSR
res_fsr1 = simulation(x1_matrix, [y1_h0, y1_h1, y1_h2, y1_h3, y1_h4], fastfsr.fsr_fast) res_fsr1 res_fsr2 = simulation(x2_matrix, [y2_h0, y2_h1, y2_h2, y2_h3, y2_h4], fastfsr.fsr_fast) res_fsr2
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
In addition to Fast FSR, we also ran BIC and LASSO on the data sets and compared the results of the three methods. In the original paper, the authors used the R package leaps for best subset selection. However, no corresponding package or function exists in Python, so we also implemented a regression subset selection ourselves. The other method, LASSO with cross-validation, was computed with the package scikit-learn. BIC
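For reference, the BIC criterion used to compare candidate subsets can be sketched as follows (a simplified Gaussian-likelihood form for illustration, not the exact function from the project's `fastfsr` module):

```python
import numpy as np

def bic(rss, n, k):
    """BIC for a linear model with residual sum of squares `rss`,
    n observations, and k fitted parameters (Gaussian likelihood)."""
    return n * np.log(rss / n) + k * np.log(n)

# Among candidate subsets, best-subset selection keeps the one
# minimizing BIC: extra parameters must buy enough reduction in RSS.
```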
res_bic1 = simulation(x1_matrix, [y1_h0, y1_h1, y1_h2, y1_h3, y1_h4], fastfsr.bic_sim) res_bic1 res_bic2 = simulation(x2_matrix, [y2_h0, y2_h1, y2_h2, y2_h3, y2_h4], fastfsr.bic_sim) res_bic2
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
LASSO
res_lasso1 = simulation(x1_matrix, [y1_h0, y1_h1, y1_h2, y1_h3, y1_h4], fastfsr.lasso_fit) res_lasso1 res_lasso2 = simulation(x2_matrix, [y2_h0, y2_h1, y2_h2, y2_h3, y2_h4], fastfsr.lasso_fit) res_lasso2
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Model Comparison False Selection Rate
xlabel = [1, 2, 3, 4, 5] xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4'] fig, ax = plt.subplots(figsize = (6, 5)) fig.autofmt_xdate() plt.plot(xlabel, res_bic1['fsr_mr'], marker = 'o', markersize = 4, color = 'blue', linestyle = 'solid', label = 'BIC') plt.plot(xlabel, res_fsr1['fsr_mr'], marker = 'o', markersize = 4, color = 'green', linestyle = 'solid', label = 'Fast FSR') plt.plot(xlabel, res_lasso1['fsr_mr'], marker = 'o', markersize = 4, color = 'purple', linestyle = 'solid', label = 'LASSO') plt.legend(loc='upper right') ax.set_ylabel('FSR', fontsize = 16) ax.set_title('FSR Rates rho = 0') plt.xticks(xlabel, xlabel_name) pass xlabel = [1, 2, 3, 4, 5] xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4'] fig, ax = plt.subplots(figsize = (6, 5)) fig.autofmt_xdate() plt.plot(xlabel, res_bic2['fsr_mr'], marker = 'o', markersize = 4, color = 'blue', linestyle = 'solid', label = 'BIC') plt.plot(xlabel, res_fsr2['fsr_mr'], marker = 'o', markersize = 4, color = 'green', linestyle = 'solid', label = 'Fast FSR') plt.plot(xlabel, res_lasso2['fsr_mr'], marker = 'o', markersize = 4, color = 'purple', linestyle = 'solid', label = 'LASSO') plt.legend(loc='upper right') ax.set_ylabel('FSR', fontsize = 16) ax.set_title('FSR Rates rho = 0.7') plt.xticks(xlabel, xlabel_name) pass
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Correct Selection Rate
xlabel = [1, 2, 3, 4, 5] xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4'] fig, ax = plt.subplots(figsize = (6, 5)) fig.autofmt_xdate() plt.plot(xlabel, res_bic1['csr'], marker = 'o', markersize = 4, color = 'blue', linestyle = 'solid', label = 'BIC') plt.plot(xlabel, res_fsr1['csr'], marker = 'o', markersize = 4, color = 'green', linestyle = 'solid', label = 'Fast FSR') plt.plot(xlabel, res_lasso1['csr'], marker = 'o', markersize = 4, color = 'purple', linestyle = 'solid', label = 'LASSO') plt.legend(loc='upper right') ax.set_ylabel('CSR', fontsize = 16) ax.set_title('CSR Rates rho = 0') plt.xticks(xlabel, xlabel_name) plt.ylim([0,1.05]) pass
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
Average Model Size
xlabel = [1, 2, 3, 4, 5] xlabel_name = ['H0', 'H1', 'H2', 'H3', 'H4'] fig, ax = plt.subplots(figsize = (6, 5)) fig.autofmt_xdate() plt.plot(xlabel, res_bic1['size'], marker = 'o', markersize = 4, color = 'blue', linestyle = 'solid', label = 'BIC') plt.plot(xlabel, res_fsr1['size'], marker = 'o', markersize = 4, color = 'green', linestyle = 'solid', label = 'Fast FSR') plt.plot(xlabel, res_lasso1['size'], marker = 'o', markersize = 4, color = 'purple', linestyle = 'solid', label = 'LASSO') plt.legend(loc='lower right') ax.set_ylabel('Model Size', fontsize = 16) ax.set_title('Model Size rho = 0') plt.xticks(xlabel, xlabel_name) pass
simulation_fastfsr.ipynb
CeciliaShi/STA-663-Final-Project
mit
End-to-End Example for the BigQuery TensorFlow Reader <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/bigquery"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/bigquery.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/bigquery.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/io/tutorials/bigquery.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> Overview This tutorial shows how to use the BigQuery TensorFlow reader to train a neural network with the Keras sequential API. Dataset This tutorial uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository. The dataset contains information about people from the 1994 census database, including age, education, marital status, occupation, and whether their annual income exceeds $50,000. Setup Set up your GCP project The following steps are required regardless of your notebook environment. Select or create a GCP project. Make sure that billing is enabled for your project. Enable the BigQuery Storage API. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Install the required packages and restart the runtime.
try: # Use the Colab's preinstalled TensorFlow 2.x %tensorflow_version 2.x except: pass !pip install fastavro !pip install tensorflow-io==0.9.0 !pip install google-cloud-bigquery-storage
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Authenticate.
from google.colab import auth auth.authenticate_user() print('Authenticated')
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Set your project ID.
PROJECT_ID = "<YOUR PROJECT>" #@param {type:"string"} ! gcloud config set project $PROJECT_ID %env GCLOUD_PROJECT=$PROJECT_ID
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Import Python libraries and define constants.
from __future__ import absolute_import, division, print_function, unicode_literals import os from six.moves import urllib import tempfile import numpy as np import pandas as pd import tensorflow as tf from google.cloud import bigquery from google.api_core.exceptions import GoogleAPIError LOCATION = 'us' # Storage directory DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data') # Download options. DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine/census/data' TRAINING_FILE = 'adult.data.csv' EVAL_FILE = 'adult.test.csv' TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE) EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE) DATASET_ID = 'census_dataset' TRAINING_TABLE_ID = 'census_training_table' EVAL_TABLE_ID = 'census_eval_table' CSV_SCHEMA = [ bigquery.SchemaField("age", "FLOAT64"), bigquery.SchemaField("workclass", "STRING"), bigquery.SchemaField("fnlwgt", "FLOAT64"), bigquery.SchemaField("education", "STRING"), bigquery.SchemaField("education_num", "FLOAT64"), bigquery.SchemaField("marital_status", "STRING"), bigquery.SchemaField("occupation", "STRING"), bigquery.SchemaField("relationship", "STRING"), bigquery.SchemaField("race", "STRING"), bigquery.SchemaField("gender", "STRING"), bigquery.SchemaField("capital_gain", "FLOAT64"), bigquery.SchemaField("capital_loss", "FLOAT64"), bigquery.SchemaField("hours_per_week", "FLOAT64"), bigquery.SchemaField("native_country", "STRING"), bigquery.SchemaField("income_bracket", "STRING"), ] UNUSED_COLUMNS = ["fnlwgt", "education_num"]
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Import census data into BigQuery Define helper methods to load the data into BigQuery.
def create_bigquery_dataset_if_necessary(dataset_id): # Construct a full Dataset object to send to the API. client = bigquery.Client(project=PROJECT_ID) dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id)) dataset.location = LOCATION try: dataset = client.create_dataset(dataset) # API request return True except GoogleAPIError as err: if err.code != 409: # http_client.CONFLICT raise return False def load_data_into_bigquery(url, table_id): create_bigquery_dataset_if_necessary(DATASET_ID) client = bigquery.Client(project=PROJECT_ID) dataset_ref = client.dataset(DATASET_ID) table_ref = dataset_ref.table(table_id) job_config = bigquery.LoadJobConfig() job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE job_config.source_format = bigquery.SourceFormat.CSV job_config.schema = CSV_SCHEMA load_job = client.load_table_from_uri( url, table_ref, job_config=job_config ) print("Starting job {}".format(load_job.job_id)) load_job.result() # Waits for table load to complete. print("Job finished.") destination_table = client.get_table(table_ref) print("Loaded {} rows.".format(destination_table.num_rows))
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Load the census data into BigQuery.
load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID) load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Verify the imported data. To do: Replace <YOUR PROJECT> with your PROJECT_ID. Note: --use_bqstorage_api imports the data using the BigQuery Storage API and verifies that you have permission to use it. Make sure it is enabled for your project (https://cloud.google.com/bigquery/docs/reference/storage/#enabling_the_api).
%%bigquery --use_bqstorage_api SELECT * FROM `<YOUR PROJECT>.census_dataset.census_training_table` LIMIT 5
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Load the census data into a TensorFlow DataSet using the BigQuery reader Read the census data from BigQuery and convert it into a TensorFlow DataSet.
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession

def transform_row(row_dict):
  # Trim all string tensors
  trimmed_dict = { column:
                  (tf.strings.strip(tensor) if tensor.dtype == 'string' else tensor)
                  for (column, tensor) in row_dict.items()
                  }
  # Extract feature column
  income_bracket = trimmed_dict.pop('income_bracket')
  # Convert feature column to 0.0/1.0
  income_bracket_float = tf.cond(tf.equal(tf.strings.strip(income_bracket), '>50K'),
                 lambda: tf.constant(1.0),
                 lambda: tf.constant(0.0))
  return (trimmed_dict, income_bracket_float)

def read_bigquery(table_name):
  tensorflow_io_bigquery_client = BigQueryClient()
  read_session = tensorflow_io_bigquery_client.read_session(
      "projects/" + PROJECT_ID,
      PROJECT_ID, table_name, DATASET_ID,
      list(field.name for field in CSV_SCHEMA
           if not field.name in UNUSED_COLUMNS),
      list(dtypes.double if field.field_type == 'FLOAT64'
           else dtypes.string for field in CSV_SCHEMA
           if not field.name in UNUSED_COLUMNS),
      requested_streams=2)

  dataset = read_session.parallel_read_rows()
  transformed_ds = dataset.map(transform_row)
  return transformed_ds

BATCH_SIZE = 32

training_ds = read_bigquery(TRAINING_TABLE_ID).shuffle(10000).batch(BATCH_SIZE)
eval_ds = read_bigquery(EVAL_TABLE_ID).batch(BATCH_SIZE)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Define feature columns
def get_categorical_feature_values(column): query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID) client = bigquery.Client(project=PROJECT_ID) dataset_ref = client.dataset(DATASET_ID) job_config = bigquery.QueryJobConfig() query_job = client.query(query, job_config=job_config) result = query_job.to_dataframe() return result.values[:,0] from tensorflow import feature_column feature_columns = [] # numeric cols for header in ['capital_gain', 'capital_loss', 'hours_per_week']: feature_columns.append(feature_column.numeric_column(header)) # categorical cols for header in ['workclass', 'marital_status', 'occupation', 'relationship', 'race', 'native_country', 'education']: categorical_feature = feature_column.categorical_column_with_vocabulary_list( header, get_categorical_feature_values(header)) categorical_feature_one_hot = feature_column.indicator_column(categorical_feature) feature_columns.append(categorical_feature_one_hot) # bucketized cols age = feature_column.numeric_column('age') age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) feature_columns.append(age_buckets) feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Build and train the model Build the model.
Dense = tf.keras.layers.Dense model = tf.keras.Sequential( [ feature_layer, Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'), Dense(75, activation=tf.nn.relu), Dense(50, activation=tf.nn.relu), Dense(25, activation=tf.nn.relu), Dense(1, activation=tf.nn.sigmoid) ]) # Compile Keras model model.compile( loss='binary_crossentropy', metrics=['accuracy'])
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Train the model.
model.fit(training_ds, epochs=5)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluating the model Evaluate the model.
loss, accuracy = model.evaluate(eval_ds) print("Accuracy", accuracy)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluate a few random samples.
sample_x = { 'age' : np.array([56, 36]), 'workclass': np.array(['Local-gov', 'Private']), 'education': np.array(['Bachelors', 'Bachelors']), 'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']), 'occupation': np.array(['Tech-support', 'Other-service']), 'relationship': np.array(['Husband', 'Husband']), 'race': np.array(['White', 'Black']), 'gender': np.array(['Male', 'Male']), 'capital_gain': np.array([0, 7298]), 'capital_loss': np.array([0, 0]), 'hours_per_week': np.array([40, 36]), 'native_country': np.array(['United-States', 'United-States']) } model.predict(sample_x)
site/ko/io/tutorials/bigquery.ipynb
tensorflow/docs-l10n
apache-2.0
Review Before we start playing with the actual implementations let us review a couple of things about RL. Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). -- Source: Wikipedia In summary we have a sequence of state action transitions with rewards associated with some states. Our goal is to find the optimal policy (pi) which tells us what action to take in each state. Passive Reinforcement Learning In passive Reinforcement Learning the agent follows a fixed policy and tries to learn the Reward function and the Transition model (if it is not aware of that). Passive Temporal Difference Agent The PassiveTDAgent class in the rl module implements the Agent Program (notice the usage of word Program) described in Fig 21.4 of the AIMA Book. PassiveTDAgent uses temporal differences to learn utility estimates. In simple terms we learn the difference between the states and backup the values to previous states while following a fixed policy. Let us look into the source before we see some usage examples.
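Before diving into the source, the backup the agent performs can be sketched in isolation (a simplified TD(0) step for illustration, not the `PassiveTDAgent` implementation itself):

```python
def td_update(U, s, s_next, reward, alpha=0.1, gamma=0.9):
    """One TD(0) backup: move U(s) toward reward + gamma * U(s')."""
    U.setdefault(s, 0.0)
    U.setdefault(s_next, 0.0)
    U[s] += alpha * (reward + gamma * U[s_next] - U[s])
    return U

U = td_update({}, 'A', 'B', reward=1.0, alpha=0.5)
# U['A'] moves halfway toward 1.0 + 0.9 * 0.0, i.e. to 0.5
```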
%psource PassiveTDAgent
rl.ipynb
grantvk/aima-python
mit
The Agent Program can be obtained by creating an instance of the class with the appropriate parameters. Because of the `__call__` method, the object that is created behaves like a callable and returns an appropriate action, as most Agent Programs do. To instantiate the object we need a policy (pi) and an MDP whose state utilities will be estimated. Let us import a GridMDP object from the mdp module. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting, as gamma = 0.9.
from mdp import sequential_decision_environment sequential_decision_environment
rl.ipynb
grantvk/aima-python
mit
Active Reinforcement Learning Unlike Passive Reinforcement Learning, in Active Reinforcement Learning we are not bound by a policy pi and we need to select our actions. In other words, the agent needs to learn an optimal policy. The fundamental tradeoff the agent faces is that of exploration vs. exploitation. QLearning Agent The QLearningAgent class in the rl module implements the Agent Program described in Fig 21.8 of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not require a transition model and hence is a model-free method. Let us look into the source before we see some usage examples.
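Before looking at the source, the central Q-learning backup can be sketched on its own (a simplified step for illustration, not the `QLearningAgent` agent program):

```python
def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning backup: Q(s,a) moves toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return Q

Q = q_update({}, 'A', 'right', reward=1.0, s_next='B',
             actions=['left', 'right'], alpha=0.5)
# Q[('A', 'right')] moves halfway toward 1.0, i.e. to 0.5
```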
%psource QLearningAgent
rl.ipynb
grantvk/aima-python
mit
1. GIS data selection First, run the cell below to browse to the directory containing your input CSV file and select the input file. A sample file is located at .\gep-onsset\test_data.
import tkinter as tk from tkinter import filedialog, messagebox from openpyxl import load_workbook root = tk.Tk() root.withdraw() root.attributes("-topmost", True) messagebox.showinfo('OnSSET', 'Open the input file with extracted GIS data') input_file = filedialog.askopenfilename() onsseter = SettlementProcessor(input_file) onsseter.df['IsUrban'] = 0 onsseter.df['Conflict'] = 0 onsseter.df['PerCapitaDemand'] = 0
Generator.ipynb
KTH-dESA/PyOnSSET
mit
2. Modelling period and target electrification rate Next, define the modelling period and the electrification rate to be achieved by the end of the analysis. Further down you will also define an intermediate year and target (in the Levers section).
start_year = 2018 end_year = 2030 electrification_rate_target = 1 # E.g. 1 for 100% electrification rate or 0.80 for 80% electrification rate
Generator.ipynb
KTH-dESA/PyOnSSET
mit
3. Levers Next, define the values of the levers. These are the 6 levers that are available on the GEP Explorer. Contrary to the GEP Explorer where each lever has two or three pre-defined values, here they can take any value. Lever 1: Population growth For the first lever first, enter the expected population in the country by the end year of the analysis (e.g. 2030). The default values in the GEP Explorer are based on the medium growth variant and high growth variant of the UN Population Database, found <a href="https://population.un.org/wpp/" target="_blank">here</a>.
end_year_pop = 26858618
Generator.ipynb
KTH-dESA/PyOnSSET
mit
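To sanity-check a chosen end-year population, you can compute the implied compound annual growth rate. Note that the 2018 base population below is a placeholder assumption for illustration, not a value from this notebook:

```python
end_year_pop = 26_858_618    # value entered for Lever 1 above
start_year_pop = 20_000_000  # ASSUMED 2018 base population, illustration only
years = 2030 - 2018

# Compound annual growth rate implied by the start/end populations
implied_growth = (end_year_pop / start_year_pop) ** (1 / years) - 1
print(f"Implied annual growth rate: {implied_growth:.2%}")
```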
Lever 2: Electricity demand target For the second lever, enter the target tier (level of electricity access) for urban and rural households respectively. This can take a value between 1 (lowest level of electricity access) and 5 (highest level of electricity access) as in ESMAP's Multi-Tier Framework for Measuring Electricity Access (found <a href="https://www.esmap.org/node/55526" target="_blank">here</a>). Alternatively, enter 6 to use a distribution of the tiers across the country based on poverty levels and GDP, according to the methodology found here.
urban_target_tier = 5 rural_target_tier = 3
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Lever 3: Intermediate electrification rate target For the third lever, enter the intermediate target year and target electrification rate for that year.
intermediate_year = 2025
intermediate_electrification_target = 0.63  # E.g. for a target electrification rate of 75%, enter 0.75
Lever 4: Grid generating cost of electricity This lever examines different average costs of generating electricity by the power-plants connected to the national grid. This cost is one of the factors that affect the LCoE of connecting to the grid (together with extension of the grid lines etc.), and may affect the split between grid- and off-grid technologies.
grid_generation_cost = 0.046622 ### The grid electricity generation cost in USD/kWh as expected in the end year of the analysis
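The grid LCoE itself is computed inside the Technology class further down, but the general shape of a levelized cost is worth recalling: discounted lifetime costs divided by discounted lifetime generation. A simplified sketch (not the OnSSET implementation; the numbers are illustrative):

```python
def simple_lcoe(investment, annual_om, annual_generation_kwh, lifetime, discount_rate):
    """Simplified levelized cost of electricity (USD/kWh):
    discounted lifetime costs divided by discounted lifetime generation."""
    costs = investment + sum(annual_om / (1 + discount_rate) ** t
                             for t in range(1, lifetime + 1))
    energy = sum(annual_generation_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime + 1))
    return costs / energy

# Illustrative: 2000 USD/kW plant, 40 USD/kW/year O&M,
# 4000 kWh/kW/year, 30-year life, 8% discount rate
lcoe = simple_lcoe(2000, 40, 4000, 30, 0.08)
```

The same structure explains why the discount rate and technology lifetime set below influence the grid versus off-grid split.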
Lever 5: PV system cost adjustment This lever reflects the role of PV system costs on electrification results. All PV based systems will be adjusted by a factor to simulate a higher or lower cost of PV systems (compared to the baseline values entered below). A value lower than 1 means lower investment costs for PV systems compared to baseline, and a value larger than 1 means higher investment cost for PV systems compared to baseline. E.g. 0.75 would mean a cost that is 25% lower compared to baseline costs.
pv_adjustment_factor = 1
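The adjustment is a straight multiplication applied to every PV capital cost in the Technology definitions further down. For example, with the baseline mini-grid PV cost from section 4b:

```python
mg_pv_base_cost = 2950       # USD/kW, baseline mini-grid PV capital cost (section 4b)
pv_adjustment_factor = 0.75  # a cost 25% lower than baseline

adjusted_cost = mg_pv_base_cost * pv_adjustment_factor
print(adjusted_cost)  # 2212.5
```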
Lever 6: Prioritization algorithm This lever reflects the prioritization approach used to achieve the electrification rate specified (in Lever 3) in the intermediate target year of the analysis. There are currently two options available: Baseline: Prioritizes grid densification first (ramp-up in already electrified clusters), then selection based on lowest investment cost per capita. Grid densification is limited by a grid capacity cap per year and a number of sensible grid connections per year. Intensification: Same as above, plus automatic grid intensification of all clusters within a predefined buffer of X km. (Default value for X is 2 km)
prioritization = 1  # Select 1 or 2. 1 = baseline, 2 = intensification, 3 = easily accessible settlements
auto_intensification = 2  # Buffer distance (km) for automatic intensification if choosing prioritization 1
annual_new_grid_connections_limit = 109  # This is the maximum amount of new households that can be connected to the grid in one year (thousands)
annual_grid_cap_gen_limit = 100  # This is the maximum generation capacity that can be added to the grid in one year (MW)
4. Enter country specific data In addition to the levers above, the user can customize a large number of variables describing the social, economic and technological environment in the selected country. Note! Most input values shall represent future estimates for the variable, i.e. they describe expected future conditions and NOT current values. a. Demographics and Social components
pop_start_year = 18620000 ### Write the population in the base year (e.g. 2018)
urban_ratio_start_year = 0.17 ### Write the urban population ratio in the base year (e.g. 2018)
urban_ratio_end_year = 0.20 ### Write the urban population ratio in the end year (e.g. 2030)
num_people_per_hh_urban = 4.3 ### Write the number of people per urban household expected in the end year (e.g. 2030)
num_people_per_hh_rural = 4.5 ### Write the number of people per rural household expected in the end year (e.g. 2030)
elec_ratio_start_year = 0.11 ### Write the electrification rate in the base year (e.g. 2018)
urban_elec_ratio = 0.492 ### Write the urban electrification rate in the base year (e.g. 2018)
rural_elec_ratio = 0.032 ### Write the rural electrification rate in the base year (e.g. 2018)
b. Technology specifications & costs The cell below contains all the information that is used to calculate the levelized costs for all the technologies, including grid. These default values should be updated to reflect the most accurate values in the country. There are currently 7 potential technologies to include in the model: * Grid * PV Mini-grid * Wind Mini-grid * Hydro Mini-grid * Diesel Mini-grid * PV Stand-alone systems * Diesel Stand-alone systems First, decide whether to include diesel technologies or not:
diesel_techs = 0 ### 0 = diesel NOT included, 1 = diesel included

grid_power_plants_capital_cost = 2000 ### The cost in USD/kW for capacity upgrades of the grid
grid_losses = 0.1 ### The fraction of electricity lost in transmission and distribution (percentage)
base_to_peak = 0.8 ### The ratio of base grid demand to peak demand (percentage)
existing_grid_cost_ratio = 0.1 ### The additional cost per round of electrification (percentage)

diesel_price = 0.5 ### This is the diesel price in USD/liter as expected in the end year of the analysis
sa_diesel_capital_cost = 938 ### Stand-alone Diesel capital cost (USD/kW) as expected in the years of the analysis
mg_diesel_capital_cost = 721 ### Mini-grid Diesel capital cost (USD/kW) as expected in the years of the analysis
mg_pv_capital_cost = 2950 ### Mini-grid PV capital cost (USD/kW) as expected in the years of the analysis
mg_wind_capital_cost = 3750 ### Mini-grid Wind capital cost (USD/kW) as expected in the years of the analysis
mg_hydro_capital_cost = 3000 ### Mini-grid Hydro capital cost (USD/kW) as expected in the years of the analysis

sa_pv_capital_cost_1 = 9620 ### Stand-alone PV capital cost (USD/kW) for household systems under 20 W
sa_pv_capital_cost_2 = 8780 ### Stand-alone PV capital cost (USD/kW) for household systems between 21-50 W
sa_pv_capital_cost_3 = 6380 ### Stand-alone PV capital cost (USD/kW) for household systems between 51-100 W
sa_pv_capital_cost_4 = 4470 ### Stand-alone PV capital cost (USD/kW) for household systems between 101-1000 W
sa_pv_capital_cost_5 = 6950 ### Stand-alone PV capital cost (USD/kW) for household systems over 1 kW
The cells below contain additional technology specifications
coordinate_units = 1000  # 1000 if coordinates are in m, 1 if coordinates are in km
discount_rate = 0.08  # E.g. 0.08 means a discount rate of 8%

# Transmission and distribution costs
hv_line_capacity = 69  # kV
hv_line_cost = 53000  # USD/km
mv_line_cost = 7000  # USD/km
mv_line_capacity = 50  # kV
mv_line_max_length = 50  # km
mv_increase_rate = 0.1
max_mv_line_dist = 50  # km
MV_line_amperage_limit = 8  # Ampere (A)
lv_line_capacity = 0.24  # kV
lv_line_max_length = 0.8  # km
lv_line_cost = 4250  # USD/km
service_Transf_type = 50  # kVa
service_Transf_cost = 4250  # $/unit
max_nodes_per_serv_trans = 300  # maximum number of nodes served by each service transformer
hv_lv_transformer_cost = 25000  # USD/unit
hv_mv_transformer_cost = 25000  # USD/unit
mv_lv_transformer_cost = 10000  # USD/unit
mv_mv_transformer_cost = 10000  # USD/unit

# Centralized grid costs
grid_calc = Technology(om_of_td_lines=0.1,
                       distribution_losses=grid_losses,
                       connection_cost_per_hh=150,
                       base_to_peak_load_ratio=base_to_peak,
                       capacity_factor=1,
                       tech_life=30,
                       grid_capacity_investment=grid_power_plants_capital_cost,
                       grid_price=grid_generation_cost)

# Mini-grid hydro costs
mg_hydro_calc = Technology(om_of_td_lines=0.03,
                           distribution_losses=0.05,
                           connection_cost_per_hh=100,
                           base_to_peak_load_ratio=0.85,
                           capacity_factor=0.5,
                           tech_life=30,
                           capital_cost=mg_hydro_capital_cost,
                           om_costs=0.02)

# Mini-grid wind costs
mg_wind_calc = Technology(om_of_td_lines=0.03,
                          distribution_losses=0.05,
                          connection_cost_per_hh=100,
                          base_to_peak_load_ratio=0.85,
                          capital_cost=mg_wind_capital_cost,
                          om_costs=0.02,
                          tech_life=20)

# Mini-grid PV costs
mg_pv_calc = Technology(om_of_td_lines=0.03,
                        distribution_losses=0.05,
                        connection_cost_per_hh=100,
                        base_to_peak_load_ratio=0.85,
                        tech_life=20,
                        om_costs=0.02,
                        capital_cost=mg_pv_capital_cost * pv_adjustment_factor)

# Stand-alone PV costs
sa_pv_calc = Technology(base_to_peak_load_ratio=0.9,
                        tech_life=15,
                        om_costs=0.02,
                        capital_cost={0.020: sa_pv_capital_cost_1 * pv_adjustment_factor,
                                      0.050: sa_pv_capital_cost_2 * pv_adjustment_factor,
                                      0.100: sa_pv_capital_cost_3 * pv_adjustment_factor,
                                      1: sa_pv_capital_cost_4 * pv_adjustment_factor,
                                      5: sa_pv_capital_cost_5 * pv_adjustment_factor},
                        standalone=True)

# Mini-grid diesel costs
mg_diesel_calc = Technology(om_of_td_lines=0.03,
                            distribution_losses=0.05,
                            connection_cost_per_hh=100,
                            base_to_peak_load_ratio=0.85,
                            capacity_factor=0.7,
                            tech_life=15,
                            om_costs=0.1,
                            efficiency=0.33,
                            capital_cost=mg_diesel_capital_cost,
                            diesel_price=diesel_price,
                            diesel_truck_consumption=33.7,
                            diesel_truck_volume=15000)

# Stand-alone diesel costs
sa_diesel_calc = Technology(base_to_peak_load_ratio=0.9,
                            capacity_factor=0.7,
                            tech_life=10,
                            om_costs=0.1,
                            capital_cost=sa_diesel_capital_cost,
                            diesel_price=diesel_price,
                            standalone=True,
                            efficiency=0.28,
                            diesel_truck_consumption=14,
                            diesel_truck_volume=300)
5. GIS data import and processing OnSSET is a GIS based tool and its proper function depends heavily on the diligent preparation and calibration of the necessary geospatial data. Documentation on GIS processing in regards to OnSSET can be found <a href="http://onsset-manual.readthedocs.io/en/latest/data_acquisition.html" target="_blank">here</a>. The following cell reads the CSV-file containing the extracted GIS data for the country chosen in the previous section, and displays a snap-shot of some of the data.
yearsofanalysis = [intermediate_year, end_year]

onsseter.condition_df()
onsseter.grid_penalties()
onsseter.calc_wind_cfs()

onsseter.calibrate_pop_and_urban(pop_start_year, end_year_pop, end_year_pop, urban_ratio_start_year,
                                 urban_ratio_end_year, start_year, end_year, intermediate_year)

eleclimits = {intermediate_year: intermediate_electrification_target,
              end_year: electrification_rate_target}
time_steps = {intermediate_year: intermediate_year - start_year,
              end_year: end_year - intermediate_year}

display(Markdown('#### The csv file has been imported correctly. Here is a preview:'))
display(onsseter.df[['Country', 'Pop', 'NightLights', 'TravelHours', 'GHI',
                     'WindVel', 'Hydropower', 'HydropowerDist']].sample(7))
Calibration of currently electrified settlements The model calibrates which settlements are likely to be electrified in the start year, to match the national statistical values defined above. A settlement is considered to be electrified if it meets all of the following conditions: - Has more night-time lights than the defined threshold (this is set to 0 by default) - Is closer to the existing grid network than the distance limit - Has more population than the threshold First, define the threshold limits. Then run the calibration and check if the results seem okay. Else, redefine these thresholds and run again.
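The three conditions combine as a simple boolean test per settlement. A minimal sketch with hypothetical field names (the actual calibration in elec_current_and_future additionally matches the national statistics defined above):

```python
# Threshold defaults, matching the calibration cell below
min_night_lights = 0
min_pop = 0
max_grid_dist_km = 2

def is_electrified(s):
    """All three conditions must hold for a settlement to be flagged
    as electrified in the start year. 's' is a dict with hypothetical keys."""
    return (s['night_lights'] > min_night_lights and
            s['grid_dist_km'] < max_grid_dist_km and
            s['pop'] > min_pop)

print(is_electrified({'night_lights': 3.2, 'grid_dist_km': 0.5, 'pop': 2000}))  # True
```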
min_night_lights = 0 ### 0 indicates no night light, while any number above refers to the night-lights intensity
min_pop = 0 ### Settlement population above which we can assume that it could be electrified
max_service_transformer_distance = 2 ### Distance in km from the existing grid network below which we can assume a settlement could be electrified
max_mv_line_distance = 2
max_hv_line_distance = 25

Technology.set_default_values(base_year=start_year,
                              start_year=start_year,
                              end_year=end_year,
                              discount_rate=discount_rate,
                              HV_line_type=hv_line_capacity,
                              HV_line_cost=hv_line_cost,
                              MV_line_type=mv_line_capacity,
                              MV_line_amperage_limit=MV_line_amperage_limit,
                              MV_line_cost=mv_line_cost,
                              LV_line_type=lv_line_capacity,
                              LV_line_cost=lv_line_cost,
                              LV_line_max_length=lv_line_max_length,
                              service_Transf_type=service_Transf_type,
                              service_Transf_cost=service_Transf_cost,
                              max_nodes_per_serv_trans=max_nodes_per_serv_trans,
                              MV_LV_sub_station_cost=mv_lv_transformer_cost,
                              MV_MV_sub_station_cost=mv_mv_transformer_cost,
                              HV_LV_sub_station_cost=hv_lv_transformer_cost,
                              HV_MV_sub_station_cost=hv_mv_transformer_cost)

elec_modelled, urban_internal_elec_ratio, rural_internal_elec_ratio = \
    onsseter.elec_current_and_future(elec_ratio_start_year, urban_elec_ratio, rural_elec_ratio,
                                     pop_start_year, start_year,
                                     min_night_lights=min_night_lights,
                                     min_pop=min_pop,
                                     max_transformer_dist=max_service_transformer_distance,
                                     max_mv_dist=max_mv_line_distance,
                                     max_hv_dist=max_hv_line_distance)

onsseter.grid_reach_estimate(start_year, gridspeed=9999)
The figure below shows the results of the calibration. Settlements in blue are considered to be (at least partly) electrified already in the start year of the analysis, while settlements in yellow are yet to be electrified. Re-running the calibration step with different initial values may change the map below.
from matplotlib import pyplot as plt

colors = ['#73B2FF', '#EDD100', '#EDA800', '#1F6600', '#98E600', '#70A800', '#1FA800']

plt.figure(figsize=(9, 9))
plt.plot(onsseter.df.loc[onsseter.df[SET_ELEC_CURRENT] == 0, SET_X_DEG],
         onsseter.df.loc[onsseter.df[SET_ELEC_CURRENT] == 0, SET_Y_DEG], 'y,')
plt.plot(onsseter.df.loc[onsseter.df[SET_ELEC_CURRENT] == 1, SET_X_DEG],
         onsseter.df.loc[onsseter.df[SET_ELEC_CURRENT] == 1, SET_Y_DEG], 'b,')

if onsseter.df[SET_X_DEG].max() - onsseter.df[SET_X_DEG].min() > \
        onsseter.df[SET_Y_DEG].max() - onsseter.df[SET_Y_DEG].min():
    plt.xlim(onsseter.df[SET_X_DEG].min() - 1, onsseter.df[SET_X_DEG].max() + 1)
    plt.ylim((onsseter.df[SET_Y_DEG].min() + onsseter.df[SET_Y_DEG].max()) / 2
             - 0.5 * abs(onsseter.df[SET_X_DEG].max() - onsseter.df[SET_X_DEG].min()) - 1,
             (onsseter.df[SET_Y_DEG].min() + onsseter.df[SET_Y_DEG].max()) / 2
             + 0.5 * abs(onsseter.df[SET_X_DEG].max() - onsseter.df[SET_X_DEG].min()) + 1)
else:
    plt.xlim((onsseter.df[SET_X_DEG].min() + onsseter.df[SET_X_DEG].max()) / 2
             - 0.5 * abs(onsseter.df[SET_Y_DEG].max() - onsseter.df[SET_Y_DEG].min()) - 1,
             (onsseter.df[SET_X_DEG].min() + onsseter.df[SET_X_DEG].max()) / 2
             + 0.5 * abs(onsseter.df[SET_Y_DEG].max() - onsseter.df[SET_Y_DEG].min()) + 1)
    plt.ylim(onsseter.df[SET_Y_DEG].min() - 1, onsseter.df[SET_Y_DEG].max() + 1)

plt.figure(figsize=(30, 30))
In some cases it can be of interest to filter out clusters with very low populations, e.g. to increase computational speed or to remove false positives in the data. Setting the pop_threshold variable below to a value larger than 0 will filter out all settlements below that threshold from the analysis.
pop_threshold = 0  # If you wish to remove low density population cells, enter a threshold above 0
onsseter.df = onsseter.df.loc[onsseter.df[SET_POP] > pop_threshold]
6. Define the demand This piece of code defines the target electricity demand in the region/country. Residential electricity demand is defined as kWh/household/year, while all other demands are defined as kWh/capita/year. Note that at the moment, all productive uses demands are set to 0 by default.
# Define the annual household electricity targets to choose from
tier_1 = 38.7  # 38.7 refers to kWh/household/year.
tier_2 = 219
tier_3 = 803
tier_4 = 2117
tier_5 = 2993

onsseter.prepare_wtf_tier_columns(num_people_per_hh_rural, num_people_per_hh_urban,
                                  tier_1, tier_2, tier_3, tier_4, tier_5)

onsseter.df[SET_EDU_DEMAND] = 0  # Demand for educational facilities (kWh/capita/year)
onsseter.df[SET_HEALTH_DEMAND] = 0  # Demand for health facilities (kWh/capita/year)
onsseter.df[SET_COMMERCIAL_DEMAND] = 0  # Demand for commercial activities (kWh/capita/year)
onsseter.df[SET_AGRI_DEMAND] = 0  # Demand for agricultural activities (kWh/capita/year)

productive_demand = 0  # 1 if productive demand is defined and should be included, else 0
7. Start a scenario run, which calculates and compares technology costs for every settlement in the country Based on the previous calculation, this piece of code identifies the LCoE that every off-grid technology can provide for each single populated settlement of the selected country. The cell then takes all the currently grid-connected points in the country and looks at the points within a certain distance from them, to see if it is more economical to connect them to the grid, or to use one of the off-grid technologies calculated above. Once more points are connected to the grid, the process is repeated, so that new points close to those points might also be connected. This is repeated until there are no new points to connect to the grid.
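In pseudocode terms, the iterative extension described above can be sketched as follows (a sketch, not the run_elec implementation; neighbours_within and grid_is_cheaper are hypothetical helpers standing in for the distance and LCoE comparisons):

```python
def iterative_extension(settlements, neighbours_within, grid_is_cheaper):
    """Connect settlements to the grid while it beats the best off-grid option.

    settlements: dict id -> {'connected': bool}
    neighbours_within(sid): ids of settlements close enough to settlement sid
    grid_is_cheaper(sid): True if the grid LCoE beats the best off-grid LCoE
    """
    # Start from the points that are already grid-connected
    frontier = [sid for sid, s in settlements.items() if s['connected']]
    while frontier:
        new_frontier = []
        for sid in frontier:
            for nid in neighbours_within(sid):
                if not settlements[nid]['connected'] and grid_is_cheaper(nid):
                    settlements[nid]['connected'] = True
                    new_frontier.append(nid)  # newly connected points extend the reach
        frontier = new_frontier  # repeat until no new points are connected
    return settlements

# Toy chain 0-1-2-3: start with 0 connected; grid is cheaper for 1 and 2 only
s = {0: {'connected': True}, 1: {'connected': False},
     2: {'connected': False}, 3: {'connected': False}}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cheap = {0: True, 1: True, 2: True, 3: False}
out = iterative_extension(s, lambda i: adj[i], lambda i: cheap[i])
# settlements 1 and 2 get connected; 3 stays off-grid
```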
onsseter.current_mv_line_dist()

for year in yearsofanalysis:
    end_year_pop = 1
    eleclimit = eleclimits[year]
    time_step = time_steps[year]

    grid_cap_gen_limit = time_step * annual_grid_cap_gen_limit * 1000
    grid_connect_limit = time_step * annual_new_grid_connections_limit * 1000

    onsseter.set_scenario_variables(year, num_people_per_hh_rural, num_people_per_hh_urban, time_step,
                                    start_year, urban_elec_ratio, rural_elec_ratio, urban_target_tier,
                                    rural_target_tier, end_year_pop, productive_demand)

    onsseter.calculate_off_grid_lcoes(mg_hydro_calc, mg_wind_calc, mg_pv_calc, sa_pv_calc,
                                      mg_diesel_calc, sa_diesel_calc, 0, 0, 0, 0, 0,
                                      year, start_year, end_year, time_step, diesel_techs=diesel_techs)

    onsseter.pre_electrification(grid_calc, grid_generation_cost, year, time_step, start_year)

    onsseter.run_elec(grid_calc, max_mv_line_dist, year, start_year, end_year, time_step,
                      grid_cap_gen_limit, grid_connect_limit, auto_intensification, prioritization)

    onsseter.results_columns(mg_hydro_calc, mg_wind_calc, mg_pv_calc, sa_pv_calc,
                             mg_diesel_calc, sa_diesel_calc, grid_calc, 0, 0, 0, 0, 0, year)

    onsseter.calculate_investments(mg_hydro_calc, mg_wind_calc, mg_pv_calc, sa_pv_calc,
                                   mg_diesel_calc, sa_diesel_calc, grid_calc, 0, 0, 0, 0, 0,
                                   year, end_year, time_step)

    onsseter.apply_limitations(eleclimit, year, time_step, prioritization, auto_intensification)

    onsseter.final_decision(mg_hydro_calc, mg_wind_calc, mg_pv_calc, sa_pv_calc,
                            mg_diesel_calc, sa_diesel_calc, grid_calc, 0, 0, 0, 0, 0,
                            year, end_year, time_step)
8. Results, Summaries and Visualization With all the calculations and grid-extensions complete, this block gets the final results on which technology was chosen for each point, how much capacity needs to be installed and what it will cost. Then the summaries, plots and maps are generated.
elements = []
for year in yearsofanalysis:
    elements.append("Population{}".format(year))
    elements.append("NewConnections{}".format(year))
    elements.append("Capacity{}".format(year))
    elements.append("Investment{}".format(year))

techs = ["Grid", "SA_Diesel", "SA_PV", "MG_Diesel", "MG_PV", "MG_Wind", "MG_Hydro"]

sumtechs = []
for year in yearsofanalysis:
    sumtechs.extend(["Population{}".format(year) + t for t in techs])
    sumtechs.extend(["NewConnections{}".format(year) + t for t in techs])
    sumtechs.extend(["Capacity{}".format(year) + t for t in techs])
    sumtechs.extend(["Investment{}".format(year) + t for t in techs])

summary = pd.Series(index=sumtechs, name='country')

for year in yearsofanalysis:
    for t in techs:
        summary.loc["Population{}".format(year) + t] = onsseter.df.loc[
            (onsseter.df[SET_MIN_OVERALL + '{}'.format(year)] == t + '{}'.format(year)),
            SET_POP + '{}'.format(year)].sum()
        summary.loc["NewConnections{}".format(year) + t] = onsseter.df.loc[
            (onsseter.df[SET_MIN_OVERALL + '{}'.format(year)] == t + '{}'.format(year)) &
            (onsseter.df[SET_ELEC_FINAL_CODE + '{}'.format(year)] < 99),
            SET_NEW_CONNECTIONS + '{}'.format(year)].sum()
        summary.loc["Capacity{}".format(year) + t] = onsseter.df.loc[
            (onsseter.df[SET_MIN_OVERALL + '{}'.format(year)] == t + '{}'.format(year)) &
            (onsseter.df[SET_ELEC_FINAL_CODE + '{}'.format(year)] < 99),
            SET_NEW_CAPACITY + '{}'.format(year)].sum() / 1000
        summary.loc["Investment{}".format(year) + t] = onsseter.df.loc[
            (onsseter.df[SET_MIN_OVERALL + '{}'.format(year)] == t + '{}'.format(year)) &
            (onsseter.df[SET_ELEC_FINAL_CODE + '{}'.format(year)] < 99),
            SET_INVESTMENT_COST + '{}'.format(year)].sum()

index = techs + ['Total']
columns = []
for year in yearsofanalysis:
    columns.append("Population{}".format(year))
    columns.append("NewConnections{}".format(year))
    columns.append("Capacity{} (MW)".format(year))
    columns.append("Investment{} (million USD)".format(year))

summary_table = pd.DataFrame(index=index, columns=columns)
summary_table[columns[0]] = summary.iloc[0:7].astype(int).tolist() + [int(summary.iloc[0:7].sum())]
summary_table[columns[1]] = summary.iloc[7:14].astype(int).tolist() + [int(summary.iloc[7:14].sum())]
summary_table[columns[2]] = summary.iloc[14:21].astype(int).tolist() + [int(summary.iloc[14:21].sum())]
summary_table[columns[3]] = [round(x / 1e4) / 1e2 for x in summary.iloc[21:28].astype(float).tolist()] + \
                            [round(summary.iloc[21:28].sum() / 1e4) / 1e2]
summary_table[columns[4]] = summary.iloc[28:35].astype(int).tolist() + [int(summary.iloc[28:35].sum())]
summary_table[columns[5]] = summary.iloc[35:42].astype(int).tolist() + [int(summary.iloc[35:42].sum())]
summary_table[columns[6]] = summary.iloc[42:49].astype(int).tolist() + [int(summary.iloc[42:49].sum())]
summary_table[columns[7]] = [round(x / 1e4) / 1e2 for x in summary.iloc[49:56].astype(float).tolist()] + \
                            [round(summary.iloc[49:56].sum() / 1e4) / 1e2]

display(Markdown('### Summary \n These are the summarized results for full electrification of the selected country by the final year'))
summary_table

import matplotlib.pylab as plt
import seaborn as sns

colors = ['#73B2FF', '#EDD100', '#EDA800', '#1F6600', '#98E600', '#70A800', '#1FA800']
techs_colors = dict(zip(techs, colors))

summary_plot = summary_table.drop(labels='Total', axis=0)

fig_size = [15, 15]
font_size = 10
plt.rcParams["figure.figsize"] = fig_size

f, axarr = plt.subplots(2, 2)

sns.barplot(x=summary_plot.index.tolist(), y=columns[4], data=summary_plot, ax=axarr[0, 0], palette=colors)
axarr[0, 0].set_ylabel(columns[4], fontsize=2 * font_size)
axarr[0, 0].tick_params(labelsize=font_size)
sns.barplot(x=summary_plot.index.tolist(), y=columns[5], data=summary_plot, ax=axarr[0, 1], palette=colors)
axarr[0, 1].set_ylabel(columns[5], fontsize=2 * font_size)
axarr[0, 1].tick_params(labelsize=font_size)
sns.barplot(x=summary_plot.index.tolist(), y=columns[6], data=summary_plot, ax=axarr[1, 0], palette=colors)
axarr[1, 0].set_ylabel(columns[6], fontsize=2 * font_size)
axarr[1, 0].tick_params(labelsize=font_size)
sns.barplot(x=summary_plot.index.tolist(), y=columns[7], data=summary_plot, ax=axarr[1, 1], palette=colors)
axarr[1, 1].set_ylabel(columns[7], fontsize=2 * font_size)
axarr[1, 1].tick_params(labelsize=font_size)

from matplotlib import pyplot as plt

plt.figure(figsize=(9, 9))
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 3, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 3, SET_Y_DEG],
         color='#EDA800', marker=',', linestyle='none')
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 2, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 2, SET_Y_DEG],
         color='#EDD100', marker=',', linestyle='none')
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 4, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 4, SET_Y_DEG],
         color='#1F6600', marker=',', linestyle='none')
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 5, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 5, SET_Y_DEG],
         color='#98E600', marker=',', linestyle='none')
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 6, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 6, SET_Y_DEG],
         color='#70A800', marker=',', linestyle='none')
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 7, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 7, SET_Y_DEG],
         color='#1FA800', marker=',', linestyle='none')
plt.plot(onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 1, SET_X_DEG],
         onsseter.df.loc[onsseter.df['FinalElecCode{}'.format(end_year)] == 1, SET_Y_DEG],
         color='#73B2FF', marker=',', linestyle='none')

if onsseter.df[SET_X_DEG].max() - onsseter.df[SET_X_DEG].min() > \
        onsseter.df[SET_Y_DEG].max() - onsseter.df[SET_Y_DEG].min():
    plt.xlim(onsseter.df[SET_X_DEG].min() - 1, onsseter.df[SET_X_DEG].max() + 1)
    plt.ylim((onsseter.df[SET_Y_DEG].min() + onsseter.df[SET_Y_DEG].max()) / 2
             - 0.5 * abs(onsseter.df[SET_X_DEG].max() - onsseter.df[SET_X_DEG].min()) - 1,
             (onsseter.df[SET_Y_DEG].min() + onsseter.df[SET_Y_DEG].max()) / 2
             + 0.5 * abs(onsseter.df[SET_X_DEG].max() - onsseter.df[SET_X_DEG].min()) + 1)
else:
    plt.xlim((onsseter.df[SET_X_DEG].min() + onsseter.df[SET_X_DEG].max()) / 2
             - 0.5 * abs(onsseter.df[SET_Y_DEG].max() - onsseter.df[SET_Y_DEG].min()) - 1,
             (onsseter.df[SET_X_DEG].min() + onsseter.df[SET_X_DEG].max()) / 2
             + 0.5 * abs(onsseter.df[SET_Y_DEG].max() - onsseter.df[SET_Y_DEG].min()) + 1)
    plt.ylim(onsseter.df[SET_Y_DEG].min() - 1, onsseter.df[SET_Y_DEG].max() + 1)

plt.figure(figsize=(30, 30))
9. Exporting results This code generates three csv files: - one containing all the results for the scenario created - one containing the summary for the scenario created - one containing some of the key input variables of the scenario Before we proceed, please write the scenario_name in the first cell below. Then move on to the next cell and run it to browse to the directory where you want to save your results. Sample files are located at .\gep-onsset\sample_output. **Note:** if you do not change the scenario name, the previous output files will be overwritten.
import os  # needed for os.path.join below

scenario_name = "scenario_5"

list1 = [('Start_year', start_year, '', '', ''),
         ('End_year', end_year, '', '', ''),
         ('End year electrification rate target', electrification_rate_target, '', '', ''),
         ('Intermediate target year', intermediate_year, '', '', ''),
         ('Intermediate electrification rate target', intermediate_electrification_target, '', '', ''),
         ('PV cost adjustment factor', pv_adjustment_factor, '', '', ''),
         ('Urban target tier', urban_target_tier, '', '', ''),
         ('Rural target tier', rural_target_tier, '', '', ''),
         ('Prioritization', prioritization, '', '', '1 = baseline, 2 = intensification'),
         ('Auto intensification distance', auto_intensification, '', '', 'Buffer distance (km) for automatic intensification if choosing prioritization 1'),
         ('coordinate_units', coordinate_units, '', '', '1000 if coordinates are in m, 1 if coordinates are in km'),
         ('discount_rate', discount_rate, '', '', ''),
         ('pop_threshold', pop_threshold, '', '', ''),
         ('pop_start_year', pop_start_year, '', '', 'the population in the base year (e.g. 2016)'),
         ('pop_end_year', end_year_pop, '', '', 'the projected population in the end year (e.g. 2030)'),
         ('urban_ratio_start_year', urban_ratio_start_year, '', '', 'the urban population ratio in the base year (e.g. 2016)'),
         ('urban_ratio_end_year', urban_ratio_end_year, '', '', 'the urban population ratio in the end year (e.g. 2030)'),
         ('num_people_per_hh_urban', num_people_per_hh_urban, '', '', 'the number of people per household expected in the end year (e.g. 2030)'),
         ('num_people_per_hh_rural', num_people_per_hh_rural, '', '', 'the number of people per household expected in the end year (e.g. 2030)'),
         ('elec_ratio_start_year', elec_ratio_start_year, '', '', 'the electrification rate in the base year (e.g. 2016)'),
         ('urban_elec_ratio', urban_elec_ratio, '', '', 'urban electrification rate in the base year (e.g. 2016)'),
         ('rural_elec_ratio', rural_elec_ratio, '', '', 'rural electrification rate in the base year (e.g. 2016)'),
         ('grid_generation_cost', grid_generation_cost, '', '', 'The grid electricity generation cost in USD/kWh as expected in the end year of the analysis'),
         ('grid_power_plants_capital_cost', grid_power_plants_capital_cost, '', '', 'The cost in USD/kW for capacity upgrades of the grid-connected power plants'),
         ('grid_losses', grid_losses, '', '', 'The fraction of electricity lost in transmission and distribution (percentage)'),
         ('base_to_peak', base_to_peak, '', '', 'The ratio of base grid demand to peak demand (percentage)'),
         ('existing_grid_cost_ratio', existing_grid_cost_ratio, '', '', 'The additional cost per round of electrification (percentage)'),
         ('diesel_price', diesel_price, '', '', 'This is the diesel price in USD/liter as expected in the end year of the analysis'),
         ('sa_diesel_capital_cost', sa_diesel_capital_cost, '', '', 'Stand-alone Diesel capital cost (USD/kW) as expected in the years of the analysis'),
         ('mg_diesel_capital_cost', mg_diesel_capital_cost, '', '', 'Mini-grid Diesel capital cost (USD/kW) as expected in the years of the analysis'),
         ('mg_pv_capital_cost', mg_pv_capital_cost, '', '', 'Mini-grid PV capital cost (USD/kW) as expected in the years of the analysis'),
         ('mg_wind_capital_cost', mg_wind_capital_cost, '', '', 'Mini-grid Wind capital cost (USD/kW) as expected in the years of the analysis'),
         ('mg_hydro_capital_cost', mg_hydro_capital_cost, '', '', 'Mini-grid Hydro capital cost (USD/kW) as expected in the years of the analysis'),
         ('sa_pv_capital_cost_1', sa_pv_capital_cost_1, '', '', 'Stand-alone PV capital cost (USD/kW) for household systems under 20 W'),
         ('sa_pv_capital_cost_2', sa_pv_capital_cost_2, '', '', 'Stand-alone PV capital cost (USD/kW) for household systems between 21-50 W'),
         ('sa_pv_capital_cost_3', sa_pv_capital_cost_3, '', '', 'Stand-alone PV capital cost (USD/kW) for household systems between 51-100 W'),
         ('sa_pv_capital_cost_4', sa_pv_capital_cost_4, '', '', 'Stand-alone PV capital cost (USD/kW) for household systems between 101-200 W'),
         ('sa_pv_capital_cost_5', sa_pv_capital_cost_5, '', '', 'Stand-alone PV capital cost (USD/kW) for household systems over 200 W'),
         ('mv_line_cost', mv_line_cost, '', '', 'Cost of MV lines in USD/km'),
         ('lv_line_cost', lv_line_cost, '', '', 'Cost of LV lines in USD/km'),
         ('mv_line_capacity', mv_line_capacity, '', '', 'Capacity of MV lines in kW/line'),
         ('lv_line_capacity', lv_line_capacity, '', '', 'Capacity of LV lines in kW/line'),
         ('lv_line_max_length', lv_line_max_length, '', '', 'Maximum length of LV lines (km)'),
         ('hv_line_cost', hv_line_cost, '', '', 'Cost of HV lines in USD/km'),
         ('mv_line_max_length', mv_line_max_length, '', '', 'Maximum length of MV lines (km)'),
         ('hv_lv_transformer_cost', hv_lv_transformer_cost, '', '', 'Cost of HV/MV transformer (USD/unit)'),
         ('mv_increase_rate', mv_increase_rate, '', '', 'percentage'),
         ('max_grid_extension_dist', max_mv_line_dist, '', '', 'Maximum distance that the grid may be extended by means of MV lines'),
         ('annual_new_grid_connections_limit', annual_new_grid_connections_limit, '', '', 'This is the maximum amount of new households that can be connected to the grid in one year (thousands)'),
         ('grid_capacity_limit', annual_grid_cap_gen_limit, '', '', 'This is the maximum generation capacity that can be added to the grid in one year (MW)'),
         ('GIS data: Administrative boundaries', '', '', '', 'Delineates the boundaries of the analysis.'),
         ('GIS data: DEM', '', '', '', 'Filled DEM (elevation) maps are used in a number of processes in the analysis (energy potentials, restriction zones, grid extension suitability map etc.).'),
         ('GIS data: Hydropower', '', '', '', 'Points showing potential mini/small hydropower potential. Provides power availability in each identified point.'),
         ('GIS data: Land Cover', '', '', '', 'Land cover maps are used in a number of processes in the analysis (energy potentials, restriction zones, grid extension suitability map etc.).'),
         ('GIS data: Night-time Lights', '', '', '', 'Dataset used to identify and spatially calibrate the currently electrified/non-electrified population.'),
         ('GIS data: Population', '', '', '', 'Spatial identification and quantification of the current (base year) population. This dataset sets the basis of the OnSSET analysis as it is directly connected with the electricity demand and the assignment of energy access goals.'),
         ('GIS data: Roads', '', '', '', 'Current road infrastructure is used in order to specify grid extension suitability.'),
         ('GIS data: Solar GHI', '', '', '', 'Provides information about the Global Horizontal Irradiation (kWh/m2/year) over an area. This is later used to identify the availability/suitability of photovoltaic systems.'),
         ('GIS data: Substations', '', '', '', 'Current substation infrastructure is used in order to specify grid extension suitability.'),
         ('GIS data: Existing grid', '', '', '', 'Current grid network'),
         ('GIS data: Planned grid', '', '', '', 'Planned/committed grid network extensions'),
         ('GIS data: Travel-time', '', '', '', 'Visualizes spatially the travel time required to reach from any individual cell to the closest town with population more than 50,000 people.'),
         ('GIS data: Wind velocity', '', '', '', 'Provides information about the wind velocity (m/sec) over an area. This is later used to identify the availability/suitability of wind power (using capacity factors).')]

labels = ['Variable', 'Value', 'Source', 'Comments', 'Description']
df_variables = pd.DataFrame.from_records(list1, columns=labels)

messagebox.showinfo('OnSSET', 'Browse to the folder where you want to save the outputs')
output_dir = filedialog.askdirectory()

output_dir_variables = os.path.join(output_dir, '{}_Variables.csv'.format(scenario_name))
output_dir_results = os.path.join(output_dir, '{}_Results.csv'.format(scenario_name))
output_dir_summaries = os.path.join(output_dir, '{}_Summaries.csv'.format(scenario_name))

# Returning the result as a csv file
onsseter.df.to_csv(output_dir_results, index=False)

# Returning the summary as a csv file
summary_table.to_csv(output_dir_summaries, index=True)

# Returning the input variables as a csv file
df_variables.to_csv(output_dir_variables, index=False)
Generator.ipynb
KTH-dESA/PyOnSSET
mit
Multi-task recommenders <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/multitask"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/multitask.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/multitask.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/multitask.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In the basic retrieval tutorial we built a retrieval system using movie watches as positive interaction signals. In many applications, however, there are multiple rich sources of feedback to draw upon. For example, an e-commerce site may record user visits to product pages (abundant, but relatively low signal), image clicks, adding to cart, and, finally, purchases. It may even record post-purchase signals such as reviews and returns. Integrating all these different forms of feedback is critical to building systems that users love to use, and that do not optimize for any one metric at the expense of overall performance. In addition, building a joint model for multiple tasks may produce better results than building a number of task-specific models. This is especially true where some data is abundant (for example, clicks), and some data is sparse (purchases, returns, manual reviews). 
In those scenarios, a joint model may be able to use representations learned from the abundant task to improve its predictions on the sparse task via a phenomenon known as transfer learning. For example, this paper shows that a model predicting explicit user ratings from sparse user surveys can be substantially improved by adding an auxiliary task that uses abundant click log data. In this tutorial, we are going to build a multi-objective recommender for Movielens, using both implicit (movie watches) and explicit signals (ratings). Imports Let's first get our imports out of the way.
!pip install -q tensorflow-recommenders !pip install -q --upgrade tensorflow-datasets import os import pprint import tempfile from typing import Dict, Text import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
A multi-task model

There are two critical parts to multi-task recommenders:

1. They optimize for two or more objectives, and so have two or more losses.
2. They share variables between the tasks, allowing for transfer learning.

In this tutorial, we will define our models as before, but instead of having a single task, we will have two tasks: one that predicts ratings, and one that predicts movie watches.

The user and movie models are as before:

```python
user_model = tf.keras.Sequential([
  tf.keras.layers.StringLookup(
      vocabulary=unique_user_ids, mask_token=None),
  # We add 1 to account for the unknown token.
  tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])

movie_model = tf.keras.Sequential([
  tf.keras.layers.StringLookup(
      vocabulary=unique_movie_titles, mask_token=None),
  tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
```

However, now we will have two tasks. The first is the rating task:

```python
tfrs.tasks.Ranking(
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
```

Its goal is to predict the ratings as accurately as possible.

The second is the retrieval task:

```python
tfrs.tasks.Retrieval(
    metrics=tfrs.metrics.FactorizedTopK(
        candidates=movies.batch(128)
    )
)
```

As before, this task's goal is to predict which movies the user will or will not watch.

Putting it together

We put it all together in a model class. The new component here is that - since we have two tasks and two losses - we need to decide on how important each loss is. We can do this by giving each of the losses a weight, and treating these weights as hyperparameters. If we assign a large loss weight to the rating task, our model is going to focus on predicting ratings (but still use some information from the retrieval task); if we assign a large loss weight to the retrieval task, it will focus on retrieval instead.
class MovielensModel(tfrs.models.Model):

  def __init__(self, rating_weight: float, retrieval_weight: float) -> None:
    # We take the loss weights in the constructor: this allows us to instantiate
    # several model objects with different loss weights.

    super().__init__()

    embedding_dimension = 32

    # User and movie models.
    self.movie_model: tf.keras.layers.Layer = tf.keras.Sequential([
      tf.keras.layers.StringLookup(
        vocabulary=unique_movie_titles, mask_token=None),
      tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
    ])
    self.user_model: tf.keras.layers.Layer = tf.keras.Sequential([
      tf.keras.layers.StringLookup(
        vocabulary=unique_user_ids, mask_token=None),
      tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
    ])

    # A small model to take in user and movie embeddings and predict ratings.
    # We can make this as complicated as we want as long as we output a scalar
    # as our prediction.
    self.rating_model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

    # The tasks.
    self.rating_task: tf.keras.layers.Layer = tfrs.tasks.Ranking(
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.RootMeanSquaredError()],
    )
    self.retrieval_task: tf.keras.layers.Layer = tfrs.tasks.Retrieval(
        metrics=tfrs.metrics.FactorizedTopK(
            candidates=movies.batch(128).map(self.movie_model)
        )
    )

    # The loss weights.
    self.rating_weight = rating_weight
    self.retrieval_weight = retrieval_weight

  def call(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
    # We pick out the user features and pass them into the user model.
    user_embeddings = self.user_model(features["user_id"])
    # And pick out the movie features and pass them into the movie model.
    movie_embeddings = self.movie_model(features["movie_title"])

    return (
        user_embeddings,
        movie_embeddings,
        # We apply the multi-layered rating model to a concatenation of
        # user and movie embeddings.
        self.rating_model(
            tf.concat([user_embeddings, movie_embeddings], axis=1)
        ),
    )

  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:

    ratings = features.pop("user_rating")

    user_embeddings, movie_embeddings, rating_predictions = self(features)

    # We compute the loss for each task.
    rating_loss = self.rating_task(
        labels=ratings,
        predictions=rating_predictions,
    )
    retrieval_loss = self.retrieval_task(user_embeddings, movie_embeddings)

    # And combine them using the loss weights.
    return (self.rating_weight * rating_loss
            + self.retrieval_weight * retrieval_loss)
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
Rating-specialized model Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings.
model = MovielensModel(rating_weight=1.0, retrieval_weight=0.0) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) cached_train = train.shuffle(100_000).batch(8192).cache() cached_test = test.batch(4096).cache() model.fit(cached_train, epochs=3) metrics = model.evaluate(cached_test, return_dict=True) print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.") print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will or will not be watched: its top-100 retrieval accuracy is almost 4 times worse than that of a model trained solely to predict watches.

Retrieval-specialized model

Let's now try a model that focuses on retrieval only.
model = MovielensModel(rating_weight=0.0, retrieval_weight=1.0) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) model.fit(cached_train, epochs=3) metrics = model.evaluate(cached_test, return_dict=True) print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.") print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings. Joint model Let's now train a model that assigns positive weights to both tasks.
model = MovielensModel(rating_weight=1.0, retrieval_weight=1.0) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) model.fit(cached_train, epochs=3) metrics = model.evaluate(cached_test, return_dict=True) print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.") print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
The result is a model that performs roughly as well on both tasks as each specialized model.

Making predictions

We can use the trained multi-task model to get trained user and movie embeddings, as well as the predicted rating:
# The model's call() returns (user_embeddings, movie_embeddings, rating),
# so we unpack in that order.
trained_user_embeddings, trained_movie_embeddings, predicted_rating = model({
      "user_id": np.array(["42"]),
      "movie_title": np.array(["Dances with Wolves (1990)"])
  })
print("Predicted rating:")
print(predicted_rating)
docs/examples/multitask.ipynb
tensorflow/recommenders
apache-2.0
Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
from urllib.request import urlretrieve from os.path import isfile, isdir import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' if not isfile(dataset_filename): urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read()
embeddings/Skip-Gram_word2vec.ipynb
postBG/DL_project
mit
Subsampling

Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by

$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$

where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.

I'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.

Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word with the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
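Before implementing it over the whole corpus, it helps to see what the formula does numerically. A quick sketch (the frequencies here are made-up illustrations, not measured on text8):

```python
import math

def discard_prob(word_freq: float, threshold: float = 1e-5) -> float:
    """P(w_i) = 1 - sqrt(t / f(w_i)): the probability of dropping a word."""
    return 1 - math.sqrt(threshold / word_freq)

# A very frequent word like "the" (f ~ 0.05) is dropped almost every time it occurs...
p_the = discard_prob(0.05)   # ~0.986
# ...while a word at exactly the threshold frequency is never dropped.
p_rare = discard_prob(1e-5)  # 0.0
```

Note that for words rarer than the threshold the formula goes negative, which simply means "always keep".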
from collections import Counter
import random

word_counts = Counter(int_words)
word_counts.most_common(3)

threshold = 1e-5
total_count = len(int_words)
frequencies = {word: count / total_count for word, count in word_counts.items()}
# Compute the discard probability once per unique word, not once per token.
drop_prob = {word: 1 - np.sqrt(threshold / freq) for word, freq in frequencies.items()}
drop_prob[0]

## Your code here
# Keep each word with probability 1 - P(w_i)
train_words = [word for word in int_words if random.random() > drop_prob[word]]
embeddings/Skip-Gram_word2vec.ipynb
postBG/DL_project
mit
Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
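To build intuition for what the sampler does, here is a minimal uniform-sampling sketch. It is an illustration only: `tf.nn.sampled_softmax_loss` uses a log-uniform candidate sampler by default and handles accidental hits of the true class for you.

```python
import random

def sample_negatives(true_label: int, n_classes: int, n_sampled: int, rng=random):
    """Draw n_sampled distinct class indices uniformly, excluding the true label.

    Only these classes (plus the true one) get weight updates for this example,
    instead of all n_classes outputs of a full softmax.
    """
    negatives = set()
    while len(negatives) < n_sampled:
        candidate = rng.randrange(n_classes)
        if candidate != true_label:
            negatives.add(candidate)
    return sorted(negatives)

neg = sample_negatives(true_label=3, n_classes=10_000, n_sampled=100)
```

With a vocabulary of tens of thousands of words, updating 101 output rows instead of all of them is what makes training tractable.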
# Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1)) # create softmax weight matrix here softmax_b = tf.Variable(tf.zeros([n_vocab])) # create softmax biases here # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost)
embeddings/Skip-Gram_word2vec.ipynb
postBG/DL_project
mit
Structuring Code

- Don't repeat yourself (and others) to keep your programs small
- Structure code into functions and files. You can import functions and data from files (so-called modules):

```python
# File module/example.py
x = 'Example data'

def example_function():
    pass

if __name__ == '__main__':
    whatever()


# File in .
from module import example

example.x
```

Documentation

- Document why and not what (the what is stated by the code)
- Commit early and often to keep track of why you change things
- False (usually out-dated) documentation is worse than none
- Turn questions (yours or others') into documentation
- Write self-documenting code, e.g. (from real-life C++) change

```cpp
if ((j != k) and (d_type[d_lattice->d_cell_id[j]] == mesenchyme)
        and (d_type[d_lattice->d_cell_id[k]] == mesenchyme)
        and (dist < r_link)
        and (fabs(r.w/(d_X[d_lattice->d_cell_id[j]].w + d_X[d_lattice->d_cell_id[k]].w)) > 0.2)) {
    d_link[i].a = d_lattice->d_cell_id[j];
    d_link[i].b = d_lattice->d_cell_id[k];
}
```

to

```cpp
auto both_mesenchyme = (d_type[d_lattice->d_cell_id[j]] == mesenchyme)
    and (d_type[d_lattice->d_cell_id[k]] == mesenchyme);
auto along_w = fabs(r.w/(d_X[d_lattice->d_cell_id[j]].w + d_X[d_lattice->d_cell_id[k]].w)) > 0.2;
if (both_mesenchyme and (dist < r_link) and along_w) {
    d_link[i].a = d_lattice->d_cell_id[j];
    d_link[i].b = d_lattice->d_cell_id[k];
}
```

- Use docstrings:

```python
def documented_function():
    """This is a docstring"""
    pass
```

⚠ Turn documentation into code using docopt:

```python
"""A Docopt Example.

Usage:
    docoptest.py [flag] [--parameter <x>]
    docoptest.py (-h | --help)

Options:
    -h --help        Show this screen.
    --parameter <x>  Pass parameter.
"""

if __name__ == '__main__':
    from docopt import docopt

    args = docopt(__doc__)
    print(args)
```

Results in

```bash
$ python docoptest.py wrong usage
Usage:
    docoptest.py [flag] [--parameter <x>]
    docoptest.py (-h | --help)
$ python docoptest.py -h
A Docopt Example.

Usage:
    docoptest.py [flag] [--parameter <x>]
    docoptest.py (-h | --help)

Options:
    -h --help        Show this screen.
    --parameter <x>  Pass parameter.
$ python docoptest.py flag
{'--help': False,
 '--parameter': None,
 'flag': True}
```

Testing

Turn debugging print-statements into assert-statements
mass = -1 assert mass > 0, 'Mass cannot be negative!'
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Make sure your code does what it is supposed to do by testing it with simple examples where you know what to expect. Save those tests (e.g. in `if __name__ == '__main__'` of each module) to keep your code correct while changing parts.

⚠ Automate testing:

```python
import unittest

def square(x):
    return x*x

class TestSquare(unittest.TestCase):
    def test_square(self):
        self.assertEqual(square(2), 4)

if __name__ == '__main__':
    unittest.main()
```

⚠ Generate fuzzy tests using hypothesis:

```python
from hypothesis import given
from hypothesis.strategies import text, floats

@given(text(min_size=10), text(min_size=10), text(min_size=10))
def test_triangle_inequality(self, a, b, c):
    self.assertTrue(zipstance(a, c) <= zipstance(a, b) + zipstance(b, c))
```

⚠ Turn documentation into tests using doctest:

```python
def square(x):
    """Return the square of x.

    >>> square(2)
    0
    """
    return x*x

if __name__ == '__main__':
    import doctest
    doctest.testmod()
```

Will give

```bash
$ python doctestest.py
File "doctestest.py", line 4, in __main__.square
Failed example:
    square(2)
Expected:
    0
Got:
    4
1 items had failures:
   1 of   1 in __main__.square
Test Failed 1 failures.
```

⚠ Use mutation tests if you need to be sure that your code is correct. They replace pieces of code, e.g. `<` with `>`, to find out whether your tests cover everything.

Optimizing Performance

Quick Tests
%timeit [i**2 for i in range(100000)] %timeit np.arange(100000)**2
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Real-life Example

We tried to speed up the simulation of an N-body problem describing cells interacting in pairs via the spherical potential $U(r) = -(r - r_{min})(r - r_{max})^2$, where $r = |\vec x_j - \vec x_i|$. The resulting forces can be calculated as

```python
def forces(t, X, N):
    """Calculate forces from neighbouring cells"""
    for i, x in enumerate(X):
        r = X[N[i]] - x
        norm_r = np.minimum(np.linalg.norm(r, axis=1), r_max)
        norm_F = 2*(r_min - norm_r)*(norm_r - r_max) - (norm_r - r_max)**2
        F[i] = np.sum(r*(norm_F/norm_r)[:, None], axis=0)
    return F.ravel()
```

where N is a matrix giving the $k$ nearest neighbours.

Profiling

The cProfile module helps identify bottlenecks when running 'command':

```python
import cProfile

cProfile.run('command', 'nbodystats')
```

and pstats then allows you to analyse the data saved in nbodystats (in the actual code forces was wrapped into a class with __call__):
import pstats p = pstats.Stats('nbodystats') p.strip_dirs().sort_stats('cumulative').print_stats(10);
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
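For readers who want to reproduce this profiling workflow end-to-end without the N-body code, here is a self-contained sketch (the toy function is made up purely for illustration):

```python
import cProfile
import io
import pstats

def slow_square_sum(n):
    """A deliberately naive function to give the profiler something to measure."""
    return sum(i**2 for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_square_sum(100_000)
profiler.disable()

# Sort by cumulative time and show the five most expensive calls,
# just like the nbodystats analysis above.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).strip_dirs().sort_stats('cumulative').print_stats(5)
print(stream.getvalue())
```

The same `strip_dirs().sort_stats(...).print_stats(...)` chain works whether the stats come from a live `Profile` object or from a file written by `cProfile.run`.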
Vectorize, 7x

While the initial function was already using numpy and vectors, it still involves a for-loop that can be vectorized ... well, actually tensorized:

```python
def forces(t, X, N):
    r = X[N] - np.tile(X, (k, 1, 1)).transpose(1, 0, 2)
    norm_r = np.minimum(np.linalg.norm(r, axis=2), r_max)
    norm_F = 2*(r_min - norm_r)*(norm_r - r_max) - (norm_r - r_max)**2
    F = np.sum(r*(norm_F/norm_r)[:, None].transpose(0, 2, 1), axis=1)
    return F.ravel()
```

Re-use resources, 1.5x

Broadcasting gives just a "view" instead of the copy returned by np.tile:

```python
def forces(t, X, N):
    r = X[N] - X[:, None, :]
    norm_r = np.minimum(np.linalg.norm(r, axis=2), r_max)
    norm_F = 2*(r_min - norm_r)*(norm_r - r_max) - (norm_r - r_max)**2
    F = np.sum(r*(norm_F/norm_r)[:, None].transpose(0, 2, 1), axis=1)
    return F.ravel()
```

The standard inplace operators +=, -=, *=, and /= can speed up large numpy calculations a little. Pandas operations often have an inplace flag, e.g. df.reset_index(inplace=True). Sometimes allocating memory with np.empty or pd.DataFrame pays off. Make sure to use the fastest data type for each column of a pd.DataFrame.

⚠ Compile the calculation, 2x (3.5x on GTX 1060 for 10k cells)

Finally, code can be compiled in various ways. We choose Theano because it requires little rewriting (and can run code on the GPU as well):

```python
from theano import tensor as T
from theano import function

X = T.matrix('X', dtype='floatX')
N = T.imatrix('N')
r = X[N] - X[:, None, :]
norm_r = T.minimum(r.norm(2, axis=2), 1)
norm_F = 2*(0.5 - norm_r)*(norm_r - 1) - (norm_r - 1)**2
F = T.sum(r*(norm_F/norm_r).dimshuffle(0, 1, 'x'), axis=1)
f = function([X, N], F.ravel(), allow_input_downcast=True)

def forces(t, X, N):
    return f(X.reshape(-1, 3), N)
```

Aftermath
p = pstats.Stats('theanostats') p.strip_dirs().sort_stats('cumulative').print_stats(10);
7_BestPractices_Testing_Performance.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
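The np.tile-versus-broadcasting equivalence used in the optimization steps above is easy to verify on random data; the shapes below are arbitrary stand-ins for the real simulation:

```python
import numpy as np

n, k = 50, 4
rng = np.random.default_rng(0)
X = rng.random((n, 3))                 # n cells in 3D
N = rng.integers(0, n, size=(n, k))    # k neighbour indices per cell

# Version 1: materialize a full (n, k, 3) copy of X with np.tile.
tiled = X[N] - np.tile(X, (k, 1, 1)).transpose(1, 0, 2)

# Version 2: broadcasting only creates a view of X, no copy.
broadcast = X[N] - X[:, None, :]

assert np.array_equal(tiled, broadcast)
```

Both produce the pairwise displacement tensor of shape `(n, k, 3)`; the broadcasting version simply skips the intermediate allocation, which is where the 1.5x speed-up in the section above comes from.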
Let's grab some text

To start with, we need some text from which we'll try to extract named entities using various methods and libraries. There are several ways of doing this, e.g.:
1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable
2. load a text from one of the Latin corpora available via cltk (cf. this blog post)
3. or load it from Perseus by leveraging its Canonical Text Services API

Let's go for #3 :)

What's CTS?

CTS URNs stand for Canonical Text Service Uniform Resource Names. You can think of a CTS URN like a social security number for texts (or parts of texts). Here are some examples of CTS URNs with different levels of granularity:
- urn:cts:latinLit:phi0448 (Caesar)
- urn:cts:latinLit:phi0448.phi001 (Caesar's De Bello Gallico)
- urn:cts:latinLit:phi0448.phi001.perseus-lat2 (DBG, Latin edition)
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1 (DBG, Latin edition, book 1)
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1 (DBG, Latin edition, book 1, chapter 1, section 1)

How do I find out the CTS URN of a given author or text? The Perseus Catalog is your friend! (cf. e.g. http://catalog.perseus.org/catalog/urn:cts:latinLit:phi0448)

Querying a CTS API

The URN of the English edition of Caesar's De Bello Gallico is urn:cts:latinLit:phi0448.phi001.perseus-eng2.
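To make the URN hierarchy above concrete, here is a small parsing sketch. It is a simplified illustration only (it ignores subreferences and other details of the full CTS URN specification, and is not the resolver machinery used below):

```python
def parse_cts_urn(urn: str) -> dict:
    """Split a CTS URN into its main components."""
    parts = urn.split(':')
    work_parts = parts[3].split('.')
    return {
        'namespace': parts[2],                                      # e.g. latinLit
        'textgroup': work_parts[0],                                 # e.g. phi0448 (Caesar)
        'work': work_parts[1] if len(work_parts) > 1 else None,     # e.g. phi001
        'version': work_parts[2] if len(work_parts) > 2 else None,  # e.g. perseus-lat2
        'passage': parts[4] if len(parts) > 4 else None,            # e.g. 1.1.1
    }

parse_cts_urn('urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1')
```

Each extra component narrows the reference, from an author down to a single section of a specific edition.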
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-eng2:1"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
With this information, we can query a CTS API and get some information about this text. For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
# We set up a resolver which communicates with an API available in Leipzig resolver = HttpCTSResolver(CTS("http://cts.dh.uni-leipzig.de/api/cts/"))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
But we can also query the same API and get back the text of a specific text section, for example the entire book 1. To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
# We require some metadata information textMetadata = resolver.getMetadata("urn:cts:latinLit:phi0448.phi001.perseus-eng2") # Texts in CTS Metadata have one interesting property : its citation scheme. # Citation are embedded objects that carries information about how a text can be quoted, what depth it has print([citation.name for citation in textMetadata.citation]) my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-eng2:1"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
The text that we have just fetched by using a programming interface (API) can also be viewed in the browser, or even imported as an iframe into this notebook!
from IPython.display import IFrame IFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-eng2/1', width=1000, height=350)
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
NER with CLTK ( = Classical Language ToolKit ) The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation). The current implementation (as of version 0.1.47) uses a lookup-based method. For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities: - list of Latin proper nouns: https://github.com/cltk/latin_proper_names_cltk - list of Greek proper nouns: https://github.com/cltk/greek_proper_names_cltk Let's run CLTK's tagger (it takes a moment):
%%time tagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
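The lookup approach described above can be sketched in a few lines. This is a toy re-implementation for illustration only: the CLTK tagger works from much larger proper-noun lists fetched from its corpora, while the output format below (1-tuples for plain words, 2-tuples for entities) mirrors what `tag_ner` returns:

```python
def lookup_ner(tokens, proper_nouns):
    """Tag each token as an 'Entity' if it appears in a list of proper nouns."""
    proper_nouns = set(proper_nouns)  # set membership is O(1) per token
    return [(token, 'Entity') if token in proper_nouns else (token,)
            for token in tokens]

# A hypothetical, tiny name list standing in for CLTK's Latin proper-noun list.
names = ['Caesar', 'Gallia', 'Helvetii']
lookup_ner('Gallia est omnis divisa'.split(), names)
# [('Gallia', 'Entity'), ('est',), ('omnis',), ('divisa',)]
```

The obvious limitation of pure lookup is that it cannot disambiguate (every occurrence of a listed string is tagged, regardless of context) and it misses inflected forms not present in the list.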
NER with NLTK (= Natural Language ToolKit)
stanford_model_english = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/english.muc.7class.distsim.crf.ser.gz"
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
ner_tagger = StanfordNERTagger(stanford_model_english)
tagged_text_nltk = ner_tagger.tag(de_bello_gallico_book1.split(" "))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Let's have a look at the output
tagged_text_nltk[:20] # Wrap up
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-LV.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0