For more complicated models or fits it may be better to use the `estimate_line_parameters` function instead of manually creating e.g. a `Gaussian1D` model and setting the center. An example of this pattern is given below. Note that we provided a default `Gaussian1D` model to the `estimate_line_parameters` function abov...
halpha_line_estimates = []
for line in halpha_lines:
    line_region = SpectralRegion(line['line_center'] - 3*u.angstrom,
                                 line['line_center'] + 3*u.angstrom)
    line_spectrum = extract_region(sdss_halpha_contsub, line_region)
    line_estimate = fitting.estimate_line_parameters(line_spectrum, models.Gaussian1D())
    halpha_line_estimates.append(line_estimate)
_____no_output_____
BSD-3-Clause
aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb
astropy/astropy-workshops
Keras simple CNN (2020/11/11, Ryutaro Hashimoto)

Table of Contents
1 Setup
1.1 Launching a Sagemaker session
1.2 Prepare the dataset for training
2 Train the model
2.1 Specifying the Instance Type
2.2 Setting for hyperparameters
2.3 Metrics
2.4 ...
import sagemaker

sagemaker_session = sagemaker.Session()
role = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'  # ← your IAM role ARN
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Prepare the dataset for training. Skip the next code cell if you have already downloaded the data.
!python generate_cifar10_tfrecords.py --data-dir ./data
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Next, we upload the data to Amazon S3:
from sagemaker.s3 import S3Uploader

bucket = 'sagemaker-tutorial-hashimoto'
dataset_uri = S3Uploader.upload('data', 's3://{}/tf-cifar10-example/data'.format(bucket))
display(dataset_uri)
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Train the model

Specifying the Instance Type
instance_type = 'ml.p2.xlarge'
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Setting for hyperparameters
hyperparameters = {'epochs': 10, 'batch-size': 256}
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Metrics
metric_definitions = [
    {'Name': 'train:loss', 'Regex': '.*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*'},
    {'Name': 'train:accuracy', 'Regex': '.*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*'},
    {'Name': 'validation:accuracy',
     'Regex': '.*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*'},
]
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Tags
tags = [{'Key': 'Project', 'Value': 'cifar10'}, {'Key': 'TensorBoard', 'Value': 'file'}]
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Setting for estimator
import subprocess
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point='cifar10_keras_main.py',
                       source_dir='source_dir',
                       metric_definitions=metric_definitions,
                       hyperparameters=hyperparameters,
                       role=role,
                       ...
Help on class TensorFlow in module sagemaker.tensorflow.estimator:

class TensorFlow(sagemaker.estimator.Framework)
 |  TensorFlow(py_version=None, framework_version=None, model_dir=None, image_uri=None, distribution=None, **kwargs)
 |
 |  Handle end-to-end training and deployment of user-provided TensorFlow code.
 |...
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Specify data input and output
inputs = {
    'train': '{}/train'.format(dataset_uri),
    'validation': '{}/validation'.format(dataset_uri),
    'eval': '{}/eval'.format(dataset_uri),
}
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Execute Training
estimator.fit(inputs)
2021-02-08 06:06:01 Starting - Starting the training job...
2021-02-08 06:06:25 Starting - Launching requested ML instances
ProfilerReport-1612764359: InProgress
......
2021-02-08 06:07:32 Starting - Preparing the instances for training.........
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Checking the accuracy of a model with TensorBoard

Using the visualization tool [TensorBoard](https://www.tensorflow.org/tensorboard), we can compare our training jobs. In a local setting, install TensorBoard with `pip install tensorboard`. Then run the command generated by the following code:
!python generate_tensorboard_command.py
! AWS_REGION=us-west-2 tensorboard --logdir file:"s3://sagemaker-us-west-2-005242542034/cifar10-tf-2021-02-08-04-01-54-836/model"
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
After running that command, we can access TensorBoard locally at http://localhost:6006. Based on the TensorBoard metrics, we can see that:

1. All jobs run for 10 epochs (0 - 9).
2. Both File Mode and Pipe Mode run for ~1 minute: Pipe Mode doesn't affect training performance.
3. Distributed training runs for only 45 second...
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Invoke the endpoint

I'll generate a random matrix and see if the predictor is working.
import numpy as np

data = np.random.randn(1, 32, 32, 3)
print('Predicted class: {}'.format(np.argmax(predictor.predict(data)['predictions'])))
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Download the dataset for prediction
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Prediction
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def predict(data):
    predictions = predictor.predict(data)['predictions']
    return predictions

predicted = []
actual = []
batches = 0
batch_size = 128
datagen = ImageDataGenerator()
for data in datagen.flow(x_test, y_test, batch_size=batch_size...
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Accuracy
from sklearn.metrics import accuracy_score

accuracy = accuracy_score(y_pred=predicted, y_true=actual)
display('Average accuracy: {}%'.format(round(accuracy * 100, 2)))
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Confusion Matrix
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sn
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_pred=predicted, y_true=actual)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sn.set(rc={'figure.figsize': (11.7, 8.27)})
sn.set(font_scale=1.4)  # fo...
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
Cleanup

To avoid incurring extra charges to your AWS account, let's delete the endpoint we created:
predictor.delete_endpoint()
_____no_output_____
MIT
2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb
RyutaroHashimoto/aws_sagemaker
We will use Naive Bayes to model the "Pima Indians Diabetes" data set. This model will predict which people are likely to develop diabetes. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a pati...
import pandas as pd               # data processing, CSV file I/O
import matplotlib.pyplot as plt   # matplotlib.pyplot plots data
_____no_output_____
MIT
Naive_Bayes_Diabetes/Naive_Bayes.ipynb
abhisngh/Data-Science
Load and review data
# assuming the data was loaded into a DataFrame named `df`
df.shape                  # check number of columns and rows in data frame
df.head()                 # check first 5 rows of data set
df.isnull().values.any()  # check if there are any null values in data set
df.iloc[:, :8].hist(figsize=(10, 8))  # histogram of first 8 columns, excluding Outcome column
_____no_output_____
MIT
Naive_Bayes_Diabetes/Naive_Bayes.ipynb
abhisngh/Data-Science
Identify Correlation in data
# show correlation matrix (assuming the DataFrame `df`)
df.corr()
# however, we want to see the correlation in a graphical representation
_____no_output_____
MIT
Naive_Bayes_Diabetes/Naive_Bayes.ipynb
abhisngh/Data-Science
Calculate diabetes ratio of True/False from outcome variable

Splitting the data

Let's check the split of data

Now let's check the diabetes True/False ratio in the split data

Data Preparation

Check hidden missing values

We checked for missing values earlier and found none. But there can be lots of entries with 0 values. We m...
# print classification report (assuming `y_test` and the fitted model's `predictions`)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
_____no_output_____
MIT
Naive_Bayes_Diabetes/Naive_Bayes.ipynb
abhisngh/Data-Science
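The workflow the scaffolding above describes (split, impute the hidden zero values, fit, report) can be sketched end to end. A minimal version, assuming the data sits in a DataFrame `df` with the Pima feature columns and an `Outcome` target; the file name `diabetes.csv` is an assumption:

import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

df = pd.read_csv('diabetes.csv')  # hypothetical file name for the Pima data
X = df.drop(columns='Outcome')
y = df['Outcome']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0, stratify=y)

# replace hidden missing values (zeros) with the column mean, as discussed above;
# note: zero is a valid value for Pregnancies, this sketch imputes all zeros for brevity
imputer = SimpleImputer(missing_values=0, strategy='mean')
X_train = imputer.fit_transform(X_train)
X_test = imputer.transform(X_test)

model = GaussianNB()
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))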
Exercise Solutions - List I

1. Create three variables and assign the following values: a=1, b=5.9 and c='teste'. Then return the type of each variable.
# Create the variables
a = 1
b = 5.9
c = 'teste'

# Return the type of each variable
print("Variable types:\n>> Variable 'a' is of type {typea}."
      "\n>> Variable 'b' is of type {typeb}."
      "\n>> Variable 'c' is of type {typec}".format(typea=type(a),
                                                     typeb=type(b),
                                                     typec=type(c)))
Variable types:
>> Variable 'a' is of type <class 'int'>.
>> Variable 'b' is of type <class 'float'>.
>> Variable 'c' is of type <class 'str'>
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
2. Change the value of variable a to '1' and check whether the variable's type changed.
# Changing the variable
a = '1'

# Returning the new type of the variable
print("The type of variable 'a' changed to ", type(a))
The type of variable 'a' changed to  <class 'str'>
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
3. Add variable b to variable c. Interpret the output, both for a successful execution and for an execution with an error.
print(b + c)

# We cannot perform arithmetic operations between variables of different types.
# Both variables must be of the same type; otherwise an error is raised.
_____no_output_____
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
4. Create a list with the numbers 0 to 9 (in any order) and:
* a) Append the number 6
* b) Insert the number 7 at the 3rd position of the list
* c) Remove the element 3 from the list
* d) Append the number 4
* e) Check the number of occurrences of the number 4 in the list
# Create the list
l1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
l1

# a) Append the number 6
l1.append(6)
l1

# b) Insert the number 7 at the 3rd position of the list
l1.insert(2, 7)
l1

# c) Remove the element 3 from the list
l1.remove(3)
l1

# d) Append the number 4
l1.append(4)
l1

# e) Check the number of occurrences of the number 4 in the list
print(l1.count(4))
2
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
5. Still using the list created in the previous question:
* a) Return the first 3 elements of the list
* b) Return the elements from the 3rd to the 7th position of the list
* c) Return every 3rd element of the list
* d) Return the last 3 elements of the list
* e) Return all elements except the last 4...
# a) Return the first 3 elements of the list
print('List:', l1)
print('\nFirst 3 elements of the list:', l1[:3])

# b) Return the elements from the 3rd to the 7th position of the list
print('List:', l1)
print('\nElements from the 3rd to the 7th position of the list:', l1[2:7])

# c) Return every 3rd element of the list...
List: [0, 1, 7, 2, 4, 5, 6, 7, 8, 9, 6, 4]

All elements except the last 4: [0, 1, 7, 2, 4, 5, 6, 7]
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
6. Using the list from the previous questions, return the 6th element of the list.
print('List:', l1)
print('\n6th position of the list:', l1[6])
List: [0, 1, 2, 4, 4, 5, 6, 7, 9, 12]

6th position of the list: 7
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
7. Change the value of the 7th element of the list to 12.
print('List:', l1)
l1[6] = 12
print('\nList after the change:', l1)
List after the change: [0, 1, 7, 2, 4, 5, 12, 9, 6, 4]
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
8. Reverse the order of the elements in the list.
print('List:', l1)
l1.reverse()
print('\nReversed list:', l1)
Reversed list: [12, 9, 7, 6, 5, 4, 4, 2, 1, 0]
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
9. Sort the list.
print('List:', l1)
l1.sort()
print('\nSorted list:', l1)
Sorted list: [0, 1, 2, 4, 4, 5, 6, 7, 9, 12]
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
10. Create a tuple with the numbers 0 to 9 (in any order) and try to:
* a) Change the value of the 3rd element of the tuple to 10
* b) Check the index (position) of the value 5 in the tuple
# Create the tuple
t1 = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
t1

# a) Try to change the value of the 3rd element of the tuple to 10
# t1[3] = 10  -> raises a TypeError
# Tuples are immutable; only lists can be changed.

# b) Check the index (position) of the value 5 in the tuple
print('Tuple: ', t1)
print('\nIndex of the number 5 is:', t1.index(5))
Tuple:  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

Index of the number 5 is: 5
MIT
Aula1/ResolucaoExercicios_Aula01.ipynb
anablima/CursoUSP_PythonNLP
Boltzmann Machines

A Boltzmann machine is a type of stochastic recurrent neural network. It is a Markov random field: an undirected graphical model over a set of random variables that has the *Markov property* (the conditional probability distribution of future states of the process, conditional on both past and present s...
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.array([[0.5, 0, 0], [0, 0.7, 1], [1, 0, 1], [1, 0.2, 1]])
rbm = BernoulliRBM(n_components=2)
rbm.fit(X)
print('Shape of X: {}'.format(X.shape))

X_r = rbm.transform(X)
print('Dimensionality reduced X : \n{}'.format(X_r))

from scipy.ndimage import convolve
from skle...
[BernoulliRBM] Iteration 1, pseudo-likelihood = -25.39, time = 0.13s
[BernoulliRBM] Iteration 2, pseudo-likelihood = -23.77, time = 0.17s
[BernoulliRBM] Iteration 3, pseudo-likelihood = -22.94, time = 0.18s
[BernoulliRBM] Iteration 4, pseudo-likelihood = -21.91, time = 0.17s
[BernoulliRBM] Iteration 5, pseudo-likelihoo...
MIT
section_4/4-7.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Scikit-Learn-and-TensorFlow-2.0
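The truncated cell above appears to continue into the classic scikit-learn pattern of chaining an RBM feature extractor with a downstream classifier. A minimal sketch of that pattern on the digits dataset; the dataset choice and hyperparameters here are assumptions, not the book's exact code:

from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

digits = datasets.load_digits()
X = digits.data / 16.0   # scale pixel values to [0, 1] for the Bernoulli units
y = digits.target

rbm = BernoulliRBM(n_components=64, learning_rate=0.06, n_iter=10, random_state=0)
clf = Pipeline([('rbm', rbm), ('logistic', LogisticRegression(max_iter=1000))])
clf.fit(X, y)
print('Training accuracy: {:.3f}'.format(clf.score(X, y)))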
To run the code, you need to enable CUDA in the settings. You can enable it in the menu: `Runtime > Change runtime type` and choose GPU as the hardware accelerator.
# install shapefromprojections package
%cd /content
!git clone https://github.com/jakeoung/ShapeFromProjections
%cd ShapeFromProjections
!pip install -e .

import sys
import os
sys.path.append(os.getcwd())

# install CUDA kernels
%cd ctdr/cuda
!python build.py build_ext --inplace
%cd ../../run

import numpy as np
import m...
_____no_output_____
MIT
ctdr_toy_example.ipynb
Aarya-Create/PBL-Mesh
Find the comparables: extra_features.txt

The file `extra_features.txt` contains important property information like the number and quality of pools, detached garages, outbuildings, canopies, and more. Let's load this file and grab a subset with the important columns to continue our study.
%load_ext autoreload
%autoreload 2

from pathlib import Path
import pickle

import pandas as pd

from src.definitions import ROOT_DIR
from src.data.utils import Table, save_pickle

extra_features_fn = ROOT_DIR / 'data/external/2016/Real_building_land/extra_features.txt'
assert extra_features_fn.exists()

extra_features = ...
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Load accounts of interest

Let's remove the account numbers that don't meet the free-standing single-family home criteria that we found while processing the `building_res.txt` file.
skiprows = extra_features.get_skiprows()
extra_features_df = extra_features.get_df(skiprows=skiprows)
extra_features_df.head()
extra_features_df.l_dscr.value_counts().head(25)
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Grab a slice of the extra features of interest

From the value counts on the extra-feature description performed above, we can see that the majority of the features land in the top 15 categories. Let's filter out the rest of the rows.
cols = extra_features_df.l_dscr.value_counts().head(15).index
cond0 = extra_features_df['l_dscr'].isin(cols)
extra_features_df = extra_features_df.loc[cond0, :]
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Build pivot tables for count and grade

There appear to be two important values related to each extra feature: uts (unit area in square feet) and grade. Since a property can have multiple features of the same class, e.g. frame utility shed, let's aggregate them by adding the uts values, and also by taking the mean of t...
extra_features_pivot_uts = extra_features_df.pivot_table(index='acct',
                                                          columns='l_dscr',
                                                          values='uts',
                                                          aggfunc='sum',
                                                          ...
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
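On a toy frame, the pivot described above looks like this; the account numbers and feature labels below are made up:

import pandas as pd

toy = pd.DataFrame({
    'acct':   [1, 1, 2],
    'l_dscr': ['FRAME UTILITY SHED', 'FRAME UTILITY SHED', 'CANOPY'],
    'uts':    [100.0, 50.0, 200.0],
})
# each account becomes one row; uts values of repeated features are summed
print(toy.pivot_table(index='acct', columns='l_dscr', values='uts', aggfunc='sum'))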
Add the `acct` column to make the merging process ahead easier.
extra_features_uts_grade.reset_index(inplace=True)
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Fix column names

We would like the column names to be all lower case, with no spaces or non-alphanumeric characters.
from src.data.utils import fix_column_names

extra_features_uts_grade.columns
extra_features_uts_grade = fix_column_names(extra_features_uts_grade)
extra_features_uts_grade.columns
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
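`fix_column_names` lives in this project's `src.data.utils`; its real implementation isn't shown here. A minimal sketch of what such a helper could do, given the stated goal (lower case, no spaces or non-alphanumeric characters):

import re

def fix_column_names_sketch(df):
    """Lower-case column names and replace runs of non-alphanumerics with '_'.

    A hypothetical stand-in, not the project's actual function.
    """
    df = df.copy()
    df.columns = [re.sub(r'[^0-9a-z]+', '_', str(c).lower()).strip('_')
                  for c in df.columns]
    return df

# e.g. 'FRAME UTILITY SHED' -> 'frame_utility_shed'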
Find duplicated rows
cond0 = extra_features_uts_grade.duplicated()
extra_features_uts_grade.loc[cond0, :]
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Describe
extra_features_uts_grade.info()
extra_features_uts_grade.describe()
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Export extra_features_uts_grade
save_fn = ROOT_DIR / 'data/raw/2016/extra_features_uts_grade_comps.pickle'
save_pickle(extra_features_uts_grade, save_fn)
_____no_output_____
BSD-3-Clause
notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb
RafaelPinto/hcad_pred
Data Analysis Project In our data project, we use data directly imported from the World Data Bank. We have chosen to focus on nine different countries: Brazil, China, Denmark, India, Japan, Nigeria, Spain, Turkmenistan and the US. These countries are chosen because they are relatively different, which makes the analys...
import pandas as pd
import numpy as np
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
**We import the packages** we need. If we do not have the packages, we have to install them first:

>`pip install pandas-datareader`

>`pip install wbdata`
import pandas_datareader
import datetime
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We import the tools to download data directly from the World Data Bank:
from pandas_datareader import wb
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Download Data directly from the World Data Bank

We define the countries for the download: China, Japan, Brazil, U.S., Denmark, Spain, Turkmenistan, India, Nigeria.
countries = ['CN','JP','BR','US','DK','ES','TM','IN','NG']
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We define the indicators for the download: GDP per capita, GDP (current US $), Population total, Urban Population in %, Fertility Rate, Literacy rate.
indicators = {"NY.GDP.PCAP.KD": "GDP per capita",
              "NY.GDP.MKTP.CD": "GDP(current US $)",
              "SP.POP.TOTL": "Population total",
              "SP.URB.TOTL.IN.ZS": "Urban Population in %",
              "SP.DYN.TFRT.IN": "Fertility Rate",
              "SE.ADT.LITR.ZS": "Literacy rate, adult total in %"}
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We download the data and have a look at the table.
data_wb = wb.download(indicator=indicators, country=countries, start=1990, end=2017)
data_wb = data_wb.rename(columns={"NY.GDP.PCAP.KD": "gdp_pC",
                                  "NY.GDP.MKTP.CD": "gdp",
                                  "SP.POP.TOTL": "pop",
                                  "SP.URB.TOTL.IN.ZS": "urban_pop%",
                                  "SP.DYN.TFRT.IN": "frt",
                                  "SE.ADT.LITR.ZS": "litr"})
data_...
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We save the data as an Excel sheet in the folder containing the current file.
writer = pd.ExcelWriter('pandas_simple.xlsx', engine='xlsxwriter')  # note: this writer is never used
data_wb.to_excel(r"./data_wb1.xlsx")
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Overview of the Data and Adaptation
# Tonje
data_wb.dtypes
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
To ease reading of the tables, we add a thousands separator to all floats for the rest of the file. Afterwards, we round the numbers to two decimals.
pd.options.display.float_format = '{:,}'.format
round(data_wb.head(), 2)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
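For a single number, the two steps above do the following:

print('{:,}'.format(1234567.8912))  # thousands separator -> 1,234,567.8912
print(round(1234567.8912, 2))        # rounding -> 1234567.89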
Since the gdp variable is inconvenient to work with, we create a new variable gdp_in_bil showing the GDP in billions of US $ and add it to the dataset. We have a look at the table to check whether it worked out.
data_wb['gdp_in_bil'] = data_wb['gdp'] / 1000000000
round(data_wb.head(), 2)  # just to check
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We delete the variable gdp since we will continue working exclusively with the variable gdp_in_bil.
del data_wb['gdp']
round(data_wb.head(), 2)  # just to check
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We have a look at the shape of the dataset in order to get an overview of the observations and variables.
data_wb.shape
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We perform a summary statistics to get an overview of our dataset.
round(data_wb.describe(),2)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Detection of Missing Data

We count the missing data:
data_wb.isnull().sum().sum()
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We have a look at how many observations each variable has:
data_wb.count()
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We search for the number of missing values of each variable. (Same step as before, only the other way around.)
data_wb.isnull().sum()
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We drop the literacy rate, because this variable has nearly no data.
data_wb.drop(['litr'], axis = 1, inplace = True)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We search for the nine missing values of fertility rate. It seems like there is no data for the fertility rate for the year 2017.
round(data_wb.groupby('year').mean(),2)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We check whether every country is missing the fertility rate data for the year 2017.
round(data_wb.loc[data_wb['year'] == '2017', :].head(-1),2)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We drop the year 2017.
I = data_wb['year'] == "2017"
data_wb.drop(data_wb[I].index, inplace=True)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Cleaned data set

We perform summary statistics on our cleaned dataset.
round(data_wb.describe(),2)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
And we check the number of observations and variables.
data_wb.shape
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We check whether the dataset is balanced.
data_wb.count()
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
The data set is balanced.

Data Analysis and Visualisations

We use the average level of every variable for each country. The overview shows that countries with a high gdp per capita have a low fertility rate. Countries with a high gdp per capita have a large share of urban population. We can start to think about ...
round(data_wb.groupby('country').mean(),2)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Interactive plot

Now, we want to make an interactive plot which displays the development of GDP per capita over time for the different countries. First, we import the necessary packages and tools. **Import the packages** we need. If we do not have the packages, we have to install them first:

>`pip install...
import matplotlib.pyplot as plt
%matplotlib inline

from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Then, we define the relevant variables in a way which simplifies the coding:
country = data_wb["country"]
year = data_wb["year"]
gdp_pC = data_wb["gdp_pC"]
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We create a function constructing a figure:
def interactive_figure(country, data_wb):
    """define an interactive figure that uses countries and the dataframe as inputs
    """
    data_country = data_wb[data_wb.country == country]
    year = data_country.year
    gdp_pC = data_country.gdp_pC
    fig = plt.figure(dpi=100)
    ax = fig.add_subplot(1, 1, 1)
    ax...
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We make it interactive with a drop down menu:
widgets.interact(interactive_figure,
                 year=widgets.fixed(year),
                 data_wb=widgets.fixed(data_wb),
                 country=widgets.Dropdown(description="Country", options=data_wb.country.unique()),
                 gdp_pC=widgets.fixed(gdp_pC));
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We can see that the overall trend for the selected countries is increasing GDP per capita.However, for the Western countries and Japan we can see the trace of the 2008 financial crisis. For Spain, one of the countries that suffered most from this crisis, the dip is particularly visible. It is also worth noticing that ...
import folium
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Our goal is to visualize the data on a world map using markers. In order to define the location of the markers, we add the coordinates of the countries. Therefore, we add the variable 'Lat' for latitude and 'Lon' for longitude of the respective country to each observation in our data set.
row_indexes = data_wb[data_wb['country'] == 'Brazil'].index
data_wb.loc[row_indexes, 'Lat'] = -14.2350
data_wb.loc[row_indexes, 'Lon'] = -51.9253

row_indexes = data_wb[data_wb['country'] == 'China'].index
data_wb.loc[row_indexes, 'Lat'] = 33.5449
data_wb.loc[row_indexes, 'Lon'] = 103.149

row_indexes = data_wb[data_wb['country'] == 'D...
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Now, we want to create the map.

1. We define the variables year (selectedyear) and variable (selectedvariable) we want to display.
2. We have to create an empty map. Since our countries are located all over the world, we have to display the whole world.
3. In order to run the loop later on, we create an overview ...
# Definition of variables of interest
selectedyear = 2010          # select the year you are interested in
selectedvariable = 'gdp_pC'  # select the variable you are interested in

# Creation of an empty map
map = folium.Map(location=[0, 0], tiles="Mapbox Bright", zoom_start=2)

# Creation of an overview data set displaying ...
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
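A stripped-down version of the marker pattern this cell builds up to; the coordinates repeat the Brazil values set earlier, and the popup text is an example:

import folium

m = folium.Map(location=[0, 0], zoom_start=2)
folium.Marker(location=[-14.2350, -51.9253], popup='Brazil').add_to(m)
m.save('example_map.html')  # hypothetical output file name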
Looking at the gdp per capita in the year 2010, we can see at one glance that developed countries have a substantially higher gdp per capita than emerging and developing countries. Mapping has the advantage of providing an overview and revealing possible correlations with location at a glance. We save the map in the same folder as...
map.save('./map.html')
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We drop the variables for the coordinates since they are no longer needed.
data_wb.drop(['Lat','Lon'], axis = 1, inplace = True)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Fertility Rate per Country

The average annual fertility rate gives an overview of the fertility rate for the countries and shows that Japan and Spain have the lowest fertility rates, while Nigeria has the highest.
ax = data_wb.groupby('country').frt.mean().plot(kind='bar')
ax.set_ylabel('Avg. annual fertility rate')
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
The following graph presents the annual growth rate of the fertility rate for each country. We observe that Denmark is the only country with a negative growth rate. The leading country is India with a growth rate of 0.020 over the years. Surprisingly, Nigeria and the US have almost the same growth rate.
def annual_growth(x):
    x_last = x.values[-1]
    x_first = x.values[0]
    num_years = len(x)
    growth_annualized = (x_last/x_first)**(1/num_years) - 1.0
    return growth_annualized

ax = data_wb.groupby('country')['frt'].agg(annual_growth).plot(kind='bar')
ax.set_ylabel('Annual growth (fertility rate)...
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
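A quick sanity check of `annual_growth` on a toy series: three observations growing 10% per step give (121/100)^(1/3) - 1, roughly 0.066, under the function's definition (note it divides by the number of observations, not the number of intervals):

import pandas as pd

s = pd.Series([100.0, 110.0, 121.0])
print(round(annual_growth(s), 4))  # -> 0.0656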
We look at what kind of variables we have. Year should be a numeric variable for the next graph, but it is an object (string).
data_wb.dtypes
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We convert year into a float variable.
data_wb['year'] = data_wb.year.astype(float)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We verify what we have done.
data_wb.dtypes
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Fertility Rate per Country from 1990 until 2016
data_wb = data_wb.set_index(["year", "country"])

# plot fertility rate over the years
data_wb.unstack('country')['frt'].plot()
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
The fertility rate declines continuously in most countries. An exception is Turkmenistan, where the fertility rate seems to oscillate. The US had a little peak in 2007, but since then the fertility rate has been declining.

Correlation Table

Before we proceed with a regression, we want to have a look at the correl...
import seaborn as sns

fig = plt.subplots(figsize=(10, 10))
sns.set(font_scale=1.5)
sns.heatmap(data_wb.corr(), square=True, cbar=True, annot=True, annot_kws={'size': 10})
plt.show()
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
This gives a good indication of what to expect from the regression. In the following regression we are interested in the fertility rate, and we can see from this table that the fertility rate is negatively correlated with GDP, urban population and population in general (although the effect is small).

Panel Regression

We want to per...
from linearmodels.panel import PooledOLS
from linearmodels.panel import RandomEffects
from linearmodels import PanelOLS
import statsmodels.api as sm
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We check whether year and country are set as the index.
print(data_wb.head())
                             gdp_pC        pop  urban_pop%    frt  \
year    country
2,016.0 Brazil   10,868.6534435352  207652865      86.042  1.726
2,015.0 Brazil   11,351.5657481703  205962108       85.77   1.74
2,014.0 Brazil   11,870.1484076345  204213133      85.492...
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
We can see that they are set as indexes. For the following regressions, we need "year" to be the second index for the regression to work. Therefore, we temporarily reset the index:
data_wb.reset_index(inplace=True)
print(data_wb.head())
data_wb = data_wb.set_index(["country", "year"], append=False)
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Pooled OLS Regression

For the first regression, we run a pooled OLS. We have nine entities (countries) and 27 years.
exog_vars = ['gdp_pC', 'pop', 'urban_pop%']
exog = sm.add_constant(data_wb[exog_vars])
mod = PooledOLS(data_wb.frt, exog)
pooled_res = mod.fit()
print(pooled_res)
                          PooledOLS Estimation Summary
================================================================================
Dep. Variable:                    frt   R-squared:                        0.6796
Estimator:                  PooledOLS   R-squared (Between):              0.7...
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
The results are questionable. For example, gdp per capita seems to have no effect on the fertility rate. Moreover, the estimated effects of gdp per capita and population are implausibly small. Therefore, we have a look at our dependent variable. It seems that Python reads the variable correctly and the indexes are also correct. Therefore,...
data_wb.frt
_____no_output_____
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Now, we run a Panel OLS regression, where we control for entity effects and time effects.
exog_vars = ['gdp_pC', 'pop', 'urban_pop%']
exog = sm.add_constant(data_wb[exog_vars])
mod = PanelOLS(data_wb.frt, exog, entity_effects=True, time_effects=True)
pooled_res = mod.fit()
print(pooled_res)
                          PanelOLS Estimation Summary
================================================================================
Dep. Variable:                    frt   R-squared:                        0.6726
Estimator:                   PanelOLS   R-squared (Between):              -5.3...
MIT
dataproject/dataProject.ipynb
NumEconCopenhagen/projects-2019-tba
Auto Encoder

This notebook was created by Camille-Amaury JUGE, in order to better understand autoencoder principles and how they work. (It follows the exercises proposed by Hadelin de Ponteves on Udemy: https://www.udemy.com/course/le-deep-learning-de-a-a-z/)

Imports
import numpy as np
import pandas as pd

# pytorch
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable

import sys
import csv
_____no_output_____
CNRI-Python
Exercises/Auto Encoder/Auto Encoder.ipynb
camilleAmaury/DeepLearningExercise
Data preprocessing

Same process as in the Boltzmann machine notebook (see there for more details).
df_movies = pd.read_csv("ml-1m\\movies.dat", sep="::", header=None, engine="python",
                        encoding="latin-1")
users = pd.read_csv("ml-1m\\users.dat", sep="::", header=None, engine="python",
                    encoding="latin-1")
ratings = pd.read_csv("ml-1m\\ratings.dat", sep="::", header=None, engine="python",
                      ...
_____no_output_____
CNRI-Python
Exercises/Auto Encoder/Auto Encoder.ipynb
camilleAmaury/DeepLearningExercise
Model
class SparseAutoEncoder(nn.Module):
    def __init__(self, input_dim):
        super(SparseAutoEncoder, self).__init__()
        # creating input layer
        self.fully_connected_hidden_layer_1 = nn.Linear(input_dim, 20)
        self.fully_connected_hidden_layer_2 = nn.Linear(20, 10)
        self.fully_connected_hidd...
Test Set => Loss : 1.0229144248873956
CNRI-Python
Exercises/Auto Encoder/Auto Encoder.ipynb
camilleAmaury/DeepLearningExercise
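The class definition above is truncated. A self-contained sketch of the stacked autoencoder shape it starts to build; the 20- and 10-unit layers come from the visible code, while the decoder and activation choices are assumptions:

import torch
import torch.nn as nn

class AutoEncoderSketch(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        # encoder compresses input_dim -> 20 -> 10 (sizes from the visible code)
        self.encoder = nn.Sequential(nn.Linear(input_dim, 20), nn.Sigmoid(),
                                     nn.Linear(20, 10), nn.Sigmoid())
        # decoder mirrors the encoder back to input_dim (assumed symmetric)
        self.decoder = nn.Sequential(nn.Linear(10, 20), nn.Sigmoid(),
                                     nn.Linear(20, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoderSketch(input_dim=100)
x = torch.rand(4, 100)
print(model(x).shape)  # torch.Size([4, 100])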
Sentiment Analysis with an RNN

In this notebook, you'll implement a recurrent neural network that performs sentiment analysis.

>Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words.

Here we'll use a dataset of movie reviews, accompanied ...
import numpy as np

# read data from text files
with open('data/reviews.txt', 'r') as f:
    reviews = f.read()
with open('data/labels.txt', 'r') as f:
    labels = f.read()

print(reviews[:2000])
print()
print(labels[:20])
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
Data pre-processing

The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. Here are ...
from string import punctuation

print(punctuation)

# get rid of punctuation
reviews = reviews.lower()  # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])

# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)

# create a list of wor...
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
Encoding the words

The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.

> **Exercise:** Now you're going t...
# feel free to use this import
from collections import Counter

## Build a dictionary that maps words to integers
# one possible solution: rank words by frequency, starting the integers at 1
# (0 is reserved for padding); assumes `words` from the previous cell
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
    reviews_ints.append([vocab_to_int[word] for word in review.split()])
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
**Test your code**

As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.
# stats about vocabulary
print('Unique words: ', len((vocab_to_int)))  # should be ~74000+
print()

# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
Encoding the labels

Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.

> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`.
# 1=positive, 0=negative label conversion
# one possible solution, assuming `labels` is the newline-separated string read earlier
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
Removing Outliers

As an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:

1. Getting rid ...
# outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
Okay, a couple of issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.

> **Exercise:** First, remov...
print('Number of reviews before removing outliers: ', len(reviews_ints))

## remove any reviews/labels with zero length from the reviews_ints list.
# one possible solution: keep only the indices of non-empty reviews
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
encoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])

print('Number of reviews after removing outliers: ', len(reviews_ints))
_____no_output_____
MIT
sentiment-rnn/Sentiment_RNN_Exercise.ipynb
MiniMarvin/pytorch-v2
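The exercise text above is cut off, but the step it points toward is padding or truncating every review to a fixed length before batching. A minimal sketch of such a helper; the name `pad_features` and the left-padding convention are assumptions:

import numpy as np

def pad_features(reviews_ints, seq_length):
    """Left-pad with zeros, or truncate, each review to exactly seq_length tokens.

    Assumes zero-length reviews were already removed above.
    """
    features = np.zeros((len(reviews_ints), seq_length), dtype=int)
    for i, row in enumerate(reviews_ints):
        # a row longer than seq_length is truncated; a shorter one is left-padded
        features[i, -len(row):] = np.array(row)[:seq_length]
    return features

print(pad_features([[1, 2, 3]], seq_length=5))  # -> [[0 0 1 2 3]]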