Our Chainer script writes various artifacts, such as plots, to a directory `output_data_dir`, the contents of which SageMaker uploads to S3. Now we download and extract these artifacts.
from s3_util import retrieve_output_from_s3

chainer_training_job = chainer_estimator.latest_training_job.name

desc = sagemaker_session.sagemaker_client.describe_training_job(
    TrainingJobName=chainer_training_job
)
output_data = desc["ModelArtifacts"]["S3ModelArtifacts"].replace("model.tar.gz", "output.tar.gz")

retrieve_output_from_s3(output_data, "output/single_machine_cifar")
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
These plots show the accuracy and loss over epochs:
from IPython.display import Image
from IPython.display import display

accuracy_graph = Image(filename="output/single_machine_cifar/accuracy.png", width=800, height=800)
loss_graph = Image(filename="output/single_machine_cifar/loss.png", width=800, height=800)

display(accuracy_graph, loss_graph)
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
Deploying the Trained Model

After training, we use the Chainer estimator object to create and deploy a hosted prediction endpoint. We can use a CPU-based instance for inference (in this case an `ml.m4.xlarge`), even though we trained on GPU instances.

The predictor object returned by `deploy` lets us call the new endpoint and perform inference on our sample images.
predictor = chainer_estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
CIFAR10 sample images

We'll use these CIFAR10 sample images to test the service:

Predicting using SageMaker Endpoint

We batch the images together into a single NumPy array to obtain multiple inferences with a single prediction request.
from skimage import io
import numpy as np


def read_image(filename):
    img = io.imread(filename)
    img = np.array(img).transpose(2, 0, 1)  # HWC -> CHW
    img = img.astype(np.float32)
    img *= 1.0 / 255.0  # scale pixel values to [0, 1]
    return img.reshape(3, 32, 32)


def read_images(filenames):
    return np.array([read_image(f) for f in filenames])


filenames = [
    "images/airplane1.png",
    "images/automobile1.png",
    "images/bird1.png",
    "images/cat1.png",
    "images/deer1.png",
    "images/dog1.png",
    "images/frog1.png",
    "images/horse1.png",
    "images/ship1.png",
    "images/truck1.png",
]

image_data = read_images(filenames)
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
The predictor runs inference on our input data and returns a list of predictions whose argmax gives the predicted label of the input data.
response = predictor.predict(image_data)

for i, prediction in enumerate(response):
    print("image {}: prediction: {}".format(i, prediction.argmax(axis=0)))
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
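The argmax step can be illustrated on its own. A minimal, self-contained sketch (the score values below are made up for illustration, not real model output): the predicted label is simply the index of the largest class score in each row.

```python
import numpy as np

# Hypothetical class scores for 2 images over the 10 CIFAR-10 classes.
# A real predictor would return an array of shape (n_images, 10).
scores = np.array([
    [0.1, 0.05, 0.6, 0.05, 0.05, 0.05, 0.02, 0.03, 0.03, 0.02],  # largest at index 2
    [0.02, 0.9, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],  # largest at index 1
])

# argmax over each row's single axis picks the highest-scoring class
predicted_labels = [int(row.argmax(axis=0)) for row in scores]
print(predicted_labels)
```

With real CIFAR10 inputs, each index would map to one of the ten class names (airplane, automobile, bird, and so on).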
Cleanup

After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
# Delete the hosted endpoint created by deploy()
predictor.delete_endpoint()
_____no_output_____
Apache-2.0
sagemaker-python-sdk/chainer_cifar10/chainer_single_machine_cifar10.ipynb
can-sun/amazon-sagemaker-examples
USM Numérica

Notebook topic

Objectives
1. Understand how the sklearn Machine Learning library works.
2. Apply the sklearn library to solve Machine Learning problems.

About the author: Sebastián Flores, ICM UTFSM, sebastian.flores@usm.cl

About the presentation: content created in IPython Notebook (Jupyter); slides version thanks to RISE by Damián Avila.

Software:
* python 2.7 or python 3.1
* pandas 0.16.1
* sklearn 0.16.1

Optional:
* numpy 1.9.2
* matplotlib 1.3.1
from sklearn import __version__ as vsn

print(vsn)
0.24.1
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
0.1 Instructions

Installation and usage instructions for an IPython notebook can be found at the following [link](link). After downloading and opening this notebook, remember to:
* Work through the problems sequentially.
* Save frequently with *`Ctrl-S`* to avoid surprises.
* Replace *`FIX_ME`* in the code cells with the corresponding code.
* Run each code cell with *`Ctrl-Enter`*.

0.2 Licensing and Setup

Run the following cell with *`Ctrl-Enter`*.
""" IPython Notebook v4.0 para python 3.0 Librerías adicionales: numpy, scipy, matplotlib. (EDITAR EN FUNCION DEL NOTEBOOK!!!) Contenido bajo licencia CC-BY 4.0. Código bajo licencia MIT. (c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout. """ # Configuración para recargar módulos y librerías dinámicamente %reload_ext autoreload %autoreload 2 # Configuración para graficos en línea %matplotlib inline
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
1.- About the sklearn library
History
- Born in 2007 as a Google Summer project by David Cournapeau.
- Picked up by Matthieu Brucher for his thesis project.
- Supported by INRIA since 2010.
- Currently 35+ contributors.

1.- About the sklearn library
Installation
In python, with a bit of luck:
```pip install -U scikit-learn```
Using Anaconda:
```conda install scikit-learn```

1.- About the sklearn library
Why sklearn?
sklearn comes from "scientific toolbox for Machine Learning"; scikit-learn to its friends. There are many scikits, which are "scientific toolboxes" built on top of SciPy: [https://scikits.appspot.com/scikits](https://scikits.appspot.com/scikits).

First of all... What is Machine Learning?

2.- Machine Learning 101
Example
Consider a dataset consisting of characteristics of various animals.
```
legs, width, length, height, weight, species
[number],[meters],[meters],[meters],[kilograms],[]
2, 0.6, 0.4, 1.7, 75, human
2, 0.6, 0.4, 1.8, 90, human
...
2, 0.5, 0.5, 1.7, 85, human
4, 0.2, 0.5, 0.3, 30, cat
...
4, 0.25, 0.55, 0.32, 32, cat
4, 0.5, 0.8, 0.3, 50, dog
...
4, 0.4, 0.4, 0.32, 40, dog
```

2.- Machine Learning 101
Clustering
Suppose nobody has told us the species of each animal. Could we recognize the distinct species? Could we tell that there are 3 distinct groups of animals?

2.- Machine Learning 101
Classification
Suppose we know each animal's measurements and also its species. If someone arrives with the measurements of an animal... can we say which species it is?

2.- Machine Learning 101
Regression
Suppose we know each animal's data and its species. If someone arrives with all of an animal's data except its weight... can we predict the animal's weight?
2.- Machine Learning 101
Definitions
* The data used to predict are the predictors (features), typically called `X`.
* The value to be predicted is called the label, which can be numeric or categorical, and is typically called `y`.

3- sklearn overview
Summary image

3- sklearn overview
General procedure
# General sklearn workflow (the imports below are real; fill in your own X and y)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier  # or any other estimator

# split data into train and test datasets
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# train model with train dataset
# model = KNeighborsClassifier().fit(X_train, y_train)

# compute error on test dataset
# error = 1 - model.score(X_test, y_test)

# Optional: train model with all available data
# model.fit(X, y)

# Use model for some prediction
# y_new = model.predict(X_new)
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Wine Dataset
The [Wine Dataset](https://archive.ics.uci.edu/ml/datasets/Wine) is a classic dataset for testing clustering algorithms. The data correspond to 3 different cultivars of wine from the same region of Italy, which have been identified with the labels 1, 2, and 3.

4- Clustering with sklearn
Wine Dataset
For each type of wine, 13 chemical analyses were performed:
1. Alcohol
2. Malic acid
3. Ash
4. Alcalinity of ash
5. Magnesium
6. Total phenols
7. Flavanoids
8. Nonflavanoid phenols
9. Proanthocyanins
10. Color intensity
11. Hue
12. OD280/OD315 of diluted wines
13. Proline

The dataset contains 178 distinct samples in total.
%%bash head data/wine_data.csv
class,alcohol,malic_acid,ash,alcalinity_of_ash,magnesium,total_phenols,flavanoids,nonflavanoid_phenols,proanthocyanins,color_intensity,hue,OD280-OD315_of_diluted_wines,proline 1,14.23,1.71,2.43,15.6,127,2.8,3.06,.28,2.29,5.64,1.04,3.92,1065 1,13.2,1.78,2.14,11.2,100,2.65,2.76,.26,1.28,4.38,1.05,3.4,1050 1,13.16,2.36,2.67,18.6,101,2.8,3.24,.3,2.81,5.68,1.03,3.17,1185 1,14.37,1.95,2.5,16.8,113,3.85,3.49,.24,2.18,7.8,.86,3.45,1480 1,13.24,2.59,2.87,21,118,2.8,2.69,.39,1.82,4.32,1.04,2.93,735 1,14.2,1.76,2.45,15.2,112,3.27,3.39,.34,1.97,6.75,1.05,2.85,1450 1,14.39,1.87,2.45,14.6,96,2.5,2.52,.3,1.98,5.25,1.02,3.58,1290 1,14.06,2.15,2.61,17.6,121,2.6,2.51,.31,1.25,5.05,1.06,3.58,1295 1,14.83,1.64,2.17,14,97,2.8,2.98,.29,1.98,5.2,1.08,2.85,1045
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Reading the data
import pandas as pd

data = pd.read_csv("data/wine_data.csv")
data
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Data exploration
data.columns
data["class"].value_counts()
data.describe(include="all")
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Graphical data exploration
from matplotlib import pyplot as plt

data.hist(figsize=(12, 20))
plt.show()

# pd.scatter_matrix(data, figsize=(12, 12), range_padding=0.2)
# plt.show()
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Splitting the data
We need to split the data into the predictors (features) and the labels.
X = data.drop("class", axis=1) true_labels = data["class"] -1 # labels deben ser 0, 1, 2, ..., n-1
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Data magnitudes
print(X.mean())
print(X.std())
alcohol 0.811827 malic_acid 1.117146 ash 0.274344 alcalinity_of_ash 3.339564 magnesium 14.282484 total_phenols 0.625851 flavanoids 0.998859 nonflavanoid_phenols 0.124453 proanthocyanins 0.572359 color_intensity 2.318286 hue 0.228572 OD280-OD315_of_diluted_wines 0.709990 proline 314.907474 dtype: float64
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Clustering algorithm
For clustering we will use the KMeans algorithm. Let's apply the clustering algorithm directly.
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

# Parameters
n_clusters = 3

# Running the algorithm
kmeans = KMeans(n_clusters)
kmeans.fit(X)
pred_labels = kmeans.labels_

cm = confusion_matrix(true_labels, pred_labels)
print(cm)
[[ 0 46 13] [50 1 20] [19 0 29]]
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Data normalization
It is convenient to scale the data so that the clustering algorithm works better.
from sklearn import preprocessing

X_scaled = preprocessing.scale(X)

print(X_scaled.mean())
print(X_scaled.std())
1.0
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
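`preprocessing.scale` standardizes the data in one shot, but it does not remember the mean and standard deviation it used. When new samples later need to be scaled consistently (for example, to assign a new wine to an existing cluster), `StandardScaler` stores those statistics. A minimal sketch with made-up numbers, not the wine data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: 4 samples, 2 features on very different scales
X_train = np.array([[1.0, 100.0],
                    [2.0, 200.0],
                    [3.0, 300.0],
                    [4.0, 400.0]])

# Fit on the training data only; the scaler stores the column means and stds
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)

# New samples are scaled with the *training* statistics
X_new = np.array([[2.5, 250.0]])
X_new_scaled = scaler.transform(X_new)
```

Since `[2.5, 250.0]` happens to equal the column means of the training data, it maps to `[0, 0]` after scaling.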
4- Clustering with sklearn
Clustering algorithm
Now we can apply a clustering algorithm.
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

# Parameters
n_clusters = 3

# Running the algorithm
kmeans = KMeans(n_clusters)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_

cm = confusion_matrix(true_labels, pred_labels)
print(cm)
[[ 0 59 0] [ 3 3 65] [48 0 0]]
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Elbow rule
In all cases we have assumed that the number of clusters is 3. If we did not know this value, we would plot the sum of the distances from each point to its cluster centroid as a function of the number of clusters.
from sklearn.cluster import KMeans

clusters = range(2, 20)
total_distance = []

for n_clusters in clusters:
    kmeans = KMeans(n_clusters)
    kmeans.fit(X_scaled)
    pred_labels = kmeans.labels_
    centroids = kmeans.cluster_centers_

    # Get the distances
    distance_for_n = 0
    for k in range(n_clusters):
        points = X_scaled[pred_labels == k]
        aux = (points - centroids[k, :]) ** 2
        distance_for_n += (aux.sum(axis=1) ** 0.5).sum()
    total_distance.append(distance_for_n)
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
4- Clustering with sklearn
Plotting the above, we obtain:
from matplotlib import pyplot as plt

fig = plt.figure(figsize=(16, 8))
plt.plot(clusters, total_distance, 'rs')
plt.xlim(min(clusters) - 1, max(clusters) + 1)
plt.ylim(0, max(total_distance) * 1.1)
plt.show()
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
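The loop above computes the total distance by hand; fitted KMeans objects also expose `inertia_`, the sum of *squared* distances to the nearest centroid, which is the quantity usually plotted for the elbow rule. A sketch on synthetic data standing in for `X_scaled`, with three well-separated groups so the elbow at 3 clusters is visible:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic 2-D data with 3 well-separated groups of 50 points each
X_demo = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 5.0, 10.0)])

inertias = []
for n in range(1, 8):
    km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(X_demo)
    inertias.append(km.inertia_)  # sum of squared distances to the nearest centroid

# The "elbow" is where adding one more cluster stops reducing inertia sharply:
# here the drops up to 3 clusters are large, and from 3 onward the curve flattens.
```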
4- Clustering with sklearn
How hard is it to use another clustering algorithm? Not hard at all. Available algorithms:
* K-Means
* Mini-batch K-means
* Affinity propagation
* Mean-shift
* Spectral clustering
* Ward hierarchical clustering
* Agglomerative clustering
* DBSCAN
* Gaussian mixtures
* Birch

Detailed list: [http://scikit-learn.org/stable/modules/clustering.html](http://scikit-learn.org/stable/modules/clustering.html)
from sklearn.cluster import KMeans, MiniBatchKMeans, AffinityPropagation
from sklearn.metrics import confusion_matrix
from sklearn import preprocessing

# Normalization of data
X_scaled = preprocessing.scale(X)

# Running the algorithm
kmeans = KMeans(n_clusters=3)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_

# Evaluating the output
cm = confusion_matrix(true_labels, pred_labels)
print(cm)


# Running the algorithm
mbkmeans = MiniBatchKMeans(n_clusters=3)
mbkmeans.fit(X_scaled)
pred_labels = mbkmeans.labels_

# Evaluating the output
cm = confusion_matrix(true_labels, pred_labels)
print(cm)


# Running the algorithm
affinity = AffinityPropagation(preference=-300)
affinity.fit(X_scaled)
pred_labels = affinity.labels_

# Evaluating the output
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
[[49 10 0] [ 3 58 10] [ 2 0 46]]
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Digit recognition
The data are in 2 files, `data/optdigits.train` and `data/optdigits.test`. As their names indicate, the set `data/optdigits.train` contains the examples to be used to train the model, while the set `data/optdigits.test` is used to obtain an estimate of the prediction error.

Both files share the same format: each line contains 65 values. The first 64 correspond to the grayscale representation of the image (low values are white, high values are black), and the 65th value is the digit in the image (0-9).

5- Classification
Loading the data
To load the data, we use np.loadtxt with the extra parameters delimiter (to indicate that the separator is in this case a comma) and the dtype np.int8 (so that its representation in memory is as small as possible, 8 bits instead of 32/64 bits for a float).
import numpy as np

XY_tv = np.loadtxt("data/optdigits.train", delimiter=",", dtype=np.int8)
print(XY_tv)

X_tv = XY_tv[:, :64]
Y_tv = XY_tv[:, 64]

print(X_tv.shape)
print(Y_tv.shape)
print(X_tv[0, :])
print(X_tv[0, :].reshape(8, 8))
print(Y_tv[0])
[[ 0 1 6 ... 0 0 0] [ 0 0 10 ... 0 0 0] [ 0 0 8 ... 0 0 7] ... [ 0 0 3 ... 0 0 6] [ 0 0 6 ... 5 0 6] [ 0 0 2 ... 0 0 7]] (3823, 64) (3823,) [ 0 1 6 15 12 1 0 0 0 7 16 6 6 10 0 0 0 8 16 2 0 11 2 0 0 5 16 3 0 5 7 0 0 7 13 3 0 8 7 0 0 4 12 0 1 13 5 0 0 0 14 9 15 9 0 0 0 0 6 14 7 1 0 0] [[ 0 1 6 15 12 1 0 0] [ 0 7 16 6 6 10 0 0] [ 0 8 16 2 0 11 2 0] [ 0 5 16 3 0 5 7 0] [ 0 7 13 3 0 8 7 0] [ 0 4 12 0 1 13 5 0] [ 0 0 14 9 15 9 0 0] [ 0 0 6 14 7 1 0 0]] 0
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Visualizing the data
To visualize the data we use pyplot's imshow method. The array needs to be converted from shape (1, 64) to (8, 8) so that the image is square and the digit can be made out. We also overlay the label corresponding to the digit, using the text method. We do this for the first 25 examples in the file.
from matplotlib import pyplot as plt

# We'll plot the first nx*ny examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12, 12))

for i in range(nx):
    for j in range(ny):
        index = j + ny * i
        data = X_tv[index, :].reshape(8, 8)
        label = Y_tv[index]
        ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
        ax[i][j].text(7, 0, str(int(label)), horizontalalignment='center',
                      verticalalignment='center', fontsize=10, color='blue')
        ax[i][j].get_xaxis().set_visible(False)
        ax[i][j].get_yaxis().set_visible(False)

plt.show()
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Trivial training
For classification we will use the K Nearest Neighbours algorithm.

We train the model with 1 neighbor and check the prediction error on the training set.
from sklearn.neighbors import KNeighborsClassifier

k = 1
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_tv, Y_tv)

Y_pred = kNN.predict(X_tv)
n_errors = sum(Y_pred != Y_tv)
print("There are %d errors out of a total of %d training examples" % (n_errors, len(Y_tv)))
There are 0 errors out of a total of 3823 training examples
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
The best prediction for a point is the point itself! But this would generalize catastrophically. It is essential to **train** on one dataset and then test how it generalizes/performs on a **completely new** set.

5- Classification
Choosing the right number of neighbors
Looking for the most appropriate value of k

From the analysis above, we see the need to:
1. Compute the error on a set different from the one used for training.
2. Find the best number of neighbors for the algorithm.

(This will take a while.)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

template = "k={0:,d}: {1:.1f} +- {2:.1f} classification errors out of a total of {3:,d} points"

# Fitting the model
mean_error_for_k = []
std_error_for_k = []
k_range = range(1, 8)

for k in k_range:
    errors_k = []
    for i in range(10):
        kNN = KNeighborsClassifier(n_neighbors=k)
        X_train, X_valid, Y_train, Y_valid = train_test_split(X_tv, Y_tv, train_size=0.75)
        kNN.fit(X_train, Y_train)

        # Predicting values
        Y_valid_pred = kNN.predict(X_valid)

        # Count the errors
        n_errors = sum(Y_valid != Y_valid_pred)

        # Add them to the vector, as a percentage
        errors_k.append(100. * n_errors / len(Y_valid))

    errors = np.array(errors_k)
    print(template.format(k, errors.mean(), errors.std(), len(Y_valid)))
    mean_error_for_k.append(errors.mean())
    std_error_for_k.append(errors.std())
k=1: 1.6 +- 0.3 classification errors out of a total of 956 points k=2: 2.3 +- 0.5 classification errors out of a total of 956 points k=3: 1.6 +- 0.3 classification errors out of a total of 956 points k=4: 2.0 +- 0.4 classification errors out of a total of 956 points k=5: 1.7 +- 0.2 classification errors out of a total of 956 points k=6: 1.8 +- 0.3 classification errors out of a total of 956 points k=7: 1.7 +- 0.4 classification errors out of a total of 956 points
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
We can visualize the above data with the following code, which requires `std_error_for_k` and `mean_error_for_k` to have been properly defined.
mean = np.array(mean_error_for_k)
std = np.array(std_error_for_k)

plt.figure(figsize=(12, 8))
plt.plot(k_range, mean - std, "k:")
plt.plot(k_range, mean, "r.-")
plt.plot(k_range, mean + std, "k:")
plt.xlabel("Number of neighbors k")
plt.ylabel("Classification error")
plt.show()
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Training the full model
Based on the above, we fix the number of neighbors to $k=3$ and train the model with all the data.
from sklearn.neighbors import KNeighborsClassifier

k = 3
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_tv, Y_tv)
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Prediction on the testing dataset
Now that the kNN model has been fully trained, we compute the prediction error on a completely new dataset: the testing set.
# Loading the file data/optdigits.test
XY_test = np.loadtxt("data/optdigits.test", delimiter=",")
X_test = XY_test[:, :64]
Y_test = XY_test[:, 64]

# Label prediction
Y_pred = kNN.predict(X_test)
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Since we have the true labels for the test set, we can visualize which digits have been correctly labeled.
from matplotlib import pyplot as plt

# Show the correctly classified examples
mask = (Y_pred == Y_test)
X_aux = X_test[mask]
Y_aux_true = Y_test[mask]
Y_aux_pred = Y_pred[mask]

# We'll plot the first nx*ny correctly classified examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12, 12))

for i in range(nx):
    for j in range(ny):
        index = j + ny * i
        data = X_aux[index, :].reshape(8, 8)
        label_pred = str(int(Y_aux_pred[index]))
        label_true = str(int(Y_aux_true[index]))
        ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
        ax[i][j].text(0, 0, label_pred, horizontalalignment='center',
                      verticalalignment='center', fontsize=10, color='green')
        ax[i][j].text(7, 0, label_true, horizontalalignment='center',
                      verticalalignment='center', fontsize=10, color='blue')
        ax[i][j].get_xaxis().set_visible(False)
        ax[i][j].get_yaxis().set_visible(False)

plt.show()
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Visualizing incorrect labels
More interesting than the previous plot are the cases where the digits have been incorrectly labeled.
from matplotlib import pyplot as plt

# Show the misclassified examples
mask = (Y_pred != Y_test)
X_aux = X_test[mask]
Y_aux_true = Y_test[mask]
Y_aux_pred = Y_pred[mask]

# We'll plot the first nx*ny misclassified examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12, 12))

for i in range(nx):
    for j in range(ny):
        index = j + ny * i
        data = X_aux[index, :].reshape(8, 8)
        label_pred = str(int(Y_aux_pred[index]))
        label_true = str(int(Y_aux_true[index]))
        ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
        ax[i][j].text(0, 0, label_pred, horizontalalignment='center',
                      verticalalignment='center', fontsize=10, color='red')
        ax[i][j].text(7, 0, label_true, horizontalalignment='center',
                      verticalalignment='center', fontsize=10, color='blue')
        ax[i][j].get_xaxis().set_visible(False)
        ax[i][j].get_yaxis().set_visible(False)

plt.show()
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Error analysis
After visually exploring the results, we want the model's actual prediction error.

Are some digits easier or harder to classify than others?
# Overall error
mask = (Y_pred != Y_test)
error_prediccion = 100. * sum(mask) / len(mask)
print("Total prediction error of {0:.1f} %".format(error_prediccion))

# Per-digit error
for digito in range(0, 10):
    mask_digito = Y_test == digito
    Y_test_digito = Y_test[mask_digito]
    Y_pred_digito = Y_pred[mask_digito]
    error_prediccion = 100. * sum(Y_pred_digito != Y_test_digito) / len(Y_pred_digito)
    print("Prediction error for digit {0:d} of {1:.1f} %".format(digito, error_prediccion))
Total prediction error of 2.2 % Prediction error for digit 0 of 0.0 % Prediction error for digit 1 of 1.1 % Prediction error for digit 2 of 2.3 % Prediction error for digit 3 of 1.1 % Prediction error for digit 4 of 1.7 % Prediction error for digit 5 of 1.6 % Prediction error for digit 6 of 0.0 % Prediction error for digit 7 of 3.9 % Prediction error for digit 8 of 6.9 % Prediction error for digit 9 of 3.3 %
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
5- Classification
Error analysis (cont.)
The following code shows the classification errors, letting us check which digits get confused with each other.
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(Y_test, Y_pred)
print(cm)


# As in http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.jet):
    plt.figure(figsize=(10, 10))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(10)
    plt.xticks(tick_marks, tick_marks)
    plt.yticks(tick_marks, tick_marks)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()
    return None


# Plot the raw confusion matrix
plot_confusion_matrix(cm)

# Normalize the confusion matrix by row (i.e. by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
_____no_output_____
MIT
meetup.ipynb
sebastiandres/talk_2016_04_python_meetup_sklearn
Table of Contents
1  Download and Clean Data
2  Making Recommendations
2.1  BERT
2.2  Doc2vec
2.3  LDA
2.4  TFIDF

**rec_books**

Downloads an English Wikipedia dump and parses it for all available books. All available models are then run to compare recommendation efficacy.

If using this notebook in [Google Colab](https://colab.research.google.com/github/andrewtavis/wikirec/blob/main/examples/rec_books.ipynb), you can activate GPUs by following `Edit > Notebook settings > Hardware accelerator` and selecting `GPU`.
# pip install wikirec -U
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
The following gensim update might be necessary in Google Colab, as the default version there is quite old.
# pip install gensim -U
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
In Colab you'll also need to download nltk's names data.
# import nltk
# nltk.download("names")

import os
import json
import pickle

import matplotlib.pyplot as plt
import seaborn as sns

sns.set(style="darkgrid")
sns.set(rc={"figure.figsize": (15, 5)})

from wikirec import data_utils, model, utils

from IPython.core.display import display, HTML

display(HTML("<style>.container { width:99% !important; }</style>"))
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
Download and Clean Data
files = data_utils.download_wiki(
    language="en", target_dir="./enwiki_dump", file_limit=-1, dump_id=False
)
len(files)

topic = "books"

data_utils.parse_to_ndjson(
    topics=topic,
    output_path="./enwiki_books.ndjson",
    input_dir="./enwiki_dump",
    partitions_dir="./enwiki_book_partitions",
    limit=None,
    delete_parsed_files=True,
    multicore=True,
    verbose=True,
)

with open("./enwiki_books.ndjson", "r") as fin:
    books = [json.loads(l) for l in fin]

print(f"Found a total of {len(books)} books.")

titles = [m[0] for m in books]
texts = [m[1] for m in books]

if os.path.isfile("./book_corpus_idxs.pkl"):
    print("Loading book corpus and selected indexes")
    with open("./book_corpus_idxs.pkl", "rb") as f:
        text_corpus, selected_idxs = pickle.load(f)
        selected_titles = [titles[i] for i in selected_idxs]

else:
    print("Creating book corpus and selected indexes")
    text_corpus, selected_idxs = data_utils.clean(
        texts=texts,
        language="en",
        min_token_freq=5,  # 0 for Bert
        min_token_len=3,  # 0 for Bert
        min_tokens=50,
        max_token_index=-1,
        min_ngram_count=3,
        remove_stopwords=True,  # False for Bert
        ignore_words=None,
        remove_names=True,
        sample_size=1,
        verbose=True,
    )
    selected_titles = [titles[i] for i in selected_idxs]
    with open("./book_corpus_idxs.pkl", "wb") as f:
        print("Pickling book corpus and selected indexes")
        pickle.dump([text_corpus, selected_idxs], f, protocol=4)
Loading book corpus and selected indexes
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
Making Recommendations
single_input_0 = "Harry Potter and the Philosopher's Stone"
single_input_1 = "The Hobbit"
multiple_inputs = ["Harry Potter and the Philosopher's Stone", "The Hobbit"]


def load_or_create_sim_matrix(
    method,
    corpus,
    metric,
    topic,
    path="./",
    bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
    **kwargs,
):
    """
    Loads or creates a similarity matrix to deliver recommendations

    NOTE: the .pkl files made are 5-10GB or more in size
    """
    if os.path.isfile(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl"):
        print(f"Loading {method} {topic} {metric} similarity matrix")
        with open(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl", "rb") as f:
            sim_matrix = pickle.load(f)

    else:
        print(f"Creating {method} {topic} {metric} similarity matrix")
        embeddings = model.gen_embeddings(
            method=method,
            corpus=corpus,
            bert_st_model=bert_st_model,
            **kwargs,
        )
        sim_matrix = model.gen_sim_matrix(
            method=method,
            metric=metric,
            embeddings=embeddings,
        )

        with open(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl", "wb") as f:
            print(f"Pickling {method} {topic} {metric} similarity matrix")
            pickle.dump(sim_matrix, f, protocol=4)

    return sim_matrix
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
BERT
# Remove n-grams for BERT training
corpus_no_ngrams = [
    " ".join([t for t in text.split(" ") if "_" not in t]) for text in text_corpus
]

# We can pass kwargs for sentence_transformers.SentenceTransformer.encode
bert_sim_matrix = load_or_create_sim_matrix(
    method="bert",
    corpus=corpus_no_ngrams,
    metric="cosine",  # euclidean
    topic=topic,
    path="./",
    bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
    show_progress_bar=True,
    batch_size=32,
)

model.recommend(
    inputs=single_input_0,
    titles=selected_titles,
    sim_matrix=bert_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=single_input_1,
    titles=selected_titles,
    sim_matrix=bert_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=multiple_inputs,
    titles=selected_titles,
    sim_matrix=bert_sim_matrix,
    n=10,
    metric="cosine",
)
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
Doc2vec
# We can pass kwargs for gensim.models.doc2vec.Doc2Vec
doc2vec_sim_matrix = load_or_create_sim_matrix(
    method="doc2vec",
    corpus=text_corpus,
    metric="cosine",  # euclidean
    topic=topic,
    path="./",
    vector_size=100,
    epochs=10,
    alpha=0.025,
)

model.recommend(
    inputs=single_input_0,
    titles=selected_titles,
    sim_matrix=doc2vec_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=single_input_1,
    titles=selected_titles,
    sim_matrix=doc2vec_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=multiple_inputs,
    titles=selected_titles,
    sim_matrix=doc2vec_sim_matrix,
    n=10,
    metric="cosine",
)
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
LDA
topic_nums_to_compare = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# We can pass kwargs for gensim.models.ldamulticore.LdaMulticore
utils.graph_lda_topic_evals(
    corpus=text_corpus,
    num_topic_words=10,
    topic_nums_to_compare=topic_nums_to_compare,
    metrics=True,
    verbose=True,
)
plt.show()

# We can pass kwargs for gensim.models.ldamulticore.LdaMulticore
lda_sim_matrix = load_or_create_sim_matrix(
    method="lda",
    corpus=text_corpus,
    metric="cosine",  # euclidean not an option at this time
    topic=topic,
    path="./",
    num_topics=90,
    passes=10,
    decay=0.5,
)

model.recommend(
    inputs=single_input_0,
    titles=selected_titles,
    sim_matrix=lda_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=single_input_1,
    titles=selected_titles,
    sim_matrix=lda_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=multiple_inputs,
    titles=selected_titles,
    sim_matrix=lda_sim_matrix,
    n=10,
    metric="cosine",
)
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
TFIDF
# We can pass kwargs for sklearn.feature_extraction.text.TfidfVectorizer
tfidf_sim_matrix = load_or_create_sim_matrix(
    method="tfidf",
    corpus=text_corpus,
    metric="cosine",  # euclidean
    topic=topic,
    path="./",
    max_features=None,
    norm='l2',
)

model.recommend(
    inputs=single_input_0,
    titles=selected_titles,
    sim_matrix=tfidf_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=single_input_1,
    titles=selected_titles,
    sim_matrix=tfidf_sim_matrix,
    n=10,
    metric="cosine",
)

model.recommend(
    inputs=multiple_inputs,
    titles=selected_titles,
    sim_matrix=tfidf_sim_matrix,
    n=10,
    metric="cosine",
)
_____no_output_____
BSD-3-Clause
examples/rec_books.ipynb
bizzyvinci/wikirec
CI coverage, length and bias

For event-related design.
# Directories of the data for different scenarios
DATAwd <- list(
  'Take[8mmBox10]' = "/Volumes/2_TB_WD_Elements_10B8_Han/PhD/IBMAvsGLM/Results/Cambridge/ThirdLevel/8mm/boxcar10",
  'Take[8mmEvent2]' = "/Volumes/2_TB_WD_Elements_10B8_Han/PhD/IBMAvsGLM/Results/Cambridge/ThirdLevel/8mm/event2"
)
NUMDATAwd <- length(DATAwd)
currentWD <- 2

# Number of confidence intervals
CIs <- c('MA-weightVar','GLM-t')
NumCI <- length(CIs)

# Number of executed runs
nruns.tmp <- matrix(c(
  1, 2500,
  2, 500
), ncol = 2, byrow = TRUE)
nruns <- nruns.tmp[currentWD, 2]

# Number of subjects and studies
nsub <- 20
nstud <- 5

# Dimension of brain
DIM <- c(91, 109, 91)

# True value
trueVal <- 0

# Load in libraries
library(oro.nifti)
library(dplyr)
library(lattice)
library(grDevices)
library(ggplot2)
library(data.table)
library(gridExtra)

# Function to count the number of instances in which true value is between lower and upper CI.
indicator <- function(UPPER, LOWER, trueval){
  IND <- trueval >= LOWER & trueval <= UPPER
  IND[is.na(IND)] <- 0
  return(IND)
}

# Function to count the number of recorded values
counting <- function(UPPER, LOWER){
  count <- (!is.na(UPPER) & !is.na(LOWER))
  return(count)
}

## ###############
### Data Wrangling
############### ##

######################################################
# First we create a universal mask over all iterations
######################################################

# Set warnings off
# options(warn = -1)

# Vector to check progress
CheckProgr <- floor(seq(1, nruns, length.out = 10))

# Vector of simulations where we have a missing mask
missingMask <- c()

# Do you want to make a universal mask again?
WRITEMASK <- FALSE

if(isTRUE(WRITEMASK)){
  # Vector with all masks in it
  AllMask <- c()

  # Load in the masks
  for(i in 1:nruns){
    # Print progress
    if(i %in% CheckProgr) print(paste('LOADING MASKS. NOW AT ', (i/nruns)*100, '%', sep = ''))

    # Try reading in mask, then go to one column and convert to data frame.
    CheckMask <- try(readNIfTI(paste(DATAwd[[currentWD]], '/', i, '/mask.nii', sep = ''))[,,,1] %>%
                       matrix(., ncol = 1) %>% data.frame(), silent = TRUE)

    # If there is no mask, skip iteration
    if(class(CheckMask) == "try-error"){ missingMask <- c(missingMask, i); next }

    # Some masks are broken: if all values are zero: REPORT
    if(all(CheckMask == 0)){ print(paste("CHECK MASK AT ITERATION ", i, sep = "")); next }

    # Bind the masks of all iterations together
    AllMask <- bind_cols(AllMask, CheckMask)
    rm(CheckMask)
  }

  # Take product to have universal mask
  UnivMask <- apply(AllMask, 1, prod)

  # Better write this to folder
  niftiimage <- nifti(img = array(UnivMask, dim = DIM), dim = DIM)
  writeNIfTI(niftiimage, filename = paste(DATAwd[[currentWD]], '/universalMask', sep = ''), gzipped = FALSE)
}
if(isTRUE(!WRITEMASK)){
  # Read in mask
  UnivMask <- readNIfTI(paste(DATAwd[[currentWD]], '/universalMask.nii', sep = ''))[,,] %>% matrix(., ncol = 1)
}

# Load the naming structure of the data
load(paste(paste(DATAwd[['Take[8mmBox10]']], '/1/ObjectsRestMAvsGLM_1.RData', sep = '')))
objects <- names(ObjectsRestMAvsGLM)
rm(ObjectsRestMAvsGLM)
OBJ.ID <- c(rep(objects[!objects %in% c("STHEDGE","STWEIGHTS")], each = prod(DIM)),
            rep(c("STHEDGE","STWEIGHTS"), each = c(prod(DIM)*nstud)))
objects.CI <- objects[grepl(c('upper'), objects) | grepl(c('lower'), objects)]

# Pre-define the CI coverage and length vectors in which we sum the values.
# After running nruns, divide by amount of obtained runs.
# For bias, we work with VAR(X) = E(X**2) - E(X)**2 and a vector in which we sum the bias.
# Hence, we need to sum X**2 and X in a separate vector.
summed.coverage.IBMA <- summed.coverage.GLM <-
  summed.length.IBMA <- summed.length.GLM <-
  summed.X.IBMA <- summed.X.GLM <-
  summed.X2.IBMA <- summed.X2.GLM <- array(0, dim = c(sum(UnivMask == 1), 1))

# Keeping count of amount of values
counterMA <- counterGLM <- 0

# Load in the data
t1 <- Sys.time()
for(i in 1:nruns){
  if(i %in% CheckProgr) print(paste('PROCESSING. NOW AT ', (i/nruns)*100, '%', sep = ''))

  # CI coverage: loop over the two procedures
  for(p in 1:2){
    objUP  <- objects.CI[grepl(c('upper'), objects.CI)][p] %>% gsub(".", "_", ., fixed = TRUE)
    objLOW <- objects.CI[grepl(c('lower'), objects.CI)][p] %>% gsub(".", "_", ., fixed = TRUE)

    UP <- try(fread(file = paste(DATAwd[[currentWD]], '/', i, '/', objUP, '.txt', sep = ''),
                    header = FALSE) %>% filter(., UnivMask == 1), silent = TRUE)
    if(class(UP) == "try-error"){ print(paste('Missing data in iteration ', i, sep = '')); next }
    LOW <- fread(file = paste(DATAwd[[currentWD]], '/', i, '/', objLOW, '.txt', sep = ''),
                 header = FALSE) %>% filter(., UnivMask == 1)

    if(grepl('MA', x = objUP)){
      # CI coverage: add when true value in CI
      summed.coverage.IBMA[,1] <- summed.coverage.IBMA[,1] + indicator(UPPER = UP, LOWER = LOW, trueval = 0)
      # CI length: sum the length
      summed.length.IBMA[,1] <- summed.length.IBMA[,1] + as.matrix(UP - LOW)
      # Add one to the count (if data is available)
      counterMA <- counterMA + counting(UPPER = UP, LOWER = LOW)
    }else{
      # GLM procedure: CI coverage
      summed.coverage.GLM[,1] <- summed.coverage.GLM[,1] + indicator(UPPER = UP, LOWER = LOW, trueval = 0)
      # CI length: sum the length
      summed.length.GLM[,1] <- summed.length.GLM[,1] + as.matrix(UP - LOW)
      # Count
      counterGLM <- counterGLM + counting(UPPER = UP, LOWER = LOW)
    }
    rm(objUP, objLOW, UP, LOW)
  }

  # Standardized bias: read in weighted average / cope
  WAVG <- fread(file = paste(DATAwd[[currentWD]], '/', i, '/MA_WeightedAvg.txt', sep = ''),
                header = FALSE) %>% filter(., UnivMask == 1)
  GLMCOPE <- fread(file = paste(DATAwd[[currentWD]], '/', i, '/GLM_COPE', '.txt', sep = ''),
                   header = FALSE) %>% filter(., UnivMask == 1)

  # Sum X
  summed.X.IBMA[,1] <- summed.X.IBMA[,1] + as.matrix(WAVG)
  summed.X.GLM[,1]  <- summed.X.GLM[,1]  + as.matrix(GLMCOPE)

  # Sum X**2
  summed.X2.IBMA[,1] <- summed.X2.IBMA[,1] + as.matrix(WAVG ** 2)
  summed.X2.GLM[,1]  <- summed.X2.GLM[,1]  + as.matrix(GLMCOPE ** 2)
}
Sys.time() - t1

# Calculate the average (over nsim) CI coverage, length and bias
Coverage.IBMA <- summed.coverage.IBMA/counterMA
Coverage.GLM  <- summed.coverage.GLM/counterGLM
Length.IBMA <- summed.length.IBMA/counterMA
Length.GLM  <- summed.length.GLM/counterGLM

# Formula: Var(X) = E(X**2) - [E(X)]**2
#   E(X**2) = sum(X**2) / n
#   E(X)    = sum(X) / n
#   \hat{var(X)} = var(X) * (N / N-1)
#   \hat{SD} = sqrt(\hat{var(X)})
samplingSD.IBMA <- sqrt(((summed.X2.IBMA/(counterMA)) - ((summed.X.IBMA/counterMA)**2)) * (counterMA / (counterMA - 1)))
samplingSD.GLM  <- sqrt(((summed.X2.GLM/(counterGLM)) - ((summed.X.GLM/counterGLM)**2)) * (counterGLM / (counterGLM - 1)))

# Standardized bias: true beta = 0
Bias.IBMA <- ((summed.X.IBMA / counterMA) - 0) / samplingSD.IBMA
Bias.GLM  <- ((summed.X.GLM / counterGLM) - 0) / samplingSD.GLM

# Heatmap of the coverages
emptBrainIBMA <- emptBrainGLM <- array(NA, dim = prod(DIM))
emptBrainIBMA[UnivMask == 1] <- c(summed.coverage.IBMA/counterMA)
emptBrainGLM[UnivMask == 1]  <- c(summed.coverage.GLM/counterGLM)

LevelPlotMACoV <- levelplot(array(emptBrainIBMA, dim = DIM)[,,40], col.regions = topo.colors,
                            xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                            xlab = 'x', ylab = 'y', main = 'CI coverage meta-analysis')
LevelPlotGLMCoV <- levelplot(array(emptBrainGLM, dim = DIM)[,,40], col.regions = topo.colors,
                             xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                             xlab = 'x', ylab = 'y', main = 'CI coverage GLM')

# Bias
emptBrainIBMA <- emptBrainGLM <- array(NA, dim = prod(DIM))
emptBrainIBMA[UnivMask == 1] <- Bias.IBMA
emptBrainGLM[UnivMask == 1]  <- Bias.GLM

LevelPlotMABias <- levelplot(array(emptBrainIBMA, dim = DIM)[,,40], col.regions = topo.colors,
                             xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                             xlab = 'x', ylab = 'y', main = 'Standardized bias Meta-Analysis')
LevelPlotGLMBias <- levelplot(array(emptBrainGLM, dim = DIM)[,,40], col.regions = topo.colors,
                              xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                              xlab = 'x', ylab = 'y', main = 'Standardized bias GLM')
DifferenceBias <- levelplot(array(emptBrainIBMA - emptBrainGLM, dim = DIM)[,,c(36:46)], col.regions = topo.colors,
                            xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                            xlab = 'x', ylab = 'y', main = 'Bias MA - GLM')

# CI length
emptBrainIBMA <- emptBrainGLM <- array(NA, dim = prod(DIM))
emptBrainIBMA[UnivMask == 1] <- Length.IBMA
emptBrainGLM[UnivMask == 1]  <- Length.GLM

LevelPlotMACL <- levelplot(array(emptBrainIBMA, dim = DIM)[,,40], col.regions = topo.colors,
                           xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                           xlab = 'x', ylab = 'y', main = 'CI length Meta-Analysis')
LevelPlotGLMCL <- levelplot(array(emptBrainGLM, dim = DIM)[,,40], col.regions = topo.colors,
                            xlim = c(0, DIM[1]), ylim = c(0, DIM[2]),
                            xlab = 'x', ylab = 'y', main = 'CI length GLM')

grid.arrange(LevelPlotMACoV, LevelPlotGLMCoV, ncol = 2)
grid.arrange(LevelPlotMABias, LevelPlotGLMBias, ncol = 2)
grid.arrange(LevelPlotMACL, LevelPlotGLMCL, ncol = 2)
_____no_output_____
MIT
3_Reports/12.03_22_17/Report_03_22_17.ipynb
NeuroStat/IBMAvsGLM
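The streaming computation above relies on the identity Var(X) = E[X²] − (E[X])², with an n/(n−1) correction to obtain the unbiased sample variance, so that only sum(X) and sum(X²) need to be accumulated per voxel across runs. A quick numerical check of that identity (a standalone Python sketch, independent of the R pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=2.0, size=10_000)
n = x.size

# Accumulate only sum(X) and sum(X**2), as the R loop does per voxel
sum_x = x.sum()
sum_x2 = (x ** 2).sum()

# Var(X) = E[X**2] - E[X]**2, then multiply by n/(n-1) for the unbiased estimate
var_hat = (sum_x2 / n - (sum_x / n) ** 2) * (n / (n - 1))
sd_hat = np.sqrt(var_hat)

# Matches the direct two-pass estimate
assert np.isclose(var_hat, x.var(ddof=1))
print(sd_hat)
```

This is why the R script only stores `summed.X` and `summed.X2`: the full distribution of estimates never needs to be kept in memory.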
Assignment 02: Evaluate the Diabetes Dataset

*The comments/sections provided are your cues to perform the assignment. You don't need to limit yourself to the number of rows/cells provided. You can add additional rows in each section to add more lines of code.*

*If at any point in time you need help on solving this assignment, view our demo video to understand the different steps of the code.*

***Happy coding!***

* * *

1: Import the dataset
# Import the required libraries
import numpy as np
import pandas as pd

# Import the diabetes dataset
data = pd.read_csv("pima-indians-diabetes.data", header=None)
_____no_output_____
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
2: Analyze the dataset
# View the first five observations of the dataset
data.head()
_____no_output_____
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
3: Find the features of the dataset
# Use the .NAMES file to view and set the features of the dataset
feature_name = np.array(["Pregnant", "Glucose", "BP", "Skin", "Insulin", "BMI", "Pedigree", "Age", "label"])
df_data = pd.read_csv("pima-indians-diabetes.data", names=feature_name)
df_data

# View the number of observations and features of the dataset
df_data.shape
_____no_output_____
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
4: Find the response of the dataset
# Create the feature object
X_feature = df_data[["Pregnant", "Glucose", "BP", "Skin", "Insulin", "BMI", "Pedigree", "Age"]]
X_feature

# Create the response object
y_target = df_data[["label"]]
y_target

# View the shape of the feature object
X_feature.shape

# View the shape of the target object
y_target.shape
_____no_output_____
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
5: Use training and testing datasets to train the model
# Split the dataset to test and train the model
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_feature, y_target, test_size=0.25, random_state=20)
_____no_output_____
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
6: Create a model to predict the diabetes outcome
# Create a logistic regression model using the training set
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()
logreg.fit(X_train, y_train)

# Make predictions using the testing set
Prediction = logreg.predict(X_test)
print(Prediction[10:20])
print(y_test[10:20])
[1 1 0 0 1 0 0 0 0 1] label 702 1 222 0 20 0 631 0 147 0 403 0 526 0 422 0 150 0 7 0
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
7: Check the accuracy of the model
# Evaluate the accuracy of your model
from sklearn import metrics

performance = metrics.accuracy_score(y_test, Prediction)
performance

# Print the first 30 actual and predicted responses
print(f"Predicted Value - {Prediction[0:30]}")
print(f"Actual Value - {y_test.values[0:30]}")
Predicted Value - [0 1 0 0 0 0 0 0 1 0 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 1 0] Actual Value - [[1] [1] [0] [0] [0] [1] [1] [0] [0] [0] [1] [0] [0] [0] [0] [0] [0] [0] [0] [0] [0] [1] [0] [0] [1] [0] [0] [0] [1] [1]]
Apache-2.0
ML_Assignment 02_Diabetes Prediction/Diabetes_prediction.ipynb
parth111999/Data-Science-Assignment
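Accuracy alone can be misleading on an imbalanced dataset like this one; a confusion matrix separates false positives from false negatives. A minimal sketch using the same `sklearn.metrics` module (the tiny `y_true`/`y_pred` lists below are stand-ins for the `y_test` and `Prediction` objects created above, so the snippet runs on its own):

```python
from sklearn import metrics

# Stand-ins for y_test / Prediction from the cells above
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]

# Rows are actual classes (0, 1); columns are predicted classes (0, 1)
cm = metrics.confusion_matrix(y_true, y_pred)
print(cm)  # [[4 1]
           #  [1 2]]
print(metrics.classification_report(y_true, y_pred))
```

With the real `y_test` and `Prediction`, the same two calls report how many diabetic cases the model misses, which accuracy hides.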
Mixture Density Networks with PyTorch

Related posts:
JavaScript [implementation](http://blog.otoro.net/2015/06/14/mixture-density-networks/).
TensorFlow [implementation](http://blog.otoro.net/2015/11/24/mixture-density-networks-with-tensorflow/).
import matplotlib.pyplot as plt
import numpy as np
import torch
import math
from torch.autograd import Variable
import torch.nn as nn
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
Simple Data Fitting

Before we talk about MDNs, we try to perform some simple data fitting using PyTorch to make sure everything works. To get started, let's try to quickly build a neural network to fit some fake data. As neural nets of even one hidden layer can be universal function approximators, we can see if we can train a simple neural network to fit noisy sinusoidal data, like this ($\epsilon$ is just standard gaussian random noise):

$y=7.0 \sin( 0.75 x) + 0.5 x + \epsilon$

After importing the libraries, we generate the sinusoidal data we will train a neural net to fit later:
NSAMPLE = 1000

x_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE, 1)))
y_data = np.float32(np.sin(0.75*x_data)*7.0 + x_data*0.5 + r_data*1.0)

plt.figure(figsize=(8, 8))
plot_out = plt.plot(x_data, y_data, 'ro', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
We will define this simple neural network with one hidden layer and 100 nodes:

$Y = W_{out} \max( W_{in} X + b_{in}, 0) + b_{out}$
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
# from (https://github.com/jcjohnson/pytorch-examples)
N, D_in, H, D_out = NSAMPLE, 1, 100, 1

# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
# Since NSAMPLE is not large, we train the entire dataset in one minibatch.
x = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, D_in)))
y = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, D_out)), requires_grad=False)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
We can define a loss function as the sum of square error of the output vs the data (we can add regularisation if we want).
loss_fn = torch.nn.MSELoss()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
We will also define a training loop to minimise the loss function later. We can use the RMSProp gradient descent optimisation method.
learning_rate = 0.01
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate, alpha=0.8)

for t in range(100000):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    if (t % 10000 == 0):
        print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

x_test = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
x_test = Variable(torch.from_numpy(x_test.reshape(NSAMPLE, D_in)))
y_test = model(x_test)

plt.figure(figsize=(8, 8))
plt.plot(x_data, y_data, 'ro', x_test.data.numpy(), y_test.data.numpy(), 'bo', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
We see that the neural network can fit this sinusoidal data quite well, as expected. However, this type of fitting method only works well when the function we want to approximate with the neural net is a one-to-one, or many-to-one function. Take, for example, what happens if we invert the training data:

$x=7.0 \sin( 0.75 y) + 0.5 y+ \epsilon$
temp_data = x_data
x_data = y_data
y_data = temp_data

plt.figure(figsize=(8, 8))
plot_out = plt.plot(x_data, y_data, 'ro', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
If we were to use the same method to fit this inverted data, obviously it wouldn't work well, and we would expect to see a neural network trained to fit only to the square mean of the data.
x = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, D_in)))
y = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, D_out)), requires_grad=False)

learning_rate = 0.01
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate, alpha=0.8)

for t in range(3000):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    if (t % 300 == 0):
        print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

x_test = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
x_test = Variable(torch.from_numpy(x_test.reshape(NSAMPLE, D_in)))
y_test = model(x_test)

plt.figure(figsize=(8, 8))
plt.plot(x_data, y_data, 'ro', x_test.data.numpy(), y_test.data.numpy(), 'bo', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
Our current model only predicts one output value for each input, so this approach will fail miserably. What we want is a model that has the capacity to predict a range of different output values for each input. In the next section we implement a *Mixture Density Network (MDN)* to achieve this task.

Mixture Density Networks

Mixture Density Networks, developed by Christopher Bishop in the 1990s, are an attempt to address this problem. Rather than have the network predict a single output value, the MDN predicts an entire *probability distribution* of the output, so we can sample several possible different output values for a given input.

This concept is quite powerful, and can be employed in many current areas of machine learning research. It also allows us to calculate some sort of confidence factor in the predictions that the network is making.

The inverse sinusoidal data we chose is not just a toy problem, as there are applications in the field of robotics, for example, where we want to determine which angle we need to move the robot arm to achieve a target location. MDNs are also used to model handwriting, where the next stroke is drawn from a probability distribution of multiple possibilities, rather than sticking to one prediction.

Bishop's implementation of MDNs will predict a class of probability distributions called Mixture Gaussian distributions, where the output value is modelled as a sum of many gaussian random values, each with different means and standard deviations. So for each input $x$, we will predict a probability distribution function $P(Y = y | X = x)$ that is approximated by a weighted sum of different gaussian distributions.
$P(Y = y | X = x) = \sum_{k=0}^{K-1} \Pi_{k}(x) \phi(y, \mu_{k}(x), \sigma_{k}(x)), \sum_{k=0}^{K-1} \Pi_{k}(x) = 1$

Our network will therefore predict the *parameters* of the pdf, in our case the set of $\mu$, $\sigma$, and $\Pi$ values for each input $x$. Rather than predict $y$ directly, we will need to sample from our distribution to obtain $y$. This will allow us to have multiple possible values of $y$ for a given $x$.

Each of the parameters $\Pi_{k}(x), \mu_{k}(x), \sigma_{k}(x)$ of the distribution will be determined by the neural network, as a function of the input $x$. There is a restriction that the sum of $\Pi_{k}(x)$ adds up to one, to ensure that the pdf integrates to 1. In addition, $\sigma_{k}(x)$ must be strictly positive.

In our implementation, we will use a neural network of one hidden layer with 100 nodes, and also generate 20 mixtures, hence there will be 60 actual outputs of our neural network for a single input. Our definition will be split into 2 parts:

$Z = W_{out} \max( W_{in} X + b_{in}, 0) + b_{out}$

In the first part, $Z$ is a vector of 60 values that will then be split up into three equal parts, $[Z_{\Pi}, Z_{\sigma}, Z_{\mu}] = Z$, where each of $Z_{\Pi}$, $Z_{\sigma}$, $Z_{\mu}$ are vectors of length 20. In this PyTorch implementation, unlike the TF version, we will implement this operation with 3 separate Linear layers, rather than splitting a large $Z$, for clarity:

$Z_{\Pi} = W_{\Pi} \max( W_{in} X + b_{in}, 0) + b_{\Pi}$

$Z_{\sigma} = W_{\sigma} \max( W_{in} X + b_{in}, 0) + b_{\sigma}$

$Z_{\mu} = W_{\mu} \max( W_{in} X + b_{in}, 0) + b_{\mu}$

In the second part, the parameters of the pdf will be defined as below to satisfy the earlier conditions:

$\Pi = \frac{\exp(Z_{\Pi})}{\sum_{i=0}^{20} \exp(Z_{\Pi, i})}, \\ \sigma = \exp(Z_{\sigma}), \\ \mu = Z_{\mu}$

$\Pi_{k}$ are put into a *softmax* operator to ensure that the sum adds to one, and that each mixture probability is positive.
Each $\sigma_{k}$ will also be positive due to the exponential operator.

Below is the PyTorch implementation of the MDN network:
NHIDDEN = 100  # hidden units
KMIX = 20  # number of mixtures

class MDN(nn.Module):
    def __init__(self, hidden_size, num_mixtures):
        super(MDN, self).__init__()
        self.fc_in = nn.Linear(1, hidden_size)
        self.relu = nn.ReLU()
        self.pi_out = torch.nn.Sequential(
            nn.Linear(hidden_size, num_mixtures),
            nn.Softmax()
        )
        self.sigma_out = nn.Linear(hidden_size, num_mixtures)
        self.mu_out = nn.Linear(hidden_size, num_mixtures)

    def forward(self, x):
        out = self.fc_in(x)
        out = self.relu(out)
        out_pi = self.pi_out(out)
        out_sigma = torch.exp(self.sigma_out(out))
        out_mu = self.mu_out(out)
        return (out_pi, out_sigma, out_mu)
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
Let's define the inverted data we want to train our MDN to predict later. As this is a more involved prediction task, I used a higher number of samples compared to the simple data fitting task earlier.
NSAMPLE = 2500

y_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE, 1)))  # random noise
x_data = np.float32(np.sin(0.75*y_data)*7.0 + y_data*0.5 + r_data*1.0)

x_train = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, 1)))
y_train = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, 1)), requires_grad=False)

plt.figure(figsize=(8, 8))
plt.plot(x_train.data.numpy(), y_train.data.numpy(), 'ro', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
We cannot simply use the min square error L2 loss function in this task, as the output is an entire description of the probability distribution. A more suitable loss function is to minimise the logarithm of the likelihood of the distribution vs the training data:

$CostFunction(y | x) = -\log[ \sum_{k}^K \Pi_{k}(x) \phi(y, \mu(x), \sigma(x)) ]$

So for every $(x,y)$ point in the training data set, we can compute a cost function based on the predicted distribution versus the actual points, and then attempt to minimise the sum of all the costs combined. To those who are familiar with logistic regression and cross-entropy minimisation of softmax, this is a similar approach, but with non-discretised states.

We have to implement this cost function ourselves:
oneDivSqrtTwoPI = 1.0 / math.sqrt(2.0*math.pi)  # normalisation factor for gaussian

def gaussian_distribution(y, mu, sigma):
    # broadcast subtraction with mean and normalisation to sigma
    result = (y.expand_as(mu) - mu) * torch.reciprocal(sigma)
    result = -0.5 * (result * result)
    return (torch.exp(result) * torch.reciprocal(sigma)) * oneDivSqrtTwoPI

def mdn_loss_function(out_pi, out_sigma, out_mu, y):
    epsilon = 1e-3
    result = gaussian_distribution(y, out_mu, out_sigma) * out_pi
    result = torch.sum(result, dim=1)
    result = -torch.log(epsilon + result)
    return torch.mean(result)
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
Let's define our model, and use the Adam optimizer to train our model below:
model = MDN(hidden_size=NHIDDEN, num_mixtures=KMIX)

learning_rate = 0.00001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for t in range(20000):
    (out_pi, out_sigma, out_mu) = model(x_train)
    loss = mdn_loss_function(out_pi, out_sigma, out_mu, y_train)
    if (t % 1000 == 0):
        print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
0 4.988687992095947 1000 3.4866292476654053 2000 3.1824162006378174 3000 2.9246561527252197 4000 2.7802634239196777 5000 2.672682523727417 6000 2.5783588886260986 7000 2.5089898109436035 8000 2.4450607299804688 9000 2.398449420928955 10000 2.3576488494873047 11000 2.3166143894195557 12000 2.276536464691162 13000 2.239301919937134 14000 2.1948606967926025 15000 2.1471312046051025 16000 2.0966522693634033 17000 2.042475461959839 18000 1.9856466054916382 19000 1.9275684356689453
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
We want to use our network to generate the parameters of the pdf for us to sample from. In the code below, we will sample $M=10$ values of $y$ for every $x$ input, and compare the sampled results with the training data.
x_test_data = np.float32(np.random.uniform(-15, 15, (1, NSAMPLE))).T
x_test = Variable(torch.from_numpy(x_test_data.reshape(NSAMPLE, 1)))

(out_pi_test, out_sigma_test, out_mu_test) = model(x_test)

out_pi_test_data = out_pi_test.data.numpy()
out_sigma_test_data = out_sigma_test.data.numpy()
out_mu_test_data = out_mu_test.data.numpy()

def get_pi_idx(x, pdf):
    N = pdf.size
    accumulate = 0
    for i in range(0, N):
        accumulate += pdf[i]
        if (accumulate >= x):
            return i
    print('error with sampling ensemble')
    return -1

def generate_ensemble(M = 10):
    # for each point in X, generate M=10 ensembles
    NTEST = x_test_data.size
    result = np.random.rand(NTEST, M)  # initially random [0, 1]
    rn = np.random.randn(NTEST, M)  # normal random matrix (0.0, 1.0)
    mu = 0
    std = 0
    idx = 0
    # transforms result into random ensembles
    for j in range(0, M):
        for i in range(0, NTEST):
            idx = get_pi_idx(result[i, j], out_pi_test_data[i])
            mu = out_mu_test_data[i, idx]
            std = out_sigma_test_data[i, idx]
            result[i, j] = mu + rn[i, j]*std
    return result

y_test_data = generate_ensemble()

plt.figure(figsize=(8, 8))
plt.plot(x_test_data, y_test_data, 'b.', x_data, y_data, 'r.', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
In the above graph, we plot out the generated data we sampled from the MDN distribution, in blue. We also plot the original training data in red over the predictions. Apart from a few outliers, the distributions seem to match the data. We can also plot a graph of $\mu(x)$ as well to interpret what the neural net is actually doing:
plt.figure(figsize=(8, 8))
plt.plot(x_test_data, out_mu_test_data, 'g.', x_data, y_data, 'r.', alpha=0.3)
plt.show()
_____no_output_____
MIT
pytorch_notebooks-master/mixtures_density_network_relu_version.ipynb
boyali/pytorch-mixture_of_density_networks
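The per-point sampling scheme inside `generate_ensemble` — pick a mixture component with probability $\Pi_k$, then draw from that component's Gaussian — can be illustrated in isolation. A toy NumPy sketch with fixed, hand-picked parameters (not values produced by the trained model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hand-picked toy mixture: two components with equal weight
pi = np.array([0.5, 0.5])
mu = np.array([-3.0, 3.0])
sigma = np.array([0.5, 0.5])

# Sample a component index per draw, then sample from that component's Gaussian
idx = rng.choice(len(pi), size=10_000, p=pi)
samples = rng.normal(mu[idx], sigma[idx])

# The sample mean is close to the mixture mean sum(pi * mu) = 0,
# even though almost no individual sample lands near 0
print(samples.mean())
```

This is exactly why a single-output regression fails on the inverted data: its prediction collapses toward the mixture mean, a region where the data itself rarely lies.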
LDA (Latent Dirichlet Allocation)

In this notebook, I'll be showing you a practical example of topic modelling using LDA.

For this I'll be using the ABC news headlines dataset from Kaggle - https://www.kaggle.com/therohk/million-headlines
# Let's first read the dataset
import pandas as pd

df = pd.read_csv("abcnews-date-text.csv")

# Let's check the head of the dataframe
df.head()
_____no_output_____
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Here our main focus is the headline_text column because we will be using these headlines to extract the topics.
df1 = df[:50000].drop("publish_date", axis = 1)
_____no_output_____
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Here I am taking only 50000 records.
df1.head()

# Length of the data
len(df1)
_____no_output_____
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Preprocessing
from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(max_df=0.95, min_df=3, stop_words='english')

# Create a document term matrix
dtm = cv.fit_transform(df1[0:50000]['headline_text'])
dtm
_____no_output_____
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Let's perform LDA

***Here I'll be assuming that there are 20 topics present in this document***
from sklearn.decomposition import LatentDirichletAllocation

lda = LatentDirichletAllocation(n_components=20, random_state=79)

# This will take some time to execute
lda.fit(dtm)
topics = lda.transform(dtm)
_____no_output_____
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Let's print the 15 most common words for each of the 20 topics
for index, topic in enumerate(lda.components_):
    print(f'THE TOP 15 WORDS FOR TOPIC #{index}')
    print([cv.get_feature_names()[i] for i in topic.argsort()[-15:]])
    print('\n')
THE TOP 15 WORDS FOR TOPIC #0 ['row', 'sale', 'telstra', 'indigenous', 'bid', 'campaign', 'budget', 'tax', 'airport', 'bomb', 'community', 'blast', 'funding', 'boost', 'security'] THE TOP 15 WORDS FOR TOPIC #1 ['says', 'saddam', 'dump', 'qaeda', 'broken', 'gm', 'city', 'waste', 'israel', 'gets', 'industry', 'al', 'warns', 'hill', 'future'] THE TOP 15 WORDS FOR TOPIC #2 ['debate', 'merger', 'real', 'local', 'centre', 'stop', 'woes', 'seeks', 'force', 'new', 'air', 'plan', 'chief', 'work', 'council'] THE TOP 15 WORDS FOR TOPIC #3 ['airs', 'opposition', 'staff', 'nsw', 'support', 'east', 'rate', 'teachers', 'pay', 'gold', 'west', 'strike', 'coast', 'south', 'concerns'] THE TOP 15 WORDS FOR TOPIC #4 ['soldiers', 'british', 'bali', 'forces', 'victims', 'iraqi', 'israeli', 'case', 'search', 'attack', 'appeal', 'missing', 'iraq', 'killed', 'baghdad'] THE TOP 15 WORDS FOR TOPIC #5 ['aims', 'plant', 'children', 'downer', 'nuclear', 'begin', 'says', 'sign', 'gas', 'deal', 'urges', 'north', 'korea', 'talks', 'new'] THE TOP 15 WORDS FOR TOPIC #6 ['china', 'fears', 'post', 'plan', 'discuss', 'jobs', 'leaders', 'meet', 'meeting', 'job', 'workers', 'bush', 'sars', 'iraq', 'war'] THE TOP 15 WORDS FOR TOPIC #7 ['praises', 'coach', 'summit', 'jones', 'suicide', 'battle', 'wallabies', 'thousands', 'terrorism', 'family', 'free', 'head', 'calls', 'test', 'tour'] THE TOP 15 WORDS FOR TOPIC #8 ['firefighters', 'league', 'way', 'education', 'red', 'beattie', 'issues', 'blaze', 'adelaide', 'title', 'race', 'lead', 'action', 'continues', 'takes'] THE TOP 15 WORDS FOR TOPIC #9 ['good', 'laws', 'union', 'insurance', 'fight', 'business', 'aid', 'doctors', 'new', 'group', 'help', 'rain', 'drought', 'farmers', 'qld'] THE TOP 15 WORDS FOR TOPIC #10 ['poll', 'lose', 'virus', 'parliament', 'labor', 'leave', 'changes', 'sheep', 'howard', 'lions', 'residents', 'service', 'election', 'iraq', 'pm'] THE TOP 15 WORDS FOR TOPIC #11 ['club', 'away', 'sets', 'figures', 'title', 'farm', 'says', 'cancer', 
'hopes', 'win', 'big', 'open', 'minister', 'record', 'power'] THE TOP 15 WORDS FOR TOPIC #12 ['indian', 'arrest', 'attack', 'alleged', 'death', 'team', 'shooting', 'arrested', 'body', 'investigate', 'murder', 'man', 'trial', 'probe', 'police'] THE TOP 15 WORDS FOR TOPIC #13 ['fed', 'offer', 'regional', 'restrictions', 'wa', 'rail', 'nsw', 'act', 'rejects', 'plan', 'sa', 'vic', 'urged', 'water', 'govt'] THE TOP 15 WORDS FOR TOPIC #14 ['illegal', 'warned', 'protesters', 'threat', 'fishing', 'services', 'abuse', 'sars', 'care', 'war', 'study', 'anti', 'protest', 'inquiry', 'home'] THE TOP 15 WORDS FOR TOPIC #15 ['prices', 'film', 'long', 'week', 'dollar', 'fined', 'share', 'hits', 'company', 'makes', 'year', 'hit', 'backs', 'high', 'wins'] THE TOP 15 WORDS FOR TOPIC #16 ['perth', 'road', 'killed', 'dead', 'jailed', 'toll', 'accident', 'woman', 'injured', 'dies', 'charged', 'death', 'car', 'crash', 'man'] THE TOP 15 WORDS FOR TOPIC #17 ['old', 'black', 'win', 'market', 'aussie', 'australia', 'england', 'play', 'india', 'warning', 'victory', 'pakistan', 'final', 'world', 'cup'] THE TOP 15 WORDS FOR TOPIC #18 ['kill', 'aust', 'train', 'hears', 'indonesian', 'says', 'aceh', 'time', 'charge', 'man', 'charges', 'faces', 'troops', 'face', 'court'] THE TOP 15 WORDS FOR TOPIC #19 ['need', 'launch', 'spotlight', 'report', 'highway', 'new', 'mp', 'nats', 'target', 'plan', 'building', 'weapons', 'sought', 'highlights', 'students']
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Let's combine these topics with our original headlines
df1['Headline Topic'] = topics.argmax(axis=1)
df1.head()
_____no_output_____
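The topic assignment above is just a row-wise argmax over the document-topic matrix. A tiny sketch with made-up probabilities (not the notebook's `topics` matrix):

```python
import numpy as np

# Each row is one headline's distribution over 3 topics;
# the assigned topic is the column with the highest probability.
doc_topic = np.array([[0.1, 0.7, 0.2],
                      [0.6, 0.3, 0.1]])
assigned = doc_topic.argmax(axis=1)  # one topic id per headline
```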
MIT
B2-NLP/Ajay_NLP_TopicModelling.ipynb
Shreyansh-Gupta/Open-contributions
Dictionary Details

1. r["title"] tells you the normalized title
2. r["gender"] tells you the gender (binary for simplicity, determined from the pronouns)
3. r["start_pos"] indicates the length of the first sentence.
4. r["raw"] has the entire bio
5. The field r["bio"] contains a scrubbed version of the bio (with the person's name and obvious gender words, like she/he, removed)

Problem Statement

So the classification task is to predict r["title"] from r["raw"][r["start_pos"]:]

Example Dictionary Element
test_bio = all_bios[0]
test_bio['bio']
test_bio['raw']
_____no_output_____
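Given that layout, extracting (input text, label) pairs for the task is straightforward. A minimal sketch using a mock record (the record contents are invented; only the field names come from the description above):

```python
# Build one (text, label) training pair from a bias-bio record.
def to_example(r):
    # input: the bio after the first sentence; target: the occupation title
    return r["raw"][r["start_pos"]:], r["title"]

first_sentence = "Jane Doe is a professor."
mock_bio = {
    "title": "professor",
    "gender": "F",
    "start_pos": len(first_sentence),
    "raw": first_sentence + " She studies NLP and teaches at a university.",
}
text, label = to_example(mock_bio)
```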
MIT
Visualisation_codes.ipynb
punyajoy/biosbias
Distribution of occupation
occupation_dict = {}
for bio in all_bios:
    occupation = bio['title']
    try:
        occupation_dict[occupation] += 1
    except KeyError:
        occupation_dict[occupation] = 1

import matplotlib.pyplot as plt
import numpy as np

keys = occupation_dict.keys()
vals = occupation_dict.values()
plt.bar(keys, np.divide(list(vals), sum(vals)), label="Real distribution")
plt.ylim(0, 1)
plt.ylabel('Percentage')
plt.xlabel('Significant number')
plt.xticks(list(keys))
plt.legend(bbox_to_anchor=(1, 1), loc="upper right", borderaxespad=0.)
plt.show()

import pandas as pd
from matplotlib import pyplot as plt
import matplotlib as mpl
import seaborn as sns
%matplotlib inline

# Read in data & create total column
train_data = pd.read_csv('Data/Train.csv')
val_data = pd.read_csv('Data/Val.csv')
test_data = pd.read_csv('Data/Test.csv')
total_data = pd.concat([train_data, test_data, val_data], axis=0)

# #stacked_bar_data["total"] = stacked_bar_data.Series1 + stacked_bar_data.Series2
# #Set general plot properties
# sns.set_style("white")
# sns.set_context({"figure.figsize": (24, 10)})
# #Plot 1 - background - "total" (top) series
# sns.barplot(x = stacked_bar_data.title, y = stacked_bar_data.total, color = "red")
# #Plot 2 - overlay - "bottom" series
# bottom_plot = sns.barplot(x = stacked_bar_data.Group, y = stacked_bar_data.Series1, color = "#0000A3")
# topbar = plt.Rectangle((0,0),1,1,fc="red", edgecolor = 'none')
# bottombar = plt.Rectangle((0,0),1,1,fc='#0000A3', edgecolor = 'none')
# l = plt.legend([bottombar, topbar], ['Bottom Bar', 'Top Bar'], loc=1, ncol = 2, prop={'size':16})
# l.draw_frame(False)
# #Optional code - Make plot look nicer
# sns.despine(left=True)
# bottom_plot.set_ylabel("Y-axis label")
# bottom_plot.set_xlabel("X-axis label")
# #Set fonts to consistent 16pt size
# for item in ([bottom_plot.xaxis.label, bottom_plot.yaxis.label] +
#              bottom_plot.get_xticklabels() + bottom_plot.get_yticklabels()):
#     item.set_fontsize(16)

df = total_data.groupby(['title', 'gender'])['path'].count()
total_data['title'].unique()

list1 = []
for title in list(total_data['title'].unique()):
    try:
        list1.append((title, df[title, 'M'], df[title, 'F']))
    except KeyError:
        pass
df_to_plot = pd.DataFrame(list1, columns=['title', 'M', 'F'])

df_to_plot["total"] = df_to_plot['M'] + df_to_plot['F']
df_to_plot = df_to_plot.sort_values(['total'], ascending=False)

# Set general plot properties
sns.set_style("white")
sns.set_context({"figure.figsize": (24, 10)})

# Plot 1 - background - "total" (top) series
sns.barplot(x=df_to_plot.title, y=df_to_plot.total, color="green")

# Plot 2 - overlay - "bottom" series
bottom_plot = sns.barplot(x=df_to_plot.title, y=df_to_plot['M'], color="blue")

topbar = plt.Rectangle((0, 0), 1, 1, fc="green", edgecolor='none')
bottombar = plt.Rectangle((0, 0), 1, 1, fc='blue', edgecolor='none')
l = plt.legend([bottombar, topbar], ['Male', 'Female'], loc=1, ncol=2, prop={'size': 16})
l.draw_frame(False)

# Optional code - Make plot look nicer
sns.despine(left=True)
bottom_plot.set_ylabel("Log frequency")
plt.yscale('log')

# Set fonts to consistent size
for item in ([bottom_plot.xaxis.label, bottom_plot.yaxis.label] +
             bottom_plot.get_xticklabels() + bottom_plot.get_yticklabels()):
    item.set_fontsize(28)
    item.set_rotation('vertical')

plt.tight_layout()
bottom_plot.set_xlabel('')
plt.savefig('data_distribution.png')
_____no_output_____
MIT
Visualisation_codes.ipynb
punyajoy/biosbias
Mithun add your codes here

Model 1: Bag of words
word_dict = {}
for bio in all_bios:
    index_to_start = bio['start_pos']
    tokens = bio['raw'][index_to_start:].split()
    for tok in tokens:
        tok = tok.strip().lower()
        try:
            word_dict[tok] += 1
        except KeyError:
            word_dict[tok] = 1

len(list(word_dict))

!pip install scipy

import nltk
import pandas as pd
from scipy.sparse import vstack, csr_matrix, save_npz, load_npz

df = pd.DataFrame(all_bios, columns=list(all_bios[0].keys()))

from sklearn.model_selection import train_test_split
df_train, df_test_val = train_test_split(df, test_size=0.35, random_state=42, stratify=df['title'])
df_test, df_val = train_test_split(df_test_val, test_size=0.28, random_state=42, stratify=df_test_val['title'])
df_train.to_csv('Train.csv', index=False)
df_test.to_csv('Test.csv', index=False)
df_val.to_csv('Val.csv', index=False)

import heapq
most_freq = heapq.nlargest(50000, word_dict, key=word_dict.get)

# Build a binary bag-of-words vector for each bio over the 50k most frequent tokens
sentence_vectors = []
for bio in all_bios:
    index_to_start = bio['start_pos']
    sentence_tokens = [tok.strip().lower() for tok in bio['raw'][index_to_start:].split()]
    sent_vec = []
    for tok in most_freq:
        if tok in sentence_tokens:
            sent_vec.append(1)
        else:
            sent_vec.append(0)
    sentence_vectors.append(sent_vec)
_____no_output_____
MIT
Visualisation_codes.ipynb
punyajoy/biosbias
forward fill (ffill): a null value is filled with the previous value; backward fill (bfill): a null value is filled with the next value
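A small illustrative series (not the notebook's data) shows the difference:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
ffilled = s.ffill()  # the NaN takes the previous value, 1.0
bfilled = s.bfill()  # the NaN takes the next value, 3.0
```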
data.fillna(method='ffill')

df3 = pd.DataFrame({'Data': [10, 20, 30, np.nan, 50, 60],
                    'float': [1.5, 2.5, 3.2, 4.5, 5.5, np.nan],
                    })
df3

data.fillna(method='bfill')

import numpy as np
import pandas as pd

Data = pd.read_csv('california_cities.csv')
Data
Data.head()
Data.tail()
Data.describe()
Data.info()
Data.columns
Data.index
Data.isnull().info()

Data1 = Data.drop(['Unnamed: 0'], axis=1)
Data1
Data1.isnull()
Data1.info()

#Data1['elevation_m'] = np.nanmean(Data1['elevation_m'])
#Data1
#original data will be affected so not a recommended method

Fill = np.nanmean(Data1['elevation_m'])
Data1['elevation_m'] = Data1['elevation_m'].fillna(Fill)
Data1
Data1.info()

# hierarchical indexing
pd.__version__

import numpy as np
import pandas as pd

index = [('California', 2000), ('California', 2010),
         ('New York', 2000), ('New York', 2010),
         ('Texas', 2000), ('Texas', 2010)]
populations = [33871648, 37253956,
               18976457, 19378102,
               20851820, 25145561]
populations

pop = pd.Series(populations, index=index)
pop
#pop[1:4]
pop['California', 2000]
pop[[i for i in pop.index if i[1] == 2010]]

for i in pop.index:
    if i[1] == 2010:
        print(i, pop[i])

index = pd.MultiIndex.from_tuples(index)
index
pop = pop.reindex(index)
pop
pop['California']
pop[:, 2010]

pop_df = pop.unstack()
pop_df
pop_df.stack()
pop

pop_df = pd.DataFrame({'Total': pop,
                       'under18': [8865325656, 35689545, 656898,
                                   458545545, 4455687, 965856]})
pop_df

df = pd.DataFrame(np.random.rand(4, 2),
                  index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
                  columns=['data1', 'data2'])
df

data = {('california', 2000): 5589865365,
        ('california', 2010): 89888556,
        ('Texas', 2000): 78454533,
        ('Texas', 2010): 58963568,
        ('Newyork', 2000): 57989656,
        ('Newyork', 2010): 555655878}
pd.Series(data)

pd.MultiIndex.from_arrays([['a', 'a', 'b', 'b'], [1, 2, 1, 2]])
pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)])
pd.MultiIndex.from_product([['a', 'b'], [1, 2]])
pd.MultiIndex(levels=[['a', 'b'], [1, 2]],
              codes=[[0, 0, 1, 1], [0, 1, 0, 1]])

pop.index.names = ['state', 'year']
pop

index = pd.MultiIndex.from_product([[2013, 2014], [1, 2, 3]],
                                   names=['year', 'visit'])
columns = pd.MultiIndex.from_product([['Rani', 'Raju', 'Sam'], ['BMI', 'TEMP', 'WGHT']],
                                     names=['subject', 'type'])
data = np.round(np.random.rand(6, 9), 2)
data += 37
health_data = pd.DataFrame(data, index=index, columns=columns)
health_data
health_data['Rani']

import numpy as np
import pandas as pd

health_data.iloc[:3, :-3]  # starts with the 0th row and ends with the 2nd row; columns stop at -3 from the right
health_data.iloc[:3]

idx = pd.IndexSlice
health_data.loc[idx[:, 1], idx[:, 'TEMP']]  # performing integer and string indexing together

# sorted and unsorted indices
index = pd.MultiIndex.from_product([['a', 'c', 'b', 'd'], [1, 2]])
data = pd.Series(np.random.rand(8), index=index)
data.index.names = ['char', 'int']
data

try:
    data['a':'b']
except KeyError as e:
    print(type(e))
    print(e)

data = data.sort_index()
data

pop.unstack(level=0)
pop.unstack().stack()

pop_flat = pop.reset_index(name='population')
pop_flat

health_data
data_mean1 = health_data.mean(level='year')
data_mean1
data_mean2 = health_data.mean(level='visit')
data_mean2
data_mean1.mean(axis=1, level='type')
_____no_output_____
Apache-2.0
pandas.ipynb
Nikhila-padmanabhan/Python-project
Process all microCT and save the output.This code begins by reading the CT data from every directory in ../data/microCT and processing it into a dataframe. It then stores these dataframes in a dictionary (key: site code, value: dataframe).The dictionary is then saved as a pickle file in data/microCT.
import os
import pandas as pd
import pickle

output_frames = {}
for site in os.listdir('../data/microCT'):
    if '.' not in site:
        data_dir = '../data/microCT/' + site + '/'
        [SSA_CT, height_min, height_max] = read_CT_txt_files(data_dir)

        fig, ax = plt.subplots()
        ax.plot(6/917/SSA_CT*1000, height_min, label='microCT')  # CT data
        ax.set_xlabel('Equivalent diameter, mm')
        ax.set_ylabel('Height above snow-soil interface [cm]')
        ax.set_title(f'Site {site.upper()}')
        plt.show()

        data_df = pd.DataFrame(
            {'height (cm)': height_min,
             'SSA (m2/kg)': SSA_CT,
             'Equiv. Diam (mm)': 6/917/SSA_CT*1000,
             }
        )
        output_frames[site] = data_df

pickle.dump(output_frames, open('../data/microCT/processed_mCT.p', 'wb'))
_____no_output_____
BSD-3-Clause
notebooks/CheckOutCT.ipynb
chang306/microstructure
Test that the saved data can be read out and plotted again The plots below should match the plots above!
# read data from pickle file
frames = pickle.load(open('../data/microCT/processed_mCT.p', 'rb'))

for site in frames.keys():
    # extract dataframe from dict
    df = frames[site]

    # plot
    fig, ax = plt.subplots()
    ax.plot(df['Equiv. Diam (mm)'], df['height (cm)'])
    ax.set_xlabel('Equiv. Diam (mm)')
    ax.set_ylabel('Height above snow-soil interface [cm]')
    ax.set_title(f'Site {site.upper()}')
    plt.show()
_____no_output_____
BSD-3-Clause
notebooks/CheckOutCT.ipynb
chang306/microstructure
Environment
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0

from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%autosave 20

import csv
import pandas as pd
from keras.backend import tf as ktf
import sys
import cv2
import six

# keras
import keras
from keras.models import Model
from keras.models import Sequential
from keras.regularizers import l2
from keras.layers.core import Lambda
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from keras.callbacks import LearningRateScheduler
from keras.layers import (
    Input,
    Activation,
    Dense,
    Flatten,
    Dropout
)
from keras.layers.convolutional import (
    Conv2D,
    MaxPooling2D,
    AveragePooling2D
)
from keras.layers.merge import add
from keras import backend as K

import math
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)

ROOT_PATH = Path('/home/downloads/CarND-Behavioral-Cloning-P3/')
#ROOT_PATH = Path('/src')

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

#SAMPLE_DATA_PATH = ROOT_PATH/'data/sample_data'
SAMPLE_DATA_PATH = ROOT_PATH/'data/all'

print('tensorflow version: ', tf.__version__)
print('keras version: ', keras.__version__)
print('python version: ', sys.version_info)
env: CUDA_DEVICE_ORDER=PCI_BUS_ID env: CUDA_VISIBLE_DEVICES=0
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Load images
#[str(x) for x in list(SAMPLE_DATA_PATH.iterdir())]
logs = pd.DataFrame()
num_tracks = [0, 0]
include_folders = [
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/IMG',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_recovery.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive4.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_curve.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_sampledata.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive3.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/backup',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive5.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_reverse.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive2.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive1.csv',
    '/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_drive.csv'
]
for log_file in SAMPLE_DATA_PATH.glob('*.csv'):
    if str(log_file) not in include_folders:
        continue
    one_log = pd.read_csv(log_file)
    num_rows = one_log.shape[0]
    print(log_file, '\t', num_rows)
    if str(log_file).find('track1') != -1:
        num_tracks[0] += num_rows
    else:
        num_tracks[1] += num_rows
    logs = pd.concat([logs, one_log], axis=0)

print('\ntrack 1: ', num_tracks[0])
print('track 2: ', num_tracks[1])
logs.tail()
/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_recovery.csv 1458 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive4.csv 10252 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_curve.csv 6617 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_sampledata.csv 8036 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive3.csv 2039 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive5.csv 3098 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_reverse.csv 8873 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive2.csv 5465 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive1.csv 6120 /home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_drive.csv 7874 track 1: 17368 track 2: 42464
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Preprocessing and Augmentation
IMG_FOLDER_PATH = SAMPLE_DATA_PATH/'IMG'

def get_img_files(img_folder_path):
    image_files = []
    labels = dict()
    correction = 0.2
    for log in logs.iterrows():
        center, left, right, y = log[1][:4]
        for i, img_path in enumerate([center, left, right]):
            img_path = img_path.split('/')[-1].strip()
            abs_img_path = str(img_folder_path/img_path)
            if i == 1:
                y_corrected = y + correction  # left
            elif i == 2:
                y_corrected = y - correction  # right
            else:
                y_corrected = y
            image_files.append(abs_img_path)
            labels[abs_img_path] = y_corrected
    np.random.shuffle(image_files)
    trn_end_idx = int(len(image_files)*0.8)
    train_img_files = image_files[:trn_end_idx]
    val_img_files = image_files[trn_end_idx:]
    return train_img_files, val_img_files, labels

TRAIN_IMG_FILES, VAL_IMG_FILES, LABELS = get_img_files(IMG_FOLDER_PATH)
len(TRAIN_IMG_FILES), len(VAL_IMG_FILES), len(LABELS.keys())

def augment_data(img, y, probs=0.5):
    # flip
    if np.random.rand() > probs:
        img = np.fliplr(img)
        y = -y
    return img, y
_____no_output_____
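The flip branch of `augment_data` mirrors the image left-right and negates the steering angle. A quick sanity check on a toy 1x3 single-channel "image" (made-up values):

```python
import numpy as np

img = np.array([[[1], [2], [3]]])  # shape (1, 3, 1): one row, three columns
angle = 0.25

flipped = np.fliplr(img)   # columns reversed
flipped_angle = -angle     # steering direction mirrors with the image
```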
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Create data generator for Keras model training
# adapted from https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly.html
class GeneratorFromFiles(keras.utils.Sequence):
    '''Generate data from a list of image files.'''
    def __init__(self, list_files, labels, batch_size=64, dim=(160, 320, 3),
                 post_dim=(66, 200, 3), shuffle=True, data_aug=None, resize=False):
        '''
        Parameters
        ----------
        list_files : a list of absolute paths to image files
        labels : a dictionary mapping image files to labels (classes/continuous value)
        batch_size : size for each batch
        dim : dimension for input image, height x width x number of channels
        shuffle : whether to shuffle data at each epoch
        '''
        self.dim = dim
        self.post_dim = post_dim if resize else dim
        self.batch_size = batch_size
        self.list_files = list_files
        self.labels = labels
        self.shuffle = shuffle
        self.data_aug = data_aug
        self.resize = resize
        self.on_epoch_end()

    def __len__(self):
        return int(len(self.list_files) / self.batch_size)

    def __getitem__(self, index):
        # generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # find list of files
        list_files_batch = [self.list_files[k] for k in indexes]
        X, ys = self._generate(list_files_batch, self.data_aug)
        return X, ys

    def on_epoch_end(self):
        self.indexes = np.arange(len(self.list_files))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def _generate(self, list_files_batch, data_aug=None):
        X = np.zeros((self.batch_size, ) + self.post_dim)
        ys = np.zeros((self.batch_size))
        for i, img_file in enumerate(list_files_batch):
            x = plt.imread(img_file)
            if self.resize:
                x = cv2.resize(x, (self.post_dim[1], self.post_dim[0]))
            y = self.labels[img_file]
            if data_aug is not None:
                x, y = data_aug(x, y)
            X[i, ] = x
            ys[i] = y
        return X, ys
_____no_output_____
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Visualize flipping the image
data_generator = GeneratorFromFiles(TRAIN_IMG_FILES, LABELS)
res = next(iter(data_generator))
plt.imshow(res[0][56].astype(int))
plt.imshow(augment_data(res[0][56], res[1][56], 0.0)[0].astype(int))
plt.imshow(cv2.resize(res[0][56], (200, 66)).astype(int))
plt.imshow(cv2.resize(augment_data(res[0][56], res[1][56], 0.0)[0], (200, 66)).astype(int))
_____no_output_____
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Model Architecture and Parameter Nvidia model
def _bn_act_dropout(input, dropout_rate):
    """Helper to build a BN -> activation block"""
    norm = BatchNormalization(axis=2)(input)
    relu = Activation('elu')(norm)
    return Dropout(dropout_rate)(relu)

def _conv_bn_act_dropout(**conv_params):
    '''Helper to build a conv -> BN -> activation -> dropout block'''
    filters = conv_params['filters']
    kernel_size = conv_params['kernel_size']
    strides = conv_params.setdefault('strides', (1, 1))
    kernel_initializer = conv_params.setdefault('kernel_initializer', 'he_normal')
    padding = conv_params.setdefault('padding', 'valid')
    kernel_regularizer = conv_params.setdefault('kernel_regularizer', l2(1.e-4))
    dropout_rate = conv_params.setdefault('dropout_rate', 0.1)

    def f(input):
        conv = Conv2D(filters=filters, kernel_size=kernel_size,
                      strides=strides, padding=padding,
                      kernel_initializer=kernel_initializer,
                      kernel_regularizer=kernel_regularizer)(input)
        return _bn_act_dropout(conv, dropout_rate)
    return f

def _dense_dropout(input, n, dropout_rate, dropout_multi=1):
    return Dropout(dropout_rate*dropout_multi)(Dense(n, activation='elu')(input))

def build_nvidia(in_shape, num_outputs, dropout_rate, dropout_multi=1):
    input = Input(shape=in_shape)
    in_layer = Lambda(lambda x: (x / 255.0) - 0.5, input_shape=(in_shape))(input)
    in_layer = _conv_bn_act_dropout(filters=24, kernel_size=(5, 5), strides=(2, 2), dropout_rate=dropout_rate)(in_layer)
    in_layer = _conv_bn_act_dropout(filters=36, kernel_size=(5, 5), strides=(2, 2), dropout_rate=dropout_rate)(in_layer)
    in_layer = _conv_bn_act_dropout(filters=48, kernel_size=(5, 5), strides=(2, 2), dropout_rate=dropout_rate)(in_layer)
    in_layer = _conv_bn_act_dropout(filters=64, kernel_size=(3, 3), strides=(1, 1), dropout_rate=dropout_rate)(in_layer)
    in_layer = _conv_bn_act_dropout(filters=64, kernel_size=(3, 3), strides=(1, 1), dropout_rate=dropout_rate)(in_layer)
    flatten = Flatten()(in_layer)
    flatten = _dense_dropout(flatten, 1000, dropout_rate, dropout_multi)
    flatten = _dense_dropout(flatten, 100, dropout_rate, dropout_multi)
    #flatten = _dense_dropout(flatten, 50, dropout_rate)
    flatten = Dense(50)(flatten)
    dense = Dense(units=num_outputs)(flatten)
    model = Model(inputs=input, outputs=dense)
    return model

# learning rate schedule
def step_decay(epoch):
    initial_lrate = 1e-3
    drop = 0.5
    epochs_drop = 3
    lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate

in_shape = (66, 200, 3)
#in_shape = (160, 320, 3)
dropout_rate = 0.2

model = build_nvidia(in_shape, 1, dropout_rate, dropout_multi=2)
opt = Adam(lr=1e-4)
model.compile(loss='mse', optimizer=opt)
lrate = LearningRateScheduler(step_decay)
callbacks_list = [lrate]
model.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 66, 200, 3) 0 _________________________________________________________________ lambda_2 (Lambda) (None, 66, 200, 3) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 31, 98, 24) 1824 _________________________________________________________________ batch_normalization_1 (Batch (None, 31, 98, 24) 392 _________________________________________________________________ activation_1 (Activation) (None, 31, 98, 24) 0 _________________________________________________________________ dropout_3 (Dropout) (None, 31, 98, 24) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 47, 36) 21636 _________________________________________________________________ batch_normalization_2 (Batch (None, 14, 47, 36) 188 _________________________________________________________________ activation_2 (Activation) (None, 14, 47, 36) 0 _________________________________________________________________ dropout_4 (Dropout) (None, 14, 47, 36) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 5, 22, 48) 43248 _________________________________________________________________ batch_normalization_3 (Batch (None, 5, 22, 48) 88 _________________________________________________________________ activation_3 (Activation) (None, 5, 22, 48) 0 _________________________________________________________________ dropout_5 (Dropout) (None, 5, 22, 48) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 3, 20, 64) 27712 _________________________________________________________________ batch_normalization_4 (Batch (None, 3, 20, 64) 80 _________________________________________________________________ activation_4 (Activation) (None, 3, 20, 64) 0 
_________________________________________________________________ dropout_6 (Dropout) (None, 3, 20, 64) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 1, 18, 64) 36928 _________________________________________________________________ batch_normalization_5 (Batch (None, 1, 18, 64) 72 _________________________________________________________________ activation_5 (Activation) (None, 1, 18, 64) 0 _________________________________________________________________ dropout_7 (Dropout) (None, 1, 18, 64) 0 _________________________________________________________________ flatten_2 (Flatten) (None, 1152) 0 _________________________________________________________________ dense_3 (Dense) (None, 1000) 1153000 _________________________________________________________________ dropout_8 (Dropout) (None, 1000) 0 _________________________________________________________________ dense_4 (Dense) (None, 100) 100100 _________________________________________________________________ dropout_9 (Dropout) (None, 100) 0 _________________________________________________________________ dense_5 (Dense) (None, 50) 5050 _________________________________________________________________ dense_6 (Dense) (None, 1) 51 ================================================================= Total params: 1,390,369 Trainable params: 1,389,959 Non-trainable params: 410 _________________________________________________________________
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Training and Validation
%%time
trn_data_generator = GeneratorFromFiles(TRAIN_IMG_FILES, LABELS, resize=True)
val_data_generator = GeneratorFromFiles(VAL_IMG_FILES, LABELS, resize=True)
model.fit_generator(trn_data_generator,
                    validation_data=val_data_generator,
                    epochs=12,
                    workers=2,
                    callbacks=callbacks_list,
                    use_multiprocessing=False,
                    verbose=1)

# model.load_weights(ROOT_PATH/'models/model-nvidia-base-2.h5')
# trn_data_generator = GeneratorFromFiles(TRAIN_IMG_FILES, LABELS, resize=True)
# val_data_generator = GeneratorFromFiles(VAL_IMG_FILES, LABELS, resize=True)
_____no_output_____
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Fine-Tuning the Model
%%time
opt = Adam(lr=1e-5)
model.compile(loss='mse', optimizer=opt)
model.fit_generator(trn_data_generator,
                    validation_data=val_data_generator,
                    epochs=5,
                    workers=3,
                    use_multiprocessing=True,
                    verbose=1)

%%time
opt = Adam(lr=8e-6)
model.compile(loss='mse', optimizer=opt)
model.fit_generator(trn_data_generator,
                    validation_data=val_data_generator,
                    epochs=5,
                    workers=3,
                    use_multiprocessing=True,
                    verbose=1)
Epoch 1/5 2243/2243 [==============================] - 180s 80ms/step - loss: 0.0568 - val_loss: 0.0523 Epoch 2/5 2243/2243 [==============================] - 180s 80ms/step - loss: 0.0566 - val_loss: 0.0522 Epoch 3/5 2243/2243 [==============================] - 190s 85ms/step - loss: 0.0563 - val_loss: 0.0522 Epoch 4/5 1743/2243 [======================>.......] - ETA: 32s - loss: 0.0564
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Saving Model
model.save(ROOT_PATH/'models/model-nvidia-base-3.h5', include_optimizer=False)
_____no_output_____
MIT
notebooks/behavior_cloning_tutorial-Copy1.ipynb
Jetafull/CarND-Behavioral-Cloning-P3
Sample Runs
===========

Basic Run
---------

The simplest test run requires that we specify a reference directory and a test directory. The default file matching assumes that our reference and test files match names exactly and both end in '.xml'. With just the two directory arguments, we get micro-average scores for the default metrics across the full directory.
!python etude.py \
    --reference-input tests/data/i2b2_2016_track-1_reference \
    --test-input tests/data/i2b2_2016_track-1_test
100% (10 of 10) |##########################| Elapsed Time: 0:00:01 Time: 0:00:01 exact TP FP TN FN micro-average 340.0 8.0 0.0 105.0
Apache-2.0
jupyter/README.ipynb
MUSC-TBIC/etude-engine
In the next sample runs, you can see how to include a per-file score breakdown and a per-annotation-type score breakdown.
!python etude.py \
    --reference-input tests/data/i2b2_2016_track-1_reference \
    --test-input tests/data/i2b2_2016_track-1_test \
    --by-file

!python etude.py \
    --reference-input tests/data/i2b2_2016_track-1_reference \
    --test-input tests/data/i2b2_2016_track-1_test \
    --by-type
100% (10 of 10) |##########################| Elapsed Time: 0:00:01 Time: 0:00:01 exact TP FP TN FN micro-average 340.0 8.0 0.0 105.0 Age 63.0 2.0 0.0 29.0 DateTime 91.0 2.0 0.0 33.0 HCUnit 61.0 4.0 0.0 15.0 OtherID 7.0 0.0 0.0 0.0 OtherLoc 1.0 0.0 0.0 4.0 OtherOrg 18.0 0.0 0.0 3.0 Patient 16.0 0.0 0.0 3.0 PhoneFax 5.0 0.0 0.0 1.0 Provider 54.0 0.0 0.0 10.0 StateCountry 14.0 0.0 0.0 7.0 StreetCity 4.0 0.0 0.0 0.0 Zip 4.0 0.0 0.0 0.0 eAddress 2.0 0.0 0.0 0.0
Apache-2.0
jupyter/README.ipynb
MUSC-TBIC/etude-engine
Scoring on Different Fields
---------------------------

The above examples show scoring based on the default key in the configuration file used for matching the reference to the test configuration. You may wish to group annotations on different fields, such as the parent class or long description.
!python etude.py \
    --reference-input tests/data/i2b2_2016_track-1_reference \
    --test-input tests/data/i2b2_2016_track-1_test \
    --by-type

!python etude.py \
    --reference-input tests/data/i2b2_2016_track-1_reference \
    --test-input tests/data/i2b2_2016_track-1_test \
    --by-type \
    --score-key "Parent"

!python etude.py \
    --reference-input tests/data/i2b2_2016_track-1_reference \
    --test-input tests/data/i2b2_2016_track-1_test \
    --by-type \
    --score-key "Long Name"
100% (10 of 10) |##########################| Elapsed Time: 0:00:01 Time: 0:00:01 exact TP FP TN FN micro-average 340.0 8.0 0.0 105.0 Age Greater than 89 63.0 2.0 0.0 29.0 Date and Time Information 91.0 2.0 0.0 33.0 Electronic Address Information 2.0 0.0 0.0 0.0 Health Care Provider Name 54.0 0.0 0.0 10.0 Health Care Unit Name 61.0 4.0 0.0 15.0 Other ID Numbers 7.0 0.0 0.0 0.0 Other Locations 1.0 0.0 0.0 4.0 Other Organization Name 18.0 0.0 0.0 3.0 Patient Name 16.0 0.0 0.0 3.0 Phone, Fax, or Pager Number 5.0 0.0 0.0 1.0 State or Country 14.0 0.0 0.0 7.0 Street City Name 4.0 0.0 0.0 0.0 ZIP Code 4.0 0.0 0.0 0.0
Apache-2.0
jupyter/README.ipynb
MUSC-TBIC/etude-engine
Testing
=======

Unit testing is done with the pytest module. Because of a bug in how tests are processed in Python 2.7, you should run pytest indirectly rather than directly. An [HTML-formatted coverage guide](../htmlcov/index.html) will be generated locally under the directory containing this code.
!python -m pytest --cov-report html --cov=./ tests
============================= test session starts ============================== platform darwin -- Python 2.7.13, pytest-3.1.1, py-1.4.34, pluggy-0.4.0 rootdir: /Users/pmh/git/etude, inifile: plugins: cov-2.5.1 collected 107 items tests/test_args_and_configs.py .................. tests/test_etude.py ....... tests/test_scoring_metrics.py ............................................................... tests/test_text_extraction.py ................... ---------- coverage: platform darwin, python 2.7.13-final-0 ---------- Coverage HTML written to dir htmlcov ========================== 107 passed in 3.50 seconds ==========================
Apache-2.0
jupyter/README.ipynb
MUSC-TBIC/etude-engine
Read the data
import numpy as np
import pandas as pd

dfXtrain = pd.read_csv('preprocessed_csv/train_tree.csv', index_col='id', sep=';')
dfXtest = pd.read_csv('preprocessed_csv/test_tree.csv', index_col='id', sep=';')
dfYtrain = pd.read_csv('preprocessed_csv/y_train_tree.csv', header=None, names=['ID', 'COTIS'], sep=';')
dfYtrain = dfYtrain.set_index('ID')
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
Preprocessing

Split off var14, department, and subreg into separate frames.
dropped_col_names = ['var14', 'department', 'subreg']

def drop_cols(df):
    return df.drop(dropped_col_names, axis=1), df[dropped_col_names]

train, dropped_train = drop_cols(dfXtrain)
test, dropped_test = drop_cols(dfXtest)
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
Add city-size information derived from subreg.
def add_big_city_cols(df, dropped_df):
    df['big'] = np.where(dropped_df['subreg'] % 100 == 0, 1, 0)
    df['average'] = np.where(dropped_df['subreg'] % 10 == 0, 1, 0)
    df['average'] = df['average'] - df['big']
    df['small'] = 1 - df['big'] - df['average']
    return df

train = add_big_city_cols(train, dropped_train)
test = add_big_city_cols(test, dropped_test)
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
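The add_big_city_cols rule can be checked on a tiny frame; the subreg codes below (7500, 7510, 7513) are made-up illustrations of the three cases, not real codes:

```python
import numpy as np
import pandas as pd

# 7500 -> big (divisible by 100), 7510 -> average (divisible by 10 only),
# 7513 -> small (divisible by neither)
df = pd.DataFrame({'subreg': [7500, 7510, 7513]})
df['big'] = np.where(df['subreg'] % 100 == 0, 1, 0)
df['average'] = np.where(df['subreg'] % 10 == 0, 1, 0) - df['big']
df['small'] = 1 - df['big'] - df['average']
print(df[['big', 'average', 'small']].values.tolist())
# [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

The three indicator columns are mutually exclusive by construction, since `average` subtracts `big` and `small` is whatever remains.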
Encode the remaining categorical features.
categorical = list(train.select_dtypes(exclude=[np.number]).columns)
categorical
# sanity check: the test set has the same non-numeric columns
list(test.select_dtypes(exclude=[np.number]).columns)
for col in categorical:
    print(col, train[col].nunique())
marque 154 energie_veh 5 profession 17 var6 5 var8 23
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
energie_veh and var6 via get_dummies
small_cat = ['energie_veh', 'var6']
train = pd.get_dummies(train, columns=small_cat)
test = pd.get_dummies(test, columns=small_cat)
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
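pd.get_dummies, as used above, expands each low-cardinality column into one indicator column per level; a toy example (the fuel labels are illustrative, not the actual energie_veh levels):

```python
import pandas as pd

toy = pd.DataFrame({'energie_veh': ['diesel', 'essence', 'diesel']})
# one indicator column per distinct value, named <column>_<value>
encoded = pd.get_dummies(toy, columns=['energie_veh'])
print(sorted(encoded.columns))
# ['energie_veh_diesel', 'energie_veh_essence']
```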
For the remaining categorical features, compute smoothed target aggregates.
big_cat = ['marque', 'profession', 'var8']
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
Descriptive statistics first
df = pd.concat([dfYtrain.describe()] +
               [train[col].value_counts().describe() for col in big_cat],
               axis=1)
df
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
We smooth with a coefficient of 500, using the mean and the 25th, 50th, and 75th percentiles. Encoding
import gc  # used to free memory between encodings

class EncodeWithAggregates():
    def __init__(self, cols, y_train, train, *tests):
        self.cols = cols
        self.y_train = y_train
        self.train = train
        self.tests = tests
        self.Xs = (self.train,) + self.tests
        self.smooth_coef = 500
        self.miss_val = 'NAN'
        self.percentiles = [25, 50, 75]
        self.names = ['Mean'] + [str(q) for q in self.percentiles]
        self.aggs = [np.mean] + [self.percentile_fix(q) for q in self.percentiles]
        self.miss_val_fills = [agg(y_train) for agg in self.aggs]
        self.train_aggs = [agg(y_train) for agg in self.aggs]

    def percentile_fix(self, q):
        def wrapped(a):
            return np.percentile(a, q)
        return wrapped

    def transform(self):
        for col in self.cols:
            self.encode(col)
            gc.collect()
        return self.Xs

    def encode(self, col):
        df = pd.concat([self.y_train, self.train[col]], axis=1)
        dfgb = df.groupby(col)
        dfsize = dfgb.size()
        dfsize.loc[self.miss_val] = 0  # .loc replaces the removed .ix indexer
        for name, agg, miss_val_fill, train_agg in zip(self.names, self.aggs,
                                                       self.miss_val_fills, self.train_aggs):
            dfm = dfgb.agg(agg)
            dfm.loc[self.miss_val] = miss_val_fill
            for X in self.Xs:
                agg_df = dfm.loc[X[col].fillna(self.miss_val)].set_index(X.index)[self.y_train.name]
                agg_size = dfsize.loc[X[col].fillna(self.miss_val)]
                agg_size = pd.DataFrame({'size': agg_size}).set_index(X.index)['size']
                agg_name = "{}_{}".format(col, name)
                # shrink the per-category aggregate toward the global one
                X[agg_name] = (agg_df * agg_size + self.smooth_coef * train_agg) / (self.smooth_coef + agg_size)
        self.Xs = [X.drop(col, axis=1) for X in self.Xs]

train, test = EncodeWithAggregates(big_cat, dfYtrain['COTIS'], train, test).transform()
test.shape
train.shape
train.fillna(-9999, inplace=True)
test.fillna(-9999, inplace=True)

y_train = np.array(dfYtrain)
x_train = np.array(train)
x_test = np.array(test)
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
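The smoothing used by EncodeWithAggregates follows the standard additive formula: each category's aggregate is pulled toward the global aggregate, with the pull strength set by smooth_coef (500 here). A minimal standalone sketch with toy numbers (the helper name and sample values are illustrative, not from the dataset):

```python
import numpy as np

def smoothed_mean(cat_values, global_mean, n, coef=500):
    # (category mean weighted by its size + global mean weighted by coef)
    cat_mean = np.mean(cat_values)
    return (cat_mean * n + coef * global_mean) / (coef + n)

# A rare category (n=2) stays close to the global mean...
rare = smoothed_mean([10.0, 12.0], global_mean=100.0, n=2)
# ...while a frequent one (n=5000) keeps roughly its own mean.
frequent = smoothed_mean([10.0, 10.0, 10.0], global_mean=100.0, n=5000)
print(round(rare, 2), round(frequent, 2))
```

This is why rare levels of marque or var8 do not get extreme encoded values from a handful of observations.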
Save routines
dfYtest = pd.DataFrame({'ID': dfXtest.index, 'COTIS': np.zeros(test.shape[0])})
dfYtest = dfYtest[['ID', 'COTIS']]
dfYtest.head()

def save_to_file(y, file_name):
    dfYtest['COTIS'] = y
    dfYtest.to_csv('results/{}'.format(file_name), index=False, sep=';')

model_name = 'lmse_without_size_xtr'
dfYtest_stacking = pd.DataFrame({'ID': dfXtrain.index, model_name: np.zeros(train.shape[0])})
dfYtest_stacking = dfYtest_stacking[['ID', model_name]]
dfYtest_stacking.head()

def save_to_file_stacking(y, file_name):
    dfYtest_stacking[model_name] = y
    dfYtest_stacking.to_csv('stacking/{}'.format(file_name), index=False, sep=';')
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
Train models (Extra Trees and XGBoost)
from sklearn.ensemble import ExtraTreesRegressor

def plot_quality(grid_searcher, param_name):
    means = []
    stds = []
    for elem in grid_searcher.grid_scores_:
        means.append(np.mean(elem.cv_validation_scores))
        stds.append(np.sqrt(np.var(elem.cv_validation_scores)))
    means = np.array(means)
    stds = np.array(stds)
    params = grid_searcher.param_grid

    plt.figure(figsize=(10, 6))
    plt.plot(params[param_name], means)
    plt.fill_between(params[param_name],
                     means + stds, means - stds,
                     alpha=0.3, facecolor='blue')
    plt.xlabel(param_name)
    plt.ylabel('MAPE')

def mape(y_true, y_pred):
    return -np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def mape_scorer(est, X, y):
    gc.collect()
    return mape(y, est.predict(X))

class MyGS():
    class Element():
        def __init__(self):
            self.cv_validation_scores = []

        def add(self, score):
            self.cv_validation_scores.append(score)

    def __init__(self, param_grid, name, n_folds):
        self.param_grid = {name: param_grid}
        self.grid_scores_ = [MyGS.Element() for item in param_grid]

    def add(self, score, param_num):
        self.grid_scores_[param_num].add(score)

# hold out departments 1 and >90 as a validation set
validation_index = (dropped_train.department == 1) | (dropped_train.department > 90)
train_index = ~validation_index
subtrain, validation = train[train_index], train[validation_index]
x_subtrain = np.array(subtrain)
x_validation = np.array(validation)
ysubtrain, yvalidation = dfYtrain[train_index], dfYtrain[validation_index]
y_subtrain = np.array(ysubtrain).flatten()
y_validation = np.array(yvalidation).flatten()

%%time
est = ExtraTreesRegressor(n_estimators=10, max_features=51, max_depth=None,
                          n_jobs=-1, random_state=42).fit(X=x_subtrain, y=np.log(y_subtrain))
y_pred = est.predict(x_validation)
mape(y_validation, np.exp(y_pred))
est

sample_weight_subtrain = np.power(y_subtrain, -1)

from sklearn.tree import DecisionTreeRegressor

%%time
count = 10000
est = DecisionTreeRegressor(criterion='mae', max_depth=2,
                            max_features=None, random_state=42).fit(
    X=x_subtrain[:count], y=y_subtrain[:count],
    sample_weight=sample_weight_subtrain[:count])
gc.collect()
y_pred = est.predict(x_validation)
mape(y_validation, y_pred)
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
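The mape helper defined above is negated so that larger is better, matching scikit-learn's scorer convention; a quick numeric check (the sample arrays are made up):

```python
import numpy as np

def mape(y_true, y_pred):
    # negated mean absolute percentage error, in percent
    return -np.mean(np.abs((y_true - y_pred) / y_true)) * 100

y_true = np.array([100.0, 200.0])
y_pred = np.array([110.0, 190.0])
# per-point errors: 10% and 5% -> mean 7.5%, negated
print(mape(y_true, y_pred))  # -7.5
```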
Save
save_to_file_stacking(y_lmse_pred * 0.995, 'xbg_tune_eta015_num300_dropped_lmse.csv')

%%time
import xgboost as xgb  # may already be loaded earlier in the notebook

param = {'base_score': 0.5,
         'colsample_bylevel': 1,
         'colsample_bytree': 1,
         'gamma': 0,
         'eta': 0.15,
         'max_delta_step': 0,
         'max_depth': 9,
         'min_child_weight': 1,
         'nthread': -1,
         'objective': 'reg:linear',
         'alpha': 0,
         'lambda': 1,
         'scale_pos_weight': 1,
         'seed': 56,
         'silent': True,
         'subsample': 1}
num_round = 180

dtrain = xgb.DMatrix(x_train, label=np.log(y_train), missing=-9999)
# weight=weight_coef * np.power(y_train[train_index], -2)
dtest = xgb.DMatrix(x_test, missing=-9999)

param['base_score'] = np.percentile(np.log(y_train), 25)
bst = xgb.train(param, dtrain, num_round)
y_pred = np.exp(bst.predict(dtest))
gc.collect()

save_to_file(y_pred * 0.995, 'xbg_tune_eta015_num300_dropped_lmse.csv')
_____no_output_____
MIT
xtr_tune_drop_lmse.ipynb
alexsyrom/datascience-ml-2
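Training on np.log(y) and inverting with np.exp, as the cells above do, optimizes error on the log scale, which roughly corresponds to relative (percentage) error on the original scale. A minimal sketch of the same log-target pattern using a plain LinearRegression on synthetic data (all values here are made up, not the competition data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
# multiplicative target: exactly linear in log-space plus small noise
y = np.exp(1.0 + X @ np.array([0.5, -0.3, 0.8]) + 0.01 * rng.randn(200))

model = LinearRegression().fit(X, np.log(y))  # fit in log space
y_pred = np.exp(model.predict(X))             # invert back

rel_err = np.mean(np.abs(y_pred - y) / y)
print(f"mean relative error: {rel_err:.4f}")
```

Because the target is multiplicative, fitting in log space recovers it almost exactly, which is the motivation for log-transforming a strictly positive, skewed target like COTIS.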
# Test For The Best Machine Learning Algorithm For Prediction

This notebook takes about 40 minutes to run, but we've already run it and saved the data for you. Please read through it, though, so that you understand how we came to the conclusions we'll use moving forward.

## Six Algorithms

We're going to compare six different algorithms to determine the best one to produce an accurate model for our predictions.

### Logistic Regression

Logistic Regression (LR) is a technique borrowed from the field of statistics. It is the go-to method for binary classification problems (problems with two class values).

![](./docs/logisticfunction.png)

Logistic Regression is named for the function used at the core of the method: the logistic function. The logistic function is a probabilistic method used to determine whether or not the driver will be the winner. Logistic Regression predicts probabilities.

### Decision Tree

A tree has many analogies in real life, and it turns out that it has influenced a wide area of machine learning, covering both classification and regression. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.

![](./docs/decisiontree.png)

This methodology is more commonly known as a "learning decision tree" from data, and the above tree is called a Classification tree because the goal is to classify a driver as the winner or not.

### Random Forest

Random forest is a supervised learning algorithm. The "forest" it builds is an **ensemble of decision trees**, usually trained with the "bagging" method, a combination of learning models which increases the accuracy of the result.

A random forest eradicates the limitations of a decision tree algorithm. It reduces the overfitting of datasets and increases precision. It generates predictions without requiring many configurations.

![](./docs/randomforest.png)

Here's the difference between the Decision Tree and Random Forest methods:

![](./docs/treefortheforest.jpg)

### Support Vector Machine Algorithm (SVC)

Support Vector Machines (SVMs) are a set of supervised learning methods used for classification, regression and detection of outliers.

The advantages of support vector machines are:

- Effective in high dimensional spaces
- Still effective in cases where the number of dimensions is greater than the number of samples
- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient
- Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels

The objective of a SVC (Support Vector Classifier) is to fit to the data you provide, returning a "best fit" hyperplane that divides, or categorizes, your data.

### Gaussian Naive Bayes Algorithm

Naive Bayes is a classification algorithm for binary (two-class) and multi-class classification problems. The technique is easiest to understand when described using binary or categorical input values. The representation used for naive Bayes is probabilities.

A list of probabilities is stored to a file for a learned Naive Bayes model. This includes:

- **Class Probabilities:** The probabilities of each class in the training dataset.
- **Conditional Probabilities:** The conditional probabilities of each input value given each class value.

Naive Bayes can be extended to real-valued attributes, most commonly by assuming a Gaussian distribution. This extension of Naive Bayes is called Gaussian Naive Bayes. Other functions can be used to estimate the distribution of the data, but the Gaussian (or normal) distribution is the easiest to work with because you only need to estimate the mean and the standard deviation from your training data.

### k Nearest Neighbor Algorithm (kNN)

The k-Nearest Neighbors (KNN) algorithm is a simple, supervised machine learning algorithm that can be used to solve both classification and regression problems.

kNN works by finding the distances between a query and all of the examples in the data, selecting the specified number of examples (k) closest to the query, then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression).

The kNN algorithm assumes similarity between the new case/data and available cases, and puts the new case into the category that is most similar to the available categories.

![](./docs/knn.png)

## Analyzing the Data

### Feature Importance

Another great quality of the random forest algorithm is that it's easy to measure the relative importance of each feature to the prediction.

The Scikit-learn Python library provides a great tool for this, which measures a feature's importance by looking at how much the tree nodes that use that feature reduce impurity across all trees in the forest. It computes this score automatically for each feature after training and scales the results so that the sum of all importances is equal to one.

### Data Visualization When Building a Model

How do you visualize the influence of the data? How do you frame the problem?

An important tool in the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or Matplotlib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data.

![](./docs/visualization.png)

### Splitting the Dataset

Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well.

1. Training. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
2. Testing. A test dataset is an independent group of data, often a subset of the original data, that you use to confirm the performance of the model you built.
3. Validating. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set.

### Building the Model

Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to train it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects.

### Decide on a Training Method

Depending on your question and the nature of your data, you will choose a method to train it. Stepping through Scikit-learn's documentation, you can explore many ways to train a model. Depending on the results you get, you might have to try several different methods to build the best one. Data scientists typically evaluate a model's performance by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.

### Train a Model

Armed with your training data, you are ready to "fit" it to create a model. In many ML libraries you will find the code 'model.fit' - it is at this time that you send in your data as an array of values (usually 'X') and a feature variable (usually 'y').

### Evaluate the Model

Once the training process is complete, you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality.

### Model Fitting

In the Machine Learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.

### Underfitting and Overfitting

Underfitting and overfitting are common problems that degrade the quality of the model, as the model either doesn't fit well enough, or it fits too well. This causes the model to make predictions either too closely or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate, as it can neither accurately analyze its training data nor data it has not yet 'seen'.

![](./docs/overfit.png)

Let's test out some algorithms to choose our path for modelling our predictions.
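The six-algorithm comparison described above can be sketched end to end with cross_val_score; this is an illustrative harness on synthetic data, not the notebook's actual benchmark:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the race data
X, y = make_classification(n_samples=300, n_features=10, random_state=42)

models = {
    'Logistic Regression': LogisticRegression(max_iter=1000),
    'Decision Tree': DecisionTreeClassifier(random_state=42),
    'Random Forest': RandomForestClassifier(n_estimators=50, random_state=42),
    'SVC': SVC(),
    'Gaussian NB': GaussianNB(),
    'kNN': KNeighborsClassifier(),
}

# mean 5-fold accuracy per algorithm, highest first
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f'{name}: {score:.3f}')
```

The same loop applied to the real features below is what lets us pick the best-performing algorithm before committing to it.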
import warnings
warnings.filterwarnings("ignore")

import time
start = time.time()

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pickle

from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score, f1_score
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
from sklearn.model_selection import cross_val_score, StratifiedKFold, RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier, MLPRegressor

plt.style.use('seaborn')
np.set_printoptions(precision=4)

data = pd.read_csv('./data_f1/data_filtered.csv')
data.head()
len(data)

# per-driver DNF (did-not-finish) ratio -> driver confidence = 1 - ratio
dnf_by_driver = data.groupby('driver').sum()['driver_dnf']
driver_race_entered = data.groupby('driver').count()['driver_dnf']
driver_dnf_ratio = (dnf_by_driver / driver_race_entered)
driver_confidence = 1 - driver_dnf_ratio
driver_confidence_dict = dict(zip(driver_confidence.index, driver_confidence))
driver_confidence_dict

# per-constructor DNF ratio -> constructor reliability = 1 - ratio
dnf_by_constructor = data.groupby('constructor').sum()['constructor_dnf']
constructor_race_entered = data.groupby('constructor').count()['constructor_dnf']
constructor_dnf_ratio = (dnf_by_constructor / constructor_race_entered)
constructor_reliability = 1 - constructor_dnf_ratio
constructor_reliability_dict = dict(zip(constructor_reliability.index, constructor_reliability))
constructor_reliability_dict

data['driver_confidence'] = data['driver'].apply(lambda x: driver_confidence_dict[x])
data['constructor_reliability'] = data['constructor'].apply(lambda x: constructor_reliability_dict[x])

# flag active (non-retired) drivers and constructors
active_constructors = ['Alpine F1', 'Williams', 'McLaren', 'Ferrari', 'Mercedes',
                       'AlphaTauri', 'Aston Martin', 'Alfa Romeo', 'Red Bull',
                       'Haas F1 Team']
active_drivers = ['Daniel Ricciardo', 'Mick Schumacher', 'Carlos Sainz',
                  'Valtteri Bottas', 'Lance Stroll', 'George Russell',
                  'Lando Norris', 'Sebastian Vettel', 'Kimi Räikkönen',
                  'Charles Leclerc', 'Lewis Hamilton', 'Yuki Tsunoda',
                  'Max Verstappen', 'Pierre Gasly', 'Fernando Alonso',
                  'Sergio Pérez', 'Esteban Ocon', 'Antonio Giovinazzi',
                  'Nikita Mazepin', 'Nicholas Latifi']
data['active_driver'] = data['driver'].apply(lambda x: int(x in active_drivers))
data['active_constructor'] = data['constructor'].apply(lambda x: int(x in active_constructors))

data.head()
data.columns
_____no_output_____
UPL-1.0
beginners/04.ML_Modelling.ipynb
MKulfan/redbull-analytics-hol
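The driver_confidence computation above is one minus the driver's DNF ratio; here it is on toy race records (drivers 'A' and 'B' are made up):

```python
import pandas as pd

races = pd.DataFrame({
    'driver': ['A', 'A', 'A', 'A', 'B', 'B'],
    'driver_dnf': [1, 0, 0, 0, 1, 1],   # 1 = did not finish
})

# DNFs per driver divided by races entered, subtracted from 1
dnf = races.groupby('driver')['driver_dnf'].sum()
entered = races.groupby('driver')['driver_dnf'].count()
confidence = 1 - dnf / entered
print(confidence.to_dict())  # {'A': 0.75, 'B': 0.0}
```

Driver A finished 3 of 4 races (confidence 0.75); driver B finished none (confidence 0.0). The constructor_reliability feature follows the identical pattern grouped by constructor.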