Columns: markdown, code, output, license, path, repo_name
It's really interesting that some common French traffic signs are not present in the INI German traffic signs dataset, or differ. Whatever our input - even if it's not present in the training dataset - by using softmax activation our classifier cannot say 'this is a new traffic sign that it doesn't recognize' (sum of ...
#Normalize the dataset X_frenchsign_norm = input_normalization(images_frenchsign) #One-hot matrix y_frenchsign_onehot = keras.utils.to_categorical(y_frenchsign, n_classes) #Load saved model reconstructed = keras.models.load_model("LeNet_enhanced_trainingdataset_HLS.h5") #Evaluate and display th...
Image 0 - Target = 13, Predicted = 6 Image 1 - Target = 31, Predicted = 6 Image 2 - Target = 29, Predicted = 6 Image 3 - Target = 24, Predicted = 6 Image 4 - Target = 26, Predicted = 6 Image 5 - Target = 27, Predicted = 6 Image 6 - Target = 33, Predicted = 6 Image 7 - Target = 17, Predicted = 6 Image 8 - Target = 15, P...
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
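The softmax limitation described above can be sketched directly in plain NumPy, independently of the notebook's model:

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Logits for an input the classifier has never seen: the output is still
# a probability distribution over the known classes, summing to 1, so the
# model cannot abstain and must "pick" one of them.
logits = np.array([0.34, 0.23, 0.18, -0.1])
probs = softmax(logits)
print(probs.sum())
```

A practical workaround, not shown in the notebook, is to threshold the maximum probability and treat low-confidence predictions as "unknown".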
*French traffic signs to classify ↓*
#### plot softmax probs along with traffic sign examples n_img = X_frenchsign_norm.shape[0] fig, axarray = plot.subplots(n_img, 2) plot.suptitle('Visualization of softmax probabilities', fontweight='bold') for r in range(0, n_img): axarray[r, 0].imshow(numpy.squeeze(images_frenchsign[r])) axarray[r, 0].s...
Top 3 model predictions for image 0 (Target is 13) Prediction = 02 with probability 0.0250 (logit is 0.1806) Prediction = 31 with probability 0.0262 (logit is 0.2275) Prediction = 06 with probability 0.0292 (logit is 0.3377) Top 3 model predictions for image 1 (Target is 31) Prediction = 02 with probability...
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
*Visualization of softmax probabilities ↓* Visualization of layers
### Import tensorflow and keras import tensorflow as tf from tensorflow import keras from tensorflow.keras import Model import matplotlib.pyplot as plot print ("TensorFlow version: " + tf.__version__) # Load pickled data import pickle import numpy training_file = 'traffic-signs-data/train.p' with open(training_fil...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Display analyzed input
plot.imshow(X_train[900]) def display_layer(outputs_history, col_size, row_size, layer_index): activation = outputs_history[layer_index] activation_index = 0 fig, ax = plot.subplots(row_size, col_size, figsize=(row_size*2.5,col_size*1.5)) for row in range(0,row_size): for col in ...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Embed an Escher map in an IPython notebook
escher.list_available_maps() b = escher.Builder(map_name='e_coli_core.Core metabolism') b.display_in_notebook()
_____no_output_____
MIT
.ipynb_checkpoints/Nucleotide metabolism-checkpoint.ipynb
polybiome/PolyEnzyme
Plot FBA solutions in Escher
model = cobra.io.load_json_model( "iECW_1372.json") # E coli metabolic model FBA_Solution = model.optimize() # FBA of the original model print('Original Growth rate: %.9f' % FBA_Solution.f) b = escher.Builder(map_name='e_coli_core.Core metabolism', reaction_data=FBA_Solution.x_dict, ...
_____no_output_____
MIT
.ipynb_checkpoints/Nucleotide metabolism-checkpoint.ipynb
polybiome/PolyEnzyme
Methods
class ourCircle: pi = 3.14 def __init__(self,radius=1): self.radius = radius self.area = self.getArea(radius) def setRadius(self,new_radius): self.radius = new_radius self.area = new_radius * new_radius * self.pi def getCircumference(self): return sel...
_____no_output_____
MIT
Object Oriented Programming.ipynb
ramsvijay/basic_datatype_python
Inheritance
class Animal: def __init__(self): print("Animal Object Cost Created") def whoAmI(self): print("I am Animal Class") def eat(self): print("I am eating") a = Animal() a.eat() class Man(Animal): def __init__(self): Animal.__init__(self) m = Man() m.eat() #Polymorphism #Exc...
************* Module capitalize_text capitalize_text.py:1:0: C0111: Missing module docstring (missing-docstring) capitalize_text.py:1:0: C0111: Missing function docstring (missing-docstring) ----------------------------------- Your code has been rated at 0.00/10
MIT
Object Oriented Programming.ipynb
ramsvijay/basic_datatype_python
Simulating Grover's Search Algorithm with 2 Qubits
import numpy as np from matplotlib import pyplot as plt %matplotlib inline
_____no_output_____
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
Define the zero and one vectors. Define the initial state $\psi$.
zero = np.matrix([[1],[0]]); one = np.matrix([[0],[1]]); psi = np.kron(zero,zero); print(psi)
[[1] [0] [0] [0]]
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
Define the gates we will use: $\text{Id} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\quad H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\quad \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & ...
Id = np.matrix([[1,0],[0,1]]); X = np.matrix([[0,1],[1,0]]); Z = np.matrix([[1,0],[0,-1]]); H = np.sqrt(0.5) * np.matrix([[1,1],[1,-1]]); CNOT = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]); CZ = np.kron(Id,H).dot(CNOT).dot(np.kron(Id,H)); print(CZ)
[[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. -1.]]
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
Define the oracle for Grover's algorithm (take the search answer to be "10"): $\text{oracle} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = (Z \otimes \text{Id})\,CZ$. Use different combinations of $Z \otimes \text{Id}$ to change where the search answer is.
oracle = np.kron(Z,Id).dot(CZ); print(oracle)
[[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. -1. 0.] [ 0. 0. 0. 1.]]
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
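As a sanity check, the oracle built this way can be verified to flip only the sign of the $|10\rangle$ amplitude; a minimal sketch mirroring the notebook's definitions:

```python
import numpy as np

Id = np.eye(2)
Z = np.diag([1.0, -1.0])
H = np.sqrt(0.5) * np.array([[1, 1], [1, -1]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CZ = np.kron(Id, H) @ CNOT @ np.kron(Id, H)   # controlled-Z from CNOT
oracle = np.kron(Z, Id) @ CZ

ten = np.array([0.0, 0.0, 1.0, 0.0])          # the state |10>
print(oracle @ ten)                            # sign of |10> is flipped
```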
Apply the H gates to the input vector, then apply the oracle
psi0 = np.kron(H,H).dot(psi); psi1 = oracle.dot(psi0); print(psi1)
[[ 0.5] [ 0.5] [-0.5] [ 0.5]]
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
Remember that when we measure, the result ("00", "01", "10", "11") is chosen randomly, with probabilities given by the squares of the vector elements.
print(np.multiply(psi1,psi1))
[[ 0.25] [ 0.25] [ 0.25] [ 0.25]]
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
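Measurement itself can be simulated by sampling outcomes with these squared-amplitude probabilities; a small sketch not in the original notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
psi1 = np.array([0.5, 0.5, -0.5, 0.5])
probs = psi1 ** 2                      # Born rule: squared amplitudes
outcomes = rng.choice(['00', '01', '10', '11'], size=10_000, p=probs)
freqs = {s: float(np.mean(outcomes == s)) for s in ['00', '01', '10', '11']}
print(freqs)                           # each outcome near 25% at this stage
```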
There is no difference between any of the probabilities: it's still just a 25% chance of getting the right answer. We need some more gates after the oracle, before measuring, to converge on the right answer. These gates perform the operation $W = \frac{1}{2}\begin{pmatrix} -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 \\ 1 & 1 & -1 & 1 \\ 1 ...
W = np.kron(H,H).dot(np.kron(Z,Z)).dot(CZ).dot(np.kron(H,H)); print(W) psif = W.dot(psi1); print(np.multiply(psif,psif)) x = [0,1,2,3]; xb = [0.25,1.25,2.25,3.25]; labels=['00', '01', '10', '11']; plt.axis([-0.5,3.5,-1.25,1.25]); plt.xticks(x,labels); plt.bar(x, np.ravel(psi0), 1/1.5, color="red"); plt.bar(xb, np.ravel...
_____no_output_____
Apache-2.0
lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb
UAAppComp/studyGroup
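Putting the whole circuit together confirms that, for 2 qubits, a single Grover iteration concentrates all the probability on the marked answer "10"; a self-contained sketch of the notebook's steps:

```python
import numpy as np

Id = np.eye(2)
Z = np.diag([1.0, -1.0])
H = np.sqrt(0.5) * np.array([[1, 1], [1, -1]])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

psi = np.kron([1.0, 0.0], [1.0, 0.0])             # |00>
psi0 = np.kron(H, H) @ psi                         # uniform superposition
oracle = np.kron(Z, Id) @ CZ                       # marks |10>
W = np.kron(H, H) @ np.kron(Z, Z) @ CZ @ np.kron(H, H)
psif = W @ oracle @ psi0
print(psif ** 2)                                   # all probability on "10"
```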
Install Earth Engine API. Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](http...
import subprocess try: import geehydro except ImportError: print('geehydro package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
_____no_output_____
MIT
Visualization/image_color_ramp.ipynb
pberezina/earthengine-py-notebooks
Import libraries
import ee import folium import geehydro
_____no_output_____
MIT
Visualization/image_color_ramp.ipynb
pberezina/earthengine-py-notebooks
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize()
_____no_output_____
MIT
Visualization/image_color_ramp.ipynb
pberezina/earthengine-py-notebooks
Create an interactive map. This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `...
Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID')
_____no_output_____
MIT
Visualization/image_color_ramp.ipynb
pberezina/earthengine-py-notebooks
Add Earth Engine Python script
# Load SRTM Digital Elevation Model data. image = ee.Image('CGIAR/SRTM90_V4'); # Define an SLD style of discrete intervals to apply to the image. sld_intervals = \ '<RasterSymbolizer>' + \ '<ColorMap type="intervals" extended="false" >' + \ '<ColorMapEntry color="#0000ff" quantity="0" label="0"/>' + \ ...
_____no_output_____
MIT
Visualization/image_color_ramp.ipynb
pberezina/earthengine-py-notebooks
Display Earth Engine data layers
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map
_____no_output_____
MIT
Visualization/image_color_ramp.ipynb
pberezina/earthengine-py-notebooks
Feature Engineering In the previous classes we saw the fundamental ideas of machine learning, but all the examples assumed we already had numerical data in a tidy ``[n_samples, n_features]`` format. In reality, it is rare for data to arrive like that, ready to go. ...
data = [ {'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'}, {'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'}, {'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'}, {'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'} ]
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
You might be tempted to encode this data directly with a numeric mapping:
{'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3};
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
It turns out this is not a good idea. In Scikit-Learn, and in general, models assume that numerical data reflects algebraic quantities. Using a mapping like this implies, for example, that *Queen Anne < Fremont < Wallingford*, or even that *Wallingford - Queen Anne = Fremont*, which doesn't make much sense. A technique ...
from sklearn.feature_extraction import DictVectorizer vec = DictVectorizer(sparse=False, dtype=int) vec.fit_transform(data)
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
Note that the `neighborhood` feature has been expanded into three separate columns, representing the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood. With the data encoded this way, we can proceed to fit a model in Scikit-Learn. To see the meaning ...
vec.get_feature_names()
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
There is a clear disadvantage to this approach: if the categories have many possible values, the dataset can grow too large. However, since the encoded data contains mostly zeros, a sparse matrix can be an efficient solution:
vec = DictVectorizer(sparse=True, dtype=int) vec.fit_transform(data)
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
Several (but not all) of the estimators in Scikit-Learn accept sparse inputs. ``sklearn.preprocessing.OneHotEncoder`` and ``sklearn.feature_extraction.FeatureHasher`` are two additional tools for working with this kind of feature. Text Another common need is to convert text into a series ...
sample = ['problem of evil', 'evil queen', 'horizon problem']
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
To vectorize this data we would build a column for the words "problem," "evil," "horizon," etc. Doing this by hand is possible, but we can save ourselves the tedium by using Scikit-Learn's ``CountVectorizer``:
from sklearn.feature_extraction.text import CountVectorizer vec = CountVectorizer() X = vec.fit_transform(sample) X
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
The result is a sparse matrix recording how many times each word appears in the texts. To inspect it easily we can convert it into a ``DataFrame``:
import pandas as pd pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
Something is still missing. This approach can run into problems: raw word counts can make some features weigh more than others simply because of how frequently the words are used, and this can be suboptimal for some classification algorithms. One way to account for this is to use the _fre...
from sklearn.feature_extraction.text import TfidfVectorizer vec = TfidfVectorizer() X = vec.fit_transform(sample) pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
We will see this in more detail in the Naive Bayes class. Derived Features Another useful type of feature is one derived mathematically from other features in the input data. We saw an example in the Hyperparameters class when we built polynomial features from the data. ...
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import matplotlib.pyplot as plt x = np.array([1, 2, 3, 4, 5]) y = np.array([4, 2, 1, 3, 7]) plt.scatter(x, y);
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
If we fit a straight line to the data using ``LinearRegression``, we obtain the optimal line:
from sklearn.linear_model import LinearRegression X = x[:, np.newaxis] model = LinearRegression().fit(X, y) yfit = model.predict(X) plt.scatter(x, y) plt.plot(x, yfit);
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
It's optimal, but it's also clear that we need a more sophisticated model to describe the relationship between $x$ and $y$. One way to achieve this is to transform the data, adding extra columns or features that give the model more flexibility. For example, we can add polynomial features of ...
from sklearn.preprocessing import PolynomialFeatures poly = PolynomialFeatures(degree=3, include_bias=False) X2 = poly.fit_transform(X) print(X2)
[[ 1. 1. 1.] [ 2. 4. 8.] [ 3. 9. 27.] [ 4. 16. 64.] [ 5. 25. 125.]]
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
This _derived_ feature matrix has one column representing $x$, a second column representing $x^2$, and a third representing $x^3$. Computing a linear regression on this input gives a much closer fit to our data:
model = LinearRegression().fit(X2, y) yfit = model.predict(X2) plt.scatter(x, y) plt.plot(x, yfit);
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
The idea of improving a model not by changing the model itself but by transforming its input is fundamental to many of the most powerful machine learning techniques. We will explore this idea further in the Linear Regression class. This path is motivating and can be generalized with the techniques known as _kernel me...
from numpy import nan X = np.array([[ nan, 0, 3 ], [ 3, 7, 9 ], [ 3, 5, 2 ], [ 4, nan, 6 ], [ 8, 8, 1 ]]) y = np.array([14, 16, -1, 8, -5])
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
Before applying a model to this data we need to replace the missing entries with some appropriate fill value. This is known as _imputation_ of missing values, and the strategies for doing it range from the simplest (such as filling with the mean of each column) to the most sophisticated (such as ...
# Note: Imputer is deprecated in newer scikit-learn; use sklearn.impute.SimpleImputer there from sklearn.preprocessing import Imputer imp = Imputer(strategy='mean') X2 = imp.fit_transform(X) X2
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
As we can see, applying the imputer replaced the two missing values with the mean of the values present in the respective columns. Now that we have a matrix with no missing values, we can use it with a model instance, in this case a linear regression:
model = LinearRegression().fit(X2, y) model.predict(X2)
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
Processing Chain (_Pipeline_) Considering the examples we have seen, it can get tedious to perform each of these transformations by hand. Sometimes we will want to automate the processing pipeline for a model. Imagine a sequence like the following: 1. Impute values using the mean. 2. Trans...
from sklearn.pipeline import make_pipeline model = make_pipeline(Imputer(strategy='mean'), PolynomialFeatures(degree=2), LinearRegression())
_____no_output_____
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
This chain, or _pipeline_, looks and acts like a standard Scikit-Learn object, so we can use it with everything we have seen so far that follows the Scikit-Learn usage recipe.
model.fit(X, y) # X with missing values print(y) print(model.predict(X))
[14 16 -1 8 -5] [14. 16. -1. 8. -5.]
MIT
05.04-Feature-Engineering.ipynb
sebaspee/intro_machine_learning
Pre-training VGG16 for Distillation
import torch import torch.nn as nn from src.data.dataset import get_dataloader import torchvision.transforms as transforms import numpy as np import matplotlib.pyplot as plt DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(DEVICE) SEED = 0 BATCH_SIZE = 32 LR = 5e-4 NUM_EPOCHES = 25 np.ra...
_____no_output_____
MIT
VGG16_CIFAR10.ipynb
UdbhavPrasad072300/CPS803_Final_Project
Preprocessing
transform = transforms.Compose([ transforms.RandomHorizontalFlip(), #transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)) ]) train_loader, val_loader, test_loader = get_dataloader("./data/CIFAR10/", BATCH_SIZE)
Files already downloaded and verified Files already downloaded and verified
MIT
VGG16_CIFAR10.ipynb
UdbhavPrasad072300/CPS803_Final_Project
Model
from src.models.model import VGG16_classifier classes = 10 hidden_size = 512 dropout = 0.3 model = VGG16_classifier(classes, hidden_size, preprocess_flag=False, dropout=dropout).to(DEVICE) model for img, label in train_loader: img = img.to(DEVICE) label = label.to(DEVICE) print("Input Image Dimensions...
Input Image Dimensions: torch.Size([32, 3, 32, 32]) Label Dimensions: torch.Size([32]) ----------------------------------------------------------------------------------------------------
MIT
VGG16_CIFAR10.ipynb
UdbhavPrasad072300/CPS803_Final_Project
Training
criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(params=model.parameters(), lr=LR) loss_hist = {"train accuracy": [], "train loss": [], "val accuracy": []} for epoch in range(1, NUM_EPOCHES+1): model.train() epoch_train_loss = 0 y_true_train = [] y_pred_train = [] ...
_____no_output_____
MIT
VGG16_CIFAR10.ipynb
UdbhavPrasad072300/CPS803_Final_Project
Testing
with torch.no_grad(): model.eval() y_true_test = [] y_pred_test = [] for batch_idx, (img, labels) in enumerate(test_loader): img = img.to(DEVICE) labels = labels.to(DEVICE) preds = model(img) y_pred_test.extend(preds.detach().argmax(dim=-1).tolist())...
Test Accuracy%: 81.04 == 4052 / 5000
MIT
VGG16_CIFAR10.ipynb
UdbhavPrasad072300/CPS803_Final_Project
Saving Model Weights
torch.save(model.state_dict(), "./trained_models/vgg16_cifar10.pt")
_____no_output_____
MIT
VGG16_CIFAR10.ipynb
UdbhavPrasad072300/CPS803_Final_Project
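For completeness, reloading a saved `state_dict` follows the inverse pattern; a minimal sketch with a tiny stand-in module (not the project's `VGG16_classifier`):

```python
import torch
import torch.nn as nn

# Tiny stand-in model used only to illustrate the save/load round trip.
net = nn.Linear(4, 2)
torch.save(net.state_dict(), "/tmp/demo_weights.pt")

net2 = nn.Linear(4, 2)                  # fresh instance, random weights
net2.load_state_dict(torch.load("/tmp/demo_weights.pt"))
net2.eval()                             # switch to inference mode
print(torch.equal(net.weight, net2.weight))
```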
HELLO WORLD
with Error import matplotlib import matplotlib.pyplot as plt import numpy as np t = np.arange(0.0, 2.0, 0.01) s = 1 + np.sin(2 * np.pi * t) fig, ax = plt.subplots() ax.plot(t, s) ax.set(xlabel='time (s)', ylabel='voltage (mV)', title='About as simple as it gets, folks') ax.grid() fig.savefig('test.png') plt....
_____no_output_____
MIT
src/test/datascience/notebook/withOutputForTrust.ipynb
jakebailey/vscode-jupyter
_Mini Program - Working with SQLite DB using Python_ Objective - 1. This program gives an idea of how to connect to a SQLite DB using Python and perform data manipulation 2. There are two ways in which tables are created below, to help you understand the robustness of this language Step 1 - Import required libraries In thi...
#Importing the required modules import sqlite3 import pandas as pd import os
_____no_output_____
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
Step 2 - Creating a function to drop the table A function gives us a reusable component that can be used conveniently and easily in other parts of the code. In Line 1 - We state the function name and specify the parameter being passed. In this case, the parameter is the table name. In Line 2 - We write the SQL qu...
#Creating a function to drop the table if it exists def dropTbl(tablename): dropTblStmt = "DROP TABLE IF EXISTS " + tablename c.execute(dropTblStmt)
_____no_output_____
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
Step 3 - We create the database in which our table will reside. In Line 1 - We remove the already existing database file. In Line 2 - We use the connect function from the sqlite3 module to create a database studentGrades.db and establish a connection. In Line 3 - We create a cursor on the database connection. This hel...
#Removing the database file if it exists if os.path.exists('studentGrades.db'): os.remove('studentGrades.db') #Creating a new database - studentGrades.db conn = sqlite3.connect("studentGrades.db") c = conn.cursor()
_____no_output_____
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
Step 4 - We create a table in the SQLite DB using data defined in the CSV file. This is the first method for creating a table: you can use the to_sql function directly to read a dataframe and dump all its content to the table. In Line 1 - We make use of the dropTbl function created above to drop the table. In Li...
#Reading data from csv file - student details, grades and subject dropTbl('STUDENT') student_details = pd.read_csv("Datafiles/studentDetails.csv") student_details.to_sql('STUDENT',conn,index = False) dropTbl('GRADES') student_grades = pd.read_csv('Datafiles/studentGrades.csv') student_grades.to_sql('GRADES',conn,index...
_____no_output_____
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
Step 5 - We create a master table STUDENT_GRADE_MASTER where we collate the data from the individual tables by performing join operations. In Line 1 - We make use of the dropTbl function created above to drop the table. In Line 2 - We write the SQL query for table creation. In Line 3 - We use the curs...
#Creating a table to store student master data dropTbl('STUDENT_GRADE_MASTER') createTblStmt = '''CREATE TABLE STUDENT_GRADE_MASTER ([Roll_number] INTEGER, [Student_Name] TEXT, [Stream] TEXT, [Subject] TEXT, [Marks] ...
_____no_output_____
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
Step 6 - We can fetch data just like in SQL using the sqlite3 module. In Line 1 - We write a query to find the number of records in the master table. In Line 2 - We execute the query created above. In Line 3 - The fetchall function is used to get the result returned by the query. The result will be in ...
#Finding the key data from the master table #1. Find the number of records in the master table query_count = '''SELECT COUNT(*) FROM STUDENT_GRADE_MASTER''' c.execute(query_count) number_of_records = c.fetchall() print(number_of_records) #2. Maximum marks for each subject query_max_marks = '''SELECT Subject,max(Mark...
[(20,)] [('C', 97), ('C++', 95), ('Environmental studies', 92), ('Java', 96), ('Maths', 98)] [('Abhishek', 94.2), ('Anand', 85.2), ('Sourabh', 89.0), ('Vivek', 84.8)]
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
Step 7 - We close the database connection. It is always good practice to close the database connection after all operations are completed.
#Closing the connection conn.close()
_____no_output_____
Apache-2.0
SQLLiteDBConnection/workingWithSQLLiteDB.ipynb
Snigdha171/PythonMiniProgramSeries
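An alternative to calling `conn.close()` manually is to scope the connection with `contextlib.closing`, which releases it even if a query raises; a small sketch using an in-memory database:

```python
import sqlite3
from contextlib import closing

with closing(sqlite3.connect(":memory:")) as conn:
    c = conn.cursor()
    c.execute("CREATE TABLE t (x INTEGER)")
    c.execute("INSERT INTO t VALUES (1)")
    rows = c.execute("SELECT COUNT(*) FROM t").fetchall()
# The connection is closed here, with no explicit conn.close() call.
print(rows)
```

Note that `with sqlite3.connect(...)` on its own manages transactions, not closing, which is why `closing()` is used here.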
**POINT 2**
premio1 = "Viaje todo incluído para dos personas a San Andrés" premio2 = "una pasadía a los termales de San Vicente incluyendo almuerzo" premio3 = "Viaje todo incluido para dos personas a Santa Marta" premio4 = "Pasadía al desierto de Tatacoa (Sin incluír alimentación)" rosada = premio1 verde = premio2 azul = premio3...
por favor digite el nombre del concursante Angie Digite el color de la balota rosada Digite un valor variable 2000000 Digite los años de antiguedad del cliente 14 Digite los referidos del cliente 1 ¿El cliente tiene liderazgo en los programas de cooperación de viajes internos? si La empresa VIVAFLY se complace en anunc...
MIT
TALLER1.ipynb
AngieCat26/MujeresDigitales
Tutorial This tutorial will introduce you to the *fifa_preprocessing* package's functionality! In general, the following functions will allow you to preprocess your data for machine learning or statistical data analysis by reformatting, casting, or deleting certain values. The data used in these examples comes ...
import fifa_preprocessing as fp import pandas as pd import math
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Load your data:
data = pd.read_csv('data.csv') data
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Exclude goalkeepers. Before any preprocessing, the data contains all the players.
data[['Name', 'Position']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
This command will exclude goalkeepers from your data set (i.e. delete all the rows where column 'Position' is equal to 'GK'):
data = fp.exclude_goalkeepers(data)
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
As you may notice, row number 3 was deleted.
data[['Name', 'Position']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
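The effect of `exclude_goalkeepers` can be reproduced with a plain pandas filter; a sketch on hypothetical stand-in data, not the actual FIFA file:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['L. Messi', 'De Gea', 'E. Hazard'],
                   'Position': ['RF', 'GK', 'LF']})
# Keep every row whose 'Position' is not 'GK', then renumber the index.
df = df[df['Position'] != 'GK'].reset_index(drop=True)
print(df['Name'].tolist())
```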
Format currencies. To remove unnecessary characters from a monetary value, use:
money = '€23.4M' fp.money_format(money)
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
The value will be expressed in thousands of euros:
money = '€7K' fp.money_format(money)
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
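Based on the two examples above, a parser with this behaviour might look like the following sketch (values returned in thousands of euros; the real `fp.money_format` may differ internally):

```python
def parse_money(value):
    # Hypothetical re-implementation for illustration only.
    value = value.lstrip('€')
    if value.endswith('M'):
        return float(value[:-1]) * 1000   # millions -> thousands
    if value.endswith('K'):
        return float(value[:-1])          # already in thousands
    return float(value)

print(parse_money('€23.4M'), parse_money('€7K'))  # 23400.0 7.0
```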
Format players' rating. In FIFA, players get a rating of their skills on the pitch. The rating is represented as a sum of two integers. The following function takes a string containing two numbers separated by a '+' and returns the actual sum:
rating = '81+3' fp.rating_format(rating)
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Format players' work rate. The next function takes a qualitative parameter that can be expressed as a quantitative value. If you have a data set where a category is expressed as 'High', 'Medium' or 'Low', this function assigns numbers to these values (2, 1 and 0 respectively):
fp.work_format('High') fp.work_format('Medium') fp.work_format('Low')
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
In fact, the function returns 0 whenever the passed-in parameter is different from 'High' and 'Medium':
fp.work_format('Mediocre')
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
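The mapping described above is small enough to sketch directly (a hypothetical re-implementation; the real `fp.work_format` may differ internally):

```python
def work_rate_to_int(level):
    # 'High' -> 2, 'Medium' -> 1, anything else -> 0.
    return {'High': 2, 'Medium': 1}.get(level, 0)

print([work_rate_to_int(v) for v in ['High', 'Medium', 'Low', 'Mediocre']])
# [2, 1, 0, 0]
```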
Cast to int. This simple function casts a float to int, but also adds extra flexibility by returning 0 when it encounters a NaN (Not a Number):
fp.to_int(3.24) import numpy nan = numpy.nan fp.to_int(nan)
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Apply format of choice. This generic function lets you choose what format to apply to every value in the columns of the data frame you specify.
data[['Name', 'Jersey Number', 'Skill Moves', 'Weak Foot']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
By "format" we mean a function that operates on the values in the specified columns:
columns = ['Jersey Number', 'Skill Moves', 'Weak Foot'] format_fun = fp.to_int data = fp.apply_format(data, columns, format_fun) data[['Name', 'Jersey Number', 'Skill Moves', 'Weak Foot']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Dummy variables. If we intend to build machine learning models to explore our data, we usually cannot extract information from qualitative data directly. Here 'Club' and 'Preferred Foot' are categories that could carry interesting information. To be able to use them in our machine learning algorithms we can get dummy ...
data[['Name', 'Preferred Foot']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
If we choose 'Preferred Foot', new columns will be added; their titles will be the same as the values in the 'Preferred Foot' column: 'Left' and 'Right'. So now instead of seeing 'Left' in the 'Preferred Foot' column we will see 1 in the 'Left' column (and 0 in 'Right').
data = fp.to_dummy(data, ['Preferred Foot']) data[['Name', 'Left', 'Right']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
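pandas' built-in `get_dummies` produces the same kind of 0/1 indicator columns; a sketch on stand-in data, not the actual FIFA file:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['A', 'B'], 'Preferred Foot': ['Left', 'Right']})
# Empty prefix so the new columns are named exactly 'Left' and 'Right'.
df = pd.get_dummies(df, columns=['Preferred Foot'], prefix='', prefix_sep='')
print(sorted(df.columns))  # ['Left', 'Name', 'Right']
```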
Learn more about [dummy variables](https://en.wikiversity.org/wiki/Dummy_variable_(statistics)). The data frame will no longer contain the columns we transformed:
'Preferred Foot' in data
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
We can get dummy variables for multiple columns at once.
data[['Name', 'Club', 'Position']] data = fp.to_dummy(data, ['Club', 'Nationality']) data[['Name', 'Paris Saint-Germain', 'Manchester City', 'Brazil', 'Portugal']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Split work rate column. In FIFA, the players' work rate is stored in a special way: two qualitative values separated by a slash:
data[['Name', 'Work Rate']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
This next function allows you to split column 'Work Rate' into 'Defensive Work Rate' and 'Offensive Work Rate':
data = fp.split_work_rate(data) data[['Name', 'Defensive Work Rate', 'Offensive Work Rate']]
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Default preprocessing. To perform all the basic preprocessing (optimized for the FIFA 19 data set) on your data, simply run:
data = pd.read_csv('data.csv') fp.preprocess(data)
_____no_output_____
MIT
tutorial/tutorial.ipynb
piotrfratczak/fifa_preprocessing
Bias Goals. In this notebook, you're going to explore a way to identify some biases of a GAN using a classifier, in a way that's well-suited for attempting to make a model independent of an input. Note that not all biases are as obvious as the ones you will see here. Learning Objectives: 1. Be able to distinguish a few ...
import torch import numpy as np from torch import nn from tqdm.auto import tqdm from torchvision import transforms from torchvision.utils import make_grid from torchvision.datasets import CelebA from torch.utils.data import DataLoader import matplotlib.pyplot as plt torch.manual_seed(0) # Set for our testing purposes, ...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
Generator and Noise
class Generator(nn.Module): ''' Generator Class Values: z_dim: the dimension of the noise vector, a scalar im_chan: the number of channels in the images, fitted for the dataset used, a scalar (CelebA is rgb, so 3 is your default) hidden_dim: the inner dimension, a scala...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
Classifier
class Classifier(nn.Module): ''' Classifier Class Values: im_chan: the number of channels in the images, fitted for the dataset used, a scalar (CelebA is rgb, so 3 is your default) n_classes: the total number of classes in the dataset, an integer scalar hidden_dim: the ...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
Specifying Parameters. You will also need to specify a few parameters before you begin training: * z_dim: the dimension of the noise vector * batch_size: the number of images per forward/backward pass * device: the device type
z_dim = 64
batch_size = 128
device = 'cuda'
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
Train a Classifier (Optional). You're welcome to train your own classifier with this code, but a pre-trained one based on this architecture is provided, which you can load and use in the next section.
# You can run this code to train your own classifier, but there is a provided pre-trained one # If you'd like to use this, just run "train_classifier(filename)" # To train and save a classifier on the label indices to that filename def train_classifier(filename): import seaborn as sns import matplotlib.pyplot ...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
Loading the Pre-trained Models. You can now load the pre-trained generator (trained on CelebA) and classifier using the following code. If you trained your own classifier, you can load that one here instead. However, it is suggested that you first go through the assignment using the pre-trained one.
import torch gen = Generator(z_dim).to(device) gen_dict = torch.load("pretrained_celeba.pth", map_location=torch.device(device))["gen"] gen.load_state_dict(gen_dict) gen.eval() n_classes = 40 classifier = Classifier(n_classes=n_classes).to(device) class_dict = torch.load("pretrained_classifier.pth", map_location=torch...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
Feature Correlation. Now you can generate images using the generator. By also using the classifier, you will be generating images with different amounts of the "male" feature. You are welcome to experiment with other features as the target feature, but it is encouraged that you initially go through the notebook as is bef...
# First you generate a bunch of fake images with the generator n_images = 256 fake_image_history = [] classification_history = [] grad_steps = 30 # How many gradient steps to take skip = 2 # How many gradient steps to skip in the visualization feature_names = ["5oClockShadow", "ArchedEyebrows", "Attractive", "BagsUnde...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
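The loop above nudges the noise vectors along the classifier's gradient for the target feature. The core idea, gradient ascent on an input, can be illustrated in pure numpy with a toy linear "score" standing in for the real classifier (everything here is a hypothetical stand-in, not the notebook's model):

```python
import numpy as np

def score(z, w):
    # Toy linear classifier score; the real notebook uses a CNN classifier
    return float(z @ w)

def gradient_step(z, w, lr=0.1):
    # For the linear toy score, d(score)/dz = w, so step in that direction
    return z + lr * w

z = np.zeros(4)                       # stand-in for a noise vector
w = np.array([1.0, -1.0, 0.5, 0.0])   # stand-in for the classifier gradient
for _ in range(10):
    z = gradient_step(z, w)           # score(z, w) increases each step
```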
You've now generated image samples, which have increasing or decreasing amounts of the target feature. You can visualize the way in which that affects other classified features. The x-axis will show you the amount of change in your target feature and the y-axis shows how much the other features change, as detected in t...
import seaborn as sns # Set the other features other_features = ["Smiling", "Bald", "Young", "HeavyMakeup", "Attractive"] classification_changes = (classification_history - starting_classifications[None, :, :]).numpy() for other_feature in other_features: other_indices = feature_names.index(other_feature) with ...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
This correlation detection can be used to reduce bias by penalizing this type of correlation in the loss during the training of the generator. However, currently there is no rigorous and accepted solution for debiasing GANs. A first step that you can take in the right direction comes before training the model: make sur...
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED CELL: get_top_covariances def get_top_covariances(classification_changes, target_index, top_n=10): ''' Function for getting the top n covariances: Given a list of classification changes and the index of the target feature, returns (1) a list or tensor ...
_____no_output_____
Apache-2.0
12-Bias.ipynb
pedro-abundio-wang/GANs
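One plausible way to fill in `get_top_covariances` is via numpy's covariance matrix. This is only a sketch under stated assumptions (features on the last axis, ranking by absolute covariance); the graded version's exact ordering and return types may differ:

```python
import numpy as np

def get_top_covariances_sketch(classification_changes, target_index, top_n=10):
    """Top-n covariances of the other features with the target feature."""
    arr = np.asarray(classification_changes)
    # Collapse leading dimensions so each row is one observation
    flat = arr.reshape(-1, arr.shape[-1])
    cov_matrix = np.cov(flat, rowvar=False)   # feature-by-feature covariances
    # Drop the target's covariance with itself, keep the remaining indices
    target_row = np.delete(cov_matrix[target_index], target_index)
    other_indices = np.delete(np.arange(cov_matrix.shape[0]), target_index)
    order = np.argsort(np.abs(target_row))[::-1][:top_n]
    return other_indices[order], target_row[order]
```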
Neural network hybrid recommendation system on Google Analytics data: model and training. This notebook demonstrates how to implement a hybrid recommendation system using a neural network to combine content-based and collaborative filtering recommendation models, using Google Analytics data. We are going to use the learne...
!pip install tensorflow_hub
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Now reset the notebook's session kernel! Since we're no longer using Cloud Dataflow, we'll be using the python3 kernel from here on out, so don't forget to change the kernel if it's still python2.
# Import helpful libraries and setup our project, bucket, and region import os import tensorflow as tf import tensorflow_hub as hub PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Create a hybrid recommendation system model using TensorFlow. Now that we've created our training and evaluation input files, as well as our categorical feature vocabulary files, we can create our TensorFlow hybrid recommendation system model. Let's first get some of our aggregate information that we will use in the model...
from tensorflow.python.lib.io import file_io # Get number of content ids from text file in Google Cloud Storage with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocab_counts/content_id_vocab_count.txt*".format(BUCKET))[0], mode = 'r') as ifp: number_of_content_ids = int([x for x in ...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Create an input function for training and evaluation that reads from our preprocessed CSV files.
# Create input function for train and eval def read_dataset(filename, mode, batch_size = 512): def _input_fn(): def decode_csv(value_column): columns = tf.decode_csv(records = value_column, record_defaults = DEFAULTS) features = dict(zip(CSV_COLUMNS, columns)) label = features.pop(LABE...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
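The heart of `decode_csv` is pairing parsed values with column names and popping off the label. A framework-free sketch of that logic follows, with hypothetical column names and defaults (the real cell relies on `tf.decode_csv` and the notebook's own `CSV_COLUMNS`):

```python
import csv
import io

CSV_COLUMNS = ['visitor_id', 'content_id', 'duration']   # hypothetical columns
LABEL_COLUMN = 'duration'
DEFAULTS = ['', '', 0.0]                                 # one default per column

def decode_csv_line(line):
    values = next(csv.reader(io.StringIO(line)))
    # Empty fields fall back to their default; others are cast to the default's type
    parsed = [type(d)(v) if v != '' else d for v, d in zip(values, DEFAULTS)]
    features = dict(zip(CSV_COLUMNS, parsed))
    label = features.pop(LABEL_COLUMN)
    return features, label

print(decode_csv_line('u1,article_7,3.5'))
```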
Next, we will create our feature columns from the features we read in.
# Create feature columns to be used in model def create_feature_columns(args): # Create content_id feature column content_id_column = tf.feature_column.categorical_column_with_hash_bucket( key = "content_id", hash_bucket_size = number_of_content_ids) # Embed content id into a lower dimensional representa...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
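`categorical_column_with_hash_bucket` avoids building a vocabulary by hashing each string id into a fixed number of buckets. A minimal illustration of the hashing trick (not TensorFlow's exact hash function):

```python
import zlib

def hash_bucket(value, hash_bucket_size):
    """Map a string id deterministically into [0, hash_bucket_size)."""
    return zlib.crc32(value.encode('utf-8')) % hash_bucket_size

bucket = hash_bucket('content_12345', 1000)
```

Collisions between ids are accepted by design; the embedding column layered on top then maps each bucket index to a dense vector.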
Now we'll create our model function:
# Create custom model function for our custom estimator def model_fn(features, labels, mode, params): # TODO: Create neural network input layer using our feature columns defined above # TODO: Create hidden layers by looping through hidden unit list # TODO: Compute logits (1 per class) using the output of our la...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Now create a serving input function:
# Create serving input function def serving_input_fn(): feature_placeholders = { colname : tf.placeholder(dtype = tf.string, shape = [None]) \ for colname in NON_FACTOR_COLUMNS[1:-1] } feature_placeholders['months_since_epoch'] = tf.placeholder(dtype = tf.float32, shape = [None]) for colname in FAC...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Now that all of the pieces are assembled, let's create and run our train-and-evaluate loop.
# Create train and evaluate loop to combine all of the pieces together. tf.logging.set_verbosity(tf.logging.INFO) def train_and_evaluate(args): estimator = tf.estimator.Estimator( model_fn = model_fn, model_dir = args['output_dir'], params={ 'feature_columns': create_feature_columns(args), 'hi...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Run train_and_evaluate!
# Call train and evaluate loop import shutil outdir = 'hybrid_recommendation_trained' shutil.rmtree(outdir, ignore_errors = True) # start fresh each time arguments = { 'bucket': BUCKET, 'train_data_paths': "gs://{}/hybrid_recommendation/preproc/features/train.csv*".format(BUCKET), 'eval_data_paths': "gs://{}/hy...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Run the module locally. Now let's place our code into a Python module with model.py and task.py files so that we can train using Google Cloud's ML Engine! First, let's test our module locally.
%writefile requirements.txt tensorflow_hub %%bash echo "bucket=${BUCKET}" rm -rf hybrid_recommendation_trained export PYTHONPATH=${PYTHONPATH}:${PWD}/hybrid_recommendations_module python -m trainer.task \ --bucket=${BUCKET} \ --train_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/train.csv* \ --...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Run on Google Cloud ML Engine. If our module trained fine locally, let's now use the power of ML Engine to scale it out on Google Cloud.
%%bash OUTDIR=gs://${BUCKET}/hybrid_recommendation/small_trained_model JOBNAME=hybrid_recommendation_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/hybrid_recomm...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Let's add some hyperparameter tuning!
%%writefile hyperparam.yaml trainingInput: hyperparameters: goal: MAXIMIZE maxTrials: 5 maxParallelTrials: 1 hyperparameterMetricTag: accuracy params: - parameterName: batch_size type: INTEGER minValue: 8 maxValue: 64 scaleType: UNIT_LINEAR_SCALE - parameterName: le...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
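Conceptually, each tuning trial samples a point from the declared search space. A toy random-search sketch over a hypothetical space echoing the yaml's `batch_size` range (the second parameter name is illustrative only, since the yaml above is truncated; ML Engine's own search strategy is more sophisticated):

```python
import random

random.seed(0)

# Hypothetical search space mirroring hyperparam.yaml's batch_size bounds
SPACE = {'batch_size': (8, 64), 'hidden_units_scale': (0.5, 2.0)}

def sample_trial(space):
    """Draw one random-search trial from the space."""
    return {
        'batch_size': random.randint(*space['batch_size']),
        'hidden_units_scale': random.uniform(*space['hidden_units_scale']),
    }

trials = [sample_trial(SPACE) for _ in range(5)]   # maxTrials: 5
```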
Now that we know the best hyperparameters, run a big training job!
%%bash OUTDIR=gs://${BUCKET}/hybrid_recommendation/big_trained_model JOBNAME=hybrid_recommendation_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/hybrid_recommen...
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb
smartestrobotdai/training-data-analyst
Setting the path
path = Path("C:/Users/shahi/.fastai/data/lgg-mri-segmentation/kaggle_3m") path getMask = lambda x: x.parents[0] / (x.stem + '_mask' + x.suffix) tempImgFile = path/"TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1.tif" tempMaskFile = getMask(tempImgFile) image = open_image(tempImgFile) image image.shape mask = open_mask(ge...
_____no_output_____
Apache-2.0
Brain MRI Segmentation.ipynb
shahidhj/Deep-Learning-notebooks
Building the model: a pretrained ResNet-34 is used for downsampling.
learn = unet_learner(data, models.resnet34, metrics=[dice])
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(4, 1e-4)
learn.unfreeze()
learn.fit_one_cycle(4, 1e-4, wd=1e-2)
_____no_output_____
Apache-2.0
Brain MRI Segmentation.ipynb
shahidhj/Deep-Learning-notebooks
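The `dice` metric passed to the learner measures overlap between the predicted and ground-truth masks. A small numpy sketch of the coefficient (fastai's implementation differs in details such as batching):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice = 2|A∩B| / (|A| + |B|), computed on binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A perfect prediction scores 1.0 and a fully disjoint one scores 0.0, which is why it is a common metric for segmentation tasks with small foreground regions.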
ResNet-34 without pretrained weights, with dilation
learn.save('ResnetWithPrettrained') learn.summary() def conv(ni,nf): return nn.Conv2d(ni, nf, kernel_size=3, stride=2, padding=1,dilation=2) def conv2(ni,nf): return conv_layer(ni,nf) models.resnet34 model = nn.Sequential( conv2(3,8), res_block(8), conv2(8,16), res_block(16), conv2(16,32), r...
_____no_output_____
Apache-2.0
Brain MRI Segmentation.ipynb
shahidhj/Deep-Learning-notebooks
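Dilation spreads the kernel taps apart, enlarging the receptive field without adding parameters; the cell above requests this via `nn.Conv2d(..., dilation=2)`. A 1-D numpy illustration of the idea:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """Valid-mode 1-D convolution with gaps of `dilation` between kernel taps."""
    x = np.asarray(x, dtype=float)
    k = len(kernel)
    span = (k - 1) * dilation + 1           # receptive field of the dilated kernel
    return [float(np.dot(x[i:i + span:dilation], kernel))
            for i in range(len(x) - span + 1)]

# With dilation=2 a 2-tap kernel sees inputs two positions apart
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1], dilation=2))  # → [4.0, 6.0, 8.0]
```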
Supervised Learning: Finding Donors for *CharityML*> Udacity Machine Learning Engineer Nanodegree: _Project 2_>> Author: _Ke Zhang_>> Submission Date: _2017-04-30_ (Revision 3) Content- [Getting Started](Getting-Started)- [Exploring the Data](Exploring-the-Data)- [Preparing the Data](Preparing-the-Data)- [Evaluating ...
# Import libraries necessary for this project import numpy as np import pandas as pd from time import time from IPython.display import display # Allows the use of display() for DataFrames import matplotlib.pyplot as plt import seaborn as sns # Import supplementary visualization code visuals.py import visuals as vs #s...
_____no_output_____
Apache-2.0
p2_sl_finding_donors/p2_sl_finding_donors.ipynb
superkley/udacity-mlnd