Dataset columns: markdown, code, output, license, path, repo_name (all strings).
Check the input and output dimensions
As a check that your model is working as expected, test out how it responds to input data.
# test that dimensions are as expected test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2) # generate evenly spaced, test data pts time_steps = np.linspace(0, np.pi, seq_length) data = np.sin(time_steps) data.resize((seq_length, 1)) test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_siz...
Input size: torch.Size([1, 20, 1]) Output size: torch.Size([20, 1]) Hidden state size: torch.Size([2, 1, 10])
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
---
Training the RNN
Next, we'll instantiate an RNN with some specified hyperparameters, train it over a series of steps, and see how it performs.
# decide on hyperparameters input_size=1 output_size=1 hidden_dim=32 n_layers=1 # instantiate an RNN rnn = RNN(input_size, output_size, hidden_dim, n_layers) print(rnn)
RNN( (rnn): RNN(1, 32, batch_first=True) (fc): Linear(in_features=32, out_features=1, bias=True) )
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
Loss and Optimization
This is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?
> * The data points are coordinate values, so to compare a predicted and ground-truth point, we'll use a regression loss: the mean squared error.
> * It's typical to use an Adam opti...
# MSE loss and Adam optimizer with a learning rate of 0.01 criterion = nn.MSELoss() optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
_____no_output_____
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
Defining the training function
This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions every so often.
Hidden State
Pay close attention to the hidden state here:
* Before looping over a batch of training data...
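The training cell below is truncated in this excerpt. A minimal sketch of such a loop, assuming the `RNN` class, `rnn`, `criterion`, `optimizer`, and `seq_length` defined above (detaching the hidden state between steps is the point to note; the original cell may differ in details):

```python
import numpy as np
import torch

def train(rnn, n_steps, print_every):
    # Start with no hidden state; the RNN initializes it on the first step.
    hidden = None
    for step in range(n_steps):
        # One period of a sine wave as training data for this step.
        time_steps = np.linspace(step * np.pi, (step + 1) * np.pi, seq_length + 1)
        data = np.sin(time_steps).reshape((seq_length + 1, 1))
        x = torch.Tensor(data[:-1]).unsqueeze(0)  # input: all but the last point
        y = torch.Tensor(data[1:])                # target: all but the first point

        prediction, hidden = rnn(x, hidden)
        # Detach the hidden state so gradients don't flow back through
        # the whole history of previous steps.
        hidden = hidden.detach()

        loss = criterion(prediction, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if step % print_every == 0:
            print('Loss:', loss.item())
    return rnn
```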
# train the RNN def train(rnn, n_steps, print_every): # initialize the hidden state hidden = None for batch_i, step in enumerate(range(n_steps)): # defining the training data time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1) data = np.sin(time_st...
C:\Users\johnj\miniconda3\lib\site-packages\torch\autograd\__init__.py:145: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.) Variable._execution_engine.run_backward(
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
Suppose we observe a star in the sky and measure the photon flux. We assume that the flux is constant in time and equal to $F_{\mathtt{true}}$. We take $N$ observations, measuring the flux $F_i$ and the error $e_i$. The detection of a photon is an independent event that follows a random Poisson distribution. ...
N=100 F_true=1000. F=np.random.poisson(F_true*np.ones(N)) e=np.sqrt(F) plt.errorbar(np.arange(N),F,yerr=e, fmt='ok', ecolor='gray', alpha=0.5) plt.hlines(np.mean(F),0,N,linestyles='--') plt.hlines(F_true,0,N) print np.mean(F),np.mean(F)-F_true,np.std(F) ax=seaborn.distplot(F,bins=N/3) xx=np.linspace(F.min(),F.max()) g...
_____no_output_____
MIT
Untitled.ipynb
Mixpap/astrostatistics
Our initial approach is to maximize the likelihood. Given the data $D_i=(F_i,e_i)$ we can compute the probability of having observed them given the true value $F_{\mathtt{true}}$, assuming the errors are Gaussian: $$P(D_i|F_{\mathtt{true}})=\frac{1}{\sqrt{2\pi e_i^2}}e...
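Maximizing the product of these Gaussian likelihoods leads to the inverse-variance weighted mean; a minimal sketch of that estimate, using the `F` and `e` arrays generated above:

```python
import numpy as np

# Inverse-variance weights: w_i = 1 / e_i^2
w = 1.0 / e**2

# Maximum-likelihood estimate of the flux and its standard error
F_est = np.sum(w * F) / np.sum(w)
F_err = np.sum(w) ** -0.5

print("F_est = {0:.0f} +/- {1:.0f}".format(F_est, F_err))
```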
#xx=np.linspace(0,10,5000) xx=np.ones(1000) #seaborn.distplot(np.random.poisson(xx),kde=False) plt.hist(np.random.poisson(xx)) w = 1. / e ** 2 print(""" F_true = {0} F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements) """.format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N)) np.sum(((F-F.m...
F_true = 1000.0 F_est = 997 +/- 3 (based on 100 measurements)
MIT
Untitled.ipynb
Mixpap/astrostatistics
Load MNIST Data
# MNIST dataset downloaded from Kaggle : #https://www.kaggle.com/c/digit-recognizer/data # Functions to read and show images. import numpy as np import pandas as pd import matplotlib.pyplot as plt d0 = pd.read_csv('./mnist_train.csv') print(d0.head(5)) # print first five rows of d0. # save the labels into a ...
_____no_output_____
Apache-2.0
2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb
be-shekhar/learning-ml
2D Visualization using PCA
# Pick first 15K data-points to work on for time-efficiency. #Exercise: Perform the same analysis on all of 42K data-points. labels = l.head(15000) data = d.head(15000) print("the shape of sample data = ", data.shape) # Data-preprocessing: Standardizing the data from sklearn.preprocessing import StandardScaler sta...
_____no_output_____
Apache-2.0
2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb
be-shekhar/learning-ml
PCA using Scikit-Learn
# initializing the pca from sklearn import decomposition pca = decomposition.PCA() # configuring the parameters # the number of components = 2 pca.n_components = 2 pca_data = pca.fit_transform(sample_data) # pca_reduced will contain the 2-d projections of the sample data print("shape of pca_reduced.shape = ", pca_data.shap...
_____no_output_____
Apache-2.0
2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb
be-shekhar/learning-ml
PCA for dimensionality reduction (not for visualization)
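A minimal sketch of the variance-explained computation that the next cell carries out, assuming the standardized `sample_data` from above:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import decomposition

# Fit PCA with all 784 components to inspect the explained variance.
pca = decomposition.PCA(n_components=784)
pca.fit(sample_data)

percentage_var_explained = pca.explained_variance_ / np.sum(pca.explained_variance_)
cum_var_explained = np.cumsum(percentage_var_explained)

# Cumulative explained variance as a function of the number of components.
plt.plot(cum_var_explained)
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
plt.grid()
plt.show()
```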
# PCA for dimensionality reduction (non-visualization) pca.n_components = 784 pca_data = pca.fit_transform(sample_data) percentage_var_explained = pca.explained_variance_ / np.sum(pca.explained_variance_); cum_var_explained = np.cumsum(percentage_var_explained) # Plot the PCA spectrum plt.figure(1, figsize=(6, 4)) ...
_____no_output_____
Apache-2.0
2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb
be-shekhar/learning-ml
t-SNE using Scikit-Learn
# TSNE from sklearn.manifold import TSNE # Picking the top 1000 points as TSNE takes a lot of time for 15K points data_1000 = standardized_data[0:1000,:] labels_1000 = labels[0:1000] model = TSNE(n_components=2, random_state=0) # configuring the parameters # the number of components = 2 # default perplexity = 30 # ...
_____no_output_____
Apache-2.0
2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb
be-shekhar/learning-ml
APIs and data
Catherine Devlin (@catherinedevlin)
Innovation Specialist, 18F
Oakwood High School, Feb 16 2017

Who am I? (hint: not Jean Valjean)
![International Falls, MN winter street scene](http://kcc-tv.org/wp-content/uploads/2017/01/Winter-downtown.jpg)

Cool things I've done
- Chemical engineer in college
- Oops, beca...
!pip install requests
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
Then, we import. That's like getting it out of the cupboard.
import requests
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
Oakwood High School
with offline.use_cassette('offline.vcr'): response = requests.get('http://ohs.oakwoodschools.org/pages/Oakwood_High_School') response.ok response.status_code print(response.text)
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
We have backed our semi up to the front door. OK, back to checking out politicians.
url = 'https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20' with offline.use_cassette('offline.vcr'): response = requests.get(url) response.ok response.status_code response.json() response.json()['results'] results = response.json()['results'] results[0]['cycle']...
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
[Pandas](http://pandas.pydata.org/)
!pip install pandas import pandas as pd data = pd.DataFrame(response.json()['results']) data data = data.set_index('cycle') data data['disbursements'] data[data['disbursements'] < 1000000 ]
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
[Bokeh](http://bokeh.pydata.org/en/latest/)
!pip install bokeh from bokeh.charts import Bar, show, output_notebook by_year = Bar(data, values='disbursements') output_notebook() show(by_year)
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
Playtime
[so many options](http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html)
- Which column to map?
- Colors or styles?
- Scatter
- Better y-axis label?
- Some other candidate committee? Portman C00458463, Brown C00264697
- Filter it

Where's it coming from?
https://api.open.fec.gov/v1/committee/C00373001/sche...
url = 'https://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016' with offline.use_cassette('offline.vcr'): response = requests.get(url) results = response.json()['results'] data = pd.DataFrame(results) data data = data.set_index('state') by_state ...
_____no_output_____
CC0-1.0
presentation_vcr.ipynb
catherinedevlin/code-org-apis-data
[Diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)
----------------
import pandas as pd from sklearn import datasets diabetes = datasets.load_diabetes() print(diabetes['DESCR']) # Convert the data to a pandas dataframe df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names) df['diabetes'] = diabetes.target df.head()
_____no_output_____
MIT
ejercicios/reg-toy-diabetes.ipynb
joseluisGA/videojuegos
Random Forest
Application of random forest to a poker hand
***Dataset:*** https://archive.ics.uci.edu/ml/datasets/Poker+Hand
***Presentation:*** https://docs.google.com/presentation/d/1zFS4cTf9xwvcVPiCOA-sV_RFx_UeoNX2dTthHkY9Am4/edit
import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.neural_network import MLPClassifier from sklearn.utils import column_or_1d from sklearn.linear_model import LogisticRegression from ...
_____no_output_____
MIT
RandomForest.ipynb
AM-2018-2-dusteam/ML-poker
Description
This task is to do an exploratory data analysis on the balance-scale dataset.
Data Set Information
This data set was generated to model psychological experimental results. Each example is classified as having the balance scale tip to the right, tip to the left, or be balanced. The attributes are the left w...
#importing libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline #reading the data data=pd.read_csv('balance-scale.data') #shape of the data data.shape #first five rows of the data data.head() #Generating the x values x=data.drop(['Class'],axis=1) x.head() #Generating the ...
_____no_output_____
MIT
AnushkaProject/Balance Scale Decision Tree.ipynb
Sakshat682/BalanceDataProject
Using the Weight and Distance parameters
Splitting the data set in a 70:30 ratio using sklearn's built-in 'train_test_split' function to get a better idea of the accuracy of the model.
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(x,y,stratify=y, test_size=0.3, random_state=2) X_train.describe() #Importing decision tree classifier and creating it's object from sklearn.tree import DecisionTreeClassifier clf= DecisionTreeClassifier() clf.fit(X_...
_____no_output_____
MIT
AnushkaProject/Balance Scale Decision Tree.ipynb
Sakshat682/BalanceDataProject
We observe that the accuracy score is pretty low, so we need to find optimal parameters to get the best accuracy. We do that by using GridSearchCV.
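A minimal sketch of that search, assuming `X_train` and `y_train` from the split above; the exact grid in the next cell may differ:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# 10-fold cross-validated search over split criterion and tree depth.
tree_para = {"criterion": ["gini", "entropy"],
             "max_depth": [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]}
grid = GridSearchCV(DecisionTreeClassifier(random_state=3), tree_para, cv=10)
grid.fit(X_train, y_train)

print(grid.best_params_)
print(grid.best_score_)
```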
#Using GridSearchCV to find the maximum optimal depth from sklearn.model_selection import GridSearchCV tree_para={"criterion":["gini","entropy"], "max_depth":[3,4,5,6,7,8,9,10,11,12]} dt_model_grid= GridSearchCV(DecisionTreeClassifier(random_state=3),tree_para, cv=10) dt_model_grid.fit(X_train,y_train) # To print the o...
_____no_output_____
MIT
AnushkaProject/Balance Scale Decision Tree.ipynb
Sakshat682/BalanceDataProject
Using the created Torque
dt_model2 = DecisionTreeClassifier(random_state=31) X_train, X_test, y_train, y_test= train_test_split(x1,y, stratify=y, test_size=0.3, random_state=8) X_train.head( ) X_train.shape dt_model2.fit(X_train, y_train) y_pred2= dt_model2.predict(X_test) print(classification_report(y_test, y_pred2, target_names=["Balanced","...
_____no_output_____
MIT
AnushkaProject/Balance Scale Decision Tree.ipynb
Sakshat682/BalanceDataProject
Increasing the optimization
After observing the trees, we conclude that the differences are not being taken into account. Hence, we add a difference attribute to try to increase the accuracy.
x1['Diff']= x1['LT']- x1['RT'] x1.head() X_train, X_test, y_train, y_test =train_test_split(x1,y, stratify=y, test_size=0.3,random_state=40) dt_model3= DecisionTreeClassifier(random_state=40) dt_model3.fit(X_train, y_train) #Create Classification Report y_pred3= dt_model3.predict(X_test) print(classification_report(y_t...
_____no_output_____
MIT
AnushkaProject/Balance Scale Decision Tree.ipynb
Sakshat682/BalanceDataProject
Final Conclusion The model returns a perfect accuracy score as desired.
!pip install seaborn
Collecting seaborn Downloading seaborn-0.11.2-py3-none-any.whl (292 kB) Requirement already satisfied: numpy>=1.15 in c:\python39\lib\site-packages (from seaborn) (1.21.2) Requirement already satisfied: scipy>=1.0 in c:\python39\lib\site-packages (from seaborn) (1.7.1) Requirement already satisfied: matplotlib>=2.2 i...
MIT
AnushkaProject/Balance Scale Decision Tree.ipynb
Sakshat682/BalanceDataProject
3.10 Concise implementation of the multilayer perceptron
import torch from torch import nn from torch.nn import init import numpy as np import sys sys.path.append("..") import d2lzh_pytorch as d2l print(torch.__version__)
0.4.1
Apache-2.0
code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb
fizzyelf-es/Dive-into-DL-PyTorch
3.10.1 Defining the model
num_inputs, num_outputs, num_hiddens = 784, 10, 256 net = nn.Sequential( d2l.FlattenLayer(), nn.Linear(num_inputs, num_hiddens), nn.ReLU(), nn.Linear(num_hiddens, num_outputs), ) for params in net.parameters(): init.normal_(params, mean=0, std=0.01)
_____no_output_____
Apache-2.0
code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb
fizzyelf-es/Dive-into-DL-PyTorch
3.10.2 Reading the data and training the model
batch_size = 256 train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size) loss = torch.nn.CrossEntropyLoss() optimizer = torch.optim.SGD(net.parameters(), lr=0.5) num_epochs = 5 d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
epoch 1, loss 0.0031, train acc 0.703, test acc 0.757 epoch 2, loss 0.0019, train acc 0.824, test acc 0.822 epoch 3, loss 0.0016, train acc 0.845, test acc 0.825 epoch 4, loss 0.0015, train acc 0.855, test acc 0.811 epoch 5, loss 0.0014, train acc 0.865, test acc 0.846
Apache-2.0
code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb
fizzyelf-es/Dive-into-DL-PyTorch
Summarizing Emails using Machine Learning: Data Wrangling
Table of Contents
1. Imports & Initialization
2. Data Input
   A. Enron Email Dataset
   B. BC3 Corpus
3. Preprocessing
   A. Data Cleaning
   B. Sentence Cleaning
   C. Tokenizing
4. Store Data
   A. Locally as pickle
   B. Into database
5. Data Explorat...
import sys from os import listdir from os.path import isfile, join import configparser from sqlalchemy import create_engine import pandas as pd import numpy as np import matplotlib.pyplot as plt import email import mailparser import xml.etree.ElementTree as ET from talon.signature.bruteforce import extract_signature ...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
2. Data Input
A. Enron Email Dataset
The raw Enron email dataset contains a maildir directory with folders separated by employee, which contain the emails. The following processes the raw text of each email into a dask dataframe with the following columns:
Employee: The username of the email owner.
Body: Clean...
def process_email(index): ''' This function splits a raw email into constituent parts that can be used as features. ''' email_path = index[0] employee = index[1] folder = index[2] mail = mailparser.parse_from_file(email_path) full_body = email.message_from_string(mail.body) ...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
B. BC3 Corpus This dataset is split into two xml files. One contains the original emails split line by line, and the other contains the summarizations created by the annotators. Each email may contain several summarizations from different annotators and summarizations may also be over several emails. This will create ...
def parse_bc3_emails(root): ''' This adds every BC3 email to a newly created dataframe. ''' BC3_email_list = [] #The emails are seperated by threads. for thread in root: email_num = 0 #Iterate through the thread elements <name, listno, Doc> for thread_element in thread: ...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
The second dataframe contains the summarizations of each email:
Annotator: Person who created the summarization.
Email_num: Email in thread sequence.
Listno: Thread identifier.
Summary: Human summarization of the email.
def parse_bc3_summaries(root): ''' This parses every BC3 Human summary that is contained in the dataset. ''' BC3_summary_list = [] for thread in root: #Iterate through the thread elements <listno, name, annotation> for thread_element in thread: if thread_element.tag == "...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
3. Preprocessing A. Data Cleaning
#Convert date to pandas datetime. enron_email_df['date'] = pd.to_datetime(enron_email_df['date'], utc=True) bc3_df['date'] = pd.to_datetime(bc3_df.date, utc=True) #Look at the timeframe start_date = str(enron_email_df.date.min()) end_date = str(enron_email_df.date.max()) print("Start Date: " + start_date) print("End ...
Start Date: 1980-01-01 00:00:00+00:00 End Date: 2024-05-26 10:49:57+00:00
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
Since the Enron data was collected in May 2002 according to Wikipedia, it's a bit strange to see emails past that date. Reading some of the emails suggests it's mostly spam.
enron_email_df[(enron_email_df.date > '2003-01-01')].head() #Quick look at emails before 1999, enron_email_df[(enron_email_df.date < '1999-01-01')].date.value_counts().head() enron_email_df[(enron_email_df.date == '1980-01-01')].head()
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
There seems to be a glut of emails dated exactly 1980-01-01. The emails seem legitimate, but these should be dropped since without the true date we won't be able to figure out where the email fits in the context of a batch of summaries. Keep emails between Jan 1st 1999 and June 1st 2002.
enron_email_df = enron_email_df[(enron_email_df.date > '1998-01-01') & (enron_email_df.date < '2002-06-01')]
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
B. Sentence Cleaning
The raw Enron email corpus tends to have a large number of unneeded characters that can interfere with tokenization. It's best to do a bit more cleaning.
def clean_email_df(df): ''' These remove symbols and character patterns that don't aid in producing a good summary. ''' #Removing strings related to attatchments and certain non numerical characters. patterns = ["\[IMAGE\]","-", "_", "\*", "+","\".\""] for pattern in patterns: df['body'...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
C. Tokenizing
It's important to split sentences into their constituent parts for the ML algorithm that will be used for text summarization. This will aid in further processing like removing extra whitespace. We can also remove stopwords, which are very commonly used words that don't provide additional sentence mean...
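The tokenizing cell below is truncated; a minimal sketch of what such helpers can look like, assuming NLTK's `punkt` and `stopwords` data are available (the notebook's own `tokenize_email` may differ):

```python
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize

def remove_stopwords(sen):
    '''Remove English stopwords from a list of words.'''
    stop_words = stopwords.words('english')
    return " ".join([w for w in sen if w not in stop_words])

def tokenize_email(text):
    '''Split an email body into lowercased sentences with stopwords removed.'''
    sentences = sent_tokenize(text)
    return [remove_stopwords(s.lower().split()) for s in sentences]

# Example
tokenize_email("This is the first sentence. And here is the second one.")
```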
def remove_stopwords(sen): ''' This function removes stopwords ''' stop_words = stopwords.words('english') sen_new = " ".join([i for i in sen if i not in stop_words]) return sen_new def tokenize_email(text): ''' This function splits up the body into sentence tokens and removes stop word...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
Starting with the Enron dataset.
#This tokenizing will be the extracted sentences that may be chosen to form the email summaries. enron_email_df['extractive_sentences'] = enron_email_df['body'].apply(sent_tokenize) #Splitting the text in emails into cleaned sentences enron_email_df['tokenized_body'] = enron_email_df['body'].apply(tokenize_email) #Tok...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
Now working on the BC3 Dataset.
bc3_df['extractive_sentences'] = bc3_df['body'].apply(sent_tokenize) bc3_df['tokenized_body'] = bc3_df['body'].apply(tokenize_email) #bc3_email_df = bc3_email_df.loc[bc3_email_df.astype(str).drop_duplicates(subset='tokenized_body').index]
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
4. Store Data
A. Locally as pickle
After all the preprocessing is finished, it's best to store the data so it can be quickly and easily retrieved by other software. Pickles are best used if you are working locally and want a simple way to store and load data. You can also use a cloud database that can be accessed by o...
#Local locations for pickle files. ENRON_PICKLE_LOC = "../data/dataframes/wrangled_enron_full_df.pkl" BC3_PICKLE_LOC = "../data/dataframes/wrangled_BC3_df.pkl" #Store dataframes to disk enron_email_df.to_pickle(ENRON_PICKLE_LOC) bc3_df.head() bc3_df.to_pickle(BC3_PICKLE_LOC)
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
B. Into database I used a Postgres database with the DB configurations stored in a config_notebook.ini file. This allows me to easily switch between local and AWS configurations.
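A minimal sketch of that setup; the `LOCAL_POSTGRES`/`AWS_POSTGRES` section names and the address/username/password keys appear in the cell below, while the port and database-name keys and the table name are assumptions for illustration:

```python
import configparser
from sqlalchemy import create_engine

config = configparser.ConfigParser()
config.read('config_notebook.ini')
database_config = 'AWS_POSTGRES'   # or 'LOCAL_POSTGRES'

# Build the connection string from the chosen config section.
postgres_str = 'postgresql://{user}:{pwd}@{host}:{port}/{db}'.format(
    user=config[database_config]['POSTGRES_USERNAME'],
    pwd=config[database_config]['POSTGRES_PASSWORD'],
    host=config[database_config]['POSTGRES_ADDRESS'],
    port=config[database_config]['POSTGRES_PORT'],    # assumed key name
    db=config[database_config]['POSTGRES_DBNAME'])    # assumed key name
engine = create_engine(postgres_str)

# Write the wrangled dataframe to a table (table name is illustrative).
enron_email_df.to_sql('enron_emails', engine, if_exists='replace', index=False)
```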
#Configure postgres database config = configparser.ConfigParser() config.read('config_notebook.ini') #database_config = 'LOCAL_POSTGRES' database_config = 'AWS_POSTGRES' POSTGRES_ADDRESS = config[database_config]['POSTGRES_ADDRESS'] POSTGRES_USERNAME = config[database_config]['POSTGRES_USERNAME'] POSTGRES_PASSWORD = ...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
5. Data Exploration Exploring the dataset can go a long way to building more accurate machine learning models and spotting any possible issues with the dataset. Since the Enron dataset is quite large, we can speed up some of our computations by using Dask. While not strictly necessary, iterating on this dataset should...
client = Client(processes = True) client.cluster #Make into dask dataframe. enron_email_df = dd.from_pandas(enron_email_df, npartitions=cpus) enron_email_df.columns #Used to create a describe summary of the dataset. Ignoring tokenized columns. enron_email_df[['body', 'chain', 'date', 'email_folder', 'employee', 'from...
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
B. BC3 Corpus
bc3_df.head() bc3_df['to'].value_counts().head()
_____no_output_____
MIT
notebooks/Process_Emails.ipynb
dailykirt/ML_Enron_email_summary
Compass heading
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('Heading [degree]', fontsize=16) ax1.plot(standardized_time, compass_heading, label='compass heading') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt.show...
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
Temperature
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('Temperature [degree]', fontsize=16) ax1.plot(standardized_time, temp, label='temp', color='k') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt.show() pri...
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
PH
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('PH', fontsize=16) ax1.plot(standardized_time, PH, label='PH', color='r') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt.show() print("Standard Deviation...
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
Conductivity
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('Conductivity [ms]', fontsize=16) ax1.plot(standardized_time, cond, label='conductivity', color='b') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt.show()...
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
Chlorophyll
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('chlorophyll [RFU]', fontsize=16) ax1.plot(standardized_time, chlorophyll, label='chlorophyll', color='g') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt....
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
ODO
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('ODO [%sat]', fontsize=16) ax1.plot(standardized_time, ODO, label='ODO', color='m') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt.show() print("Standard...
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
Sonar depth
# Figure initialization fig, ax1 = plt.subplots() ax1.set_xlabel('Time [sec]', fontsize=16) ax1.set_ylabel('sonar [m]', fontsize=16) ax1.plot(standardized_time, sonar, label='sonar', color='c') ax1.legend() for wp in standardized_time2: plt.axvline(x=wp, color='gray', linestyle='--') plt.show()
_____no_output_____
MIT
Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb
dartmouthrobotics/epscor_asv_data_analysis
Classification
Binary classification
Stochastic gradient descent (SGD)
from sklearn.linear_model import SGDClassifier
_____no_output_____
Apache-2.0
cheat-sheets/ml/classification/algorithms.ipynb
AElOuassouli/reading-notes
QSVM multiclass classification
A [multiclass extension](https://qiskit.org/documentation/apidoc/qiskit.aqua.components.multiclass_extensions.html) works in conjunction with an underlying binary (two class) classifier to provide classification where the number of classes is greater than two. Currently the following multi...
import numpy as np from qiskit import BasicAer from qiskit.circuit.library import ZZFeatureMap from qiskit.utils import QuantumInstance, algorithm_globals from qiskit_machine_learning.algorithms import QSVM from qiskit_machine_learning.multiclass_extensions import AllPairs from qiskit_machine_learning.utils.dataset_he...
_____no_output_____
Apache-2.0
tutorials/02_qsvm_multiclass.ipynb
gabrieleagl/qiskit-machine-learning
We want a dataset with more than two classes, so here we choose the `Wine` dataset that has 3 classes.
from qiskit_machine_learning.datasets import wine n = 2 # dimension of each data point sample_Total, training_input, test_input, class_labels = wine(training_size=24, test_size=6, n=n, plot_data=True) temp = [test_input[k] for k in test_input] total_array ...
_____no_output_____
Apache-2.0
tutorials/02_qsvm_multiclass.ipynb
gabrieleagl/qiskit-machine-learning
To use a multiclass extension, an instance of it simply needs to be supplied on QSVM creation, using the `multiclass_extension` parameter. Although `AllPairs()` is used in the example below, the following multiclass extensions would also work: OneAgainstRest() ErrorCorrectingCode(code_size=5)
algorithm_globals.random_seed = 10598 backend = BasicAer.get_backend('qasm_simulator') feature_map = ZZFeatureMap(feature_dimension=get_feature_dimension(training_input), reps=2, entanglement='linear') svm = QSVM(feature_map, training_input, test_input, total_array, multiclass_ext...
_____no_output_____
Apache-2.0
tutorials/02_qsvm_multiclass.ipynb
gabrieleagl/qiskit-machine-learning
Building Simple Neural Networks
In this section you will:
* Import the MNIST dataset from Keras.
* Format the data so it can be used by a Sequential model with Dense layers.
* Split the dataset into training and test sections.
* Build a simple neural network using a Keras Sequential model and Dense layers.
* Train that m...
# For drawing the MNIST digits as well as plots to help us evaluate performance we # will make extensive use of matplotlib from matplotlib import pyplot as plt # All of the Keras datasets are in keras.datasets from keras.datasets import mnist # Keras has already split the data into training and test data (training_im...
_____no_output_____
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
Problems With This Data
There are (at least) two problems with this data as it is currently formatted; what do you think they are?
1. The input data is formatted as a 2D array, but our deep neural network needs the data as a 1D vector.
 * This is because of how deep neural networks are constructed; it is simply not poss...
from keras.utils import to_categorical # Preparing the dataset # Setup train and test splits (training_images, training_labels), (test_images, test_labels) = mnist.load_data() # 28 x 28 = 784, because that's the dimensions of the MNIST data. image_size = 784 # Reshaping the training_images and test_images to lists ...
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
Building a Deep Neural Network
Now that we've prepared our data, it's time to build a simple neural network. To start we'll make a deep network with 3 layers: the input layer, a single hidden layer, and the output layer. In a deep neural network all the layers are 1-dimensional. The input layer has to be the shape of ou...
from keras.models import Sequential from keras.layers import Dense # Sequential models are a series of layers applied linearly. model = Sequential() # The first layer must specify it's input_shape. # This is how the first two layers are added, the input layer and the hidden layer. model.add(Dense(units=32, activation...
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 32) 25120 __________________________________...
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
Compiling and Training a Model
Our model must be compiled and trained before it can make useful predictions. Models are trained with the training data and training labels. During this process Keras will use an optimizer, loss function, and metrics of our choosing to repeatedly make predictions and receive corrections. The...
# sgd stands for stochastic gradient descent. # categorical_crossentropy is a common loss function used for categorical classification. # accuracy is the percent of predictions that were correct. model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy']) # The network will make predictions f...
Train on 54000 samples, validate on 6000 samples Epoch 1/5 54000/54000 [==============================] - 1s 17us/step - loss: 1.3324 - accuracy: 0.6583 - val_loss: 0.8772 - val_accuracy: 0.8407 Epoch 2/5 54000/54000 [==============================] - 1s 13us/step - loss: 0.7999 - accuracy: 0.8356 - val_loss: 0.6273 - ...
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
Evaluating Our Model
Now that we've trained our model, we want to evaluate its performance. We're using the "test data" here, although in a serious experiment we would likely not have done nearly enough work to warrant the application of the test data. Instead, we would rely on the validation metrics as a proxy for our...
loss, accuracy = model.evaluate(test_data, test_labels, verbose=True) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['training', 'validation'], loc='best') plt.show() plt.plot(history.history['loss'])...
10000/10000 [==============================] - 0s 15us/step
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
How Did Our Network Do?
* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?
* Our model was more accurate on the validation data than it was on the training data.
 * Is this okay? Why or why not?
 * What if our model had been more accura...
from numpy import argmax # Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions. predictions = model.predict(test_data) # For pagination & style in second cell page = 0 fontdict = {'color': 'black'} # Repeatedly running this cell will page through the predictions for ...
_____no_output_____
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
Will A Different Network Perform Better?
Given what you know so far, use Keras to build and train another sequential model that you think will perform __better__ than the network we just built and trained. Then evaluate that model and compare its performance to our model. Remember to look at accuracy and loss for train...
# Your code here...
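One possible sketch of a stronger model: a wider network with two ReLU hidden layers and the Adam optimizer. `training_data` and `training_labels` are assumed to be the prepared training arrays (matching the `test_data`/`test_labels` used above); your own attempt may look quite different:

```python
from keras.models import Sequential
from keras.layers import Dense

# A wider, deeper network than the 32-unit model above.
model2 = Sequential()
model2.add(Dense(units=128, activation='relu', input_shape=(image_size,)))
model2.add(Dense(units=64, activation='relu'))
model2.add(Dense(units=10, activation='softmax'))

model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history2 = model2.fit(training_data, training_labels,
                      batch_size=128, epochs=5,
                      verbose=True, validation_split=0.1)

# Compare against the earlier model's test loss and accuracy.
loss2, accuracy2 = model2.evaluate(test_data, test_labels, verbose=True)
print('Test accuracy:', accuracy2)
```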
_____no_output_____
Unlicense
01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb
rekil156/intro-to-deep-learning
> The email portion of this campaign was actually run as an A/B test. Half the emails sent out were generic upsells to your product while the other half contained personalized messaging around the users’ usage of the site.

This is the setup of the A/B test experiment.
import pandas as pd import matplotlib.pyplot as plt import numpy as np # export '''Calculate conversion rates and related metrics.''' import pandas as pd import matplotlib.pyplot as plt import numpy as np def conversion_rate(dataframe, column_names, converted = 'converted', id_name = 'user_id'): '''Calculate conv...
_____no_output_____
MIT
01-demo1.ipynb
JiaxiangBU/conversion_metrics
The difference is not large.
# Group marketing by user_id and variant subscribers = email.groupby(['user_id', 'variant'])['converted'].max() subscribers_df = pd.DataFrame(subscribers.unstack(level=1)) # Drop missing values from the control column control = subscribers_df['control'].dropna() # Drop missing values fr...
Control conversion rate: 0.2814814814814815 Personalization conversion rate: 0.3908450704225352
MIT
01-demo1.ipynb
JiaxiangBU/conversion_metrics
I find this way of writing it in Python a bit convoluted. $$\text{lift} = \frac{\text{Treatment conversion rate} - \text{Control conversion rate}}{\text{Control conversion rate}}$$ Note that the lift here is a comparison of conversion rates, so it can exceed 100%.
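A minimal sketch of that calculation, using the conversion rates printed above (the rounding mirrors the `sig = 2` default of the cell below):

```python
def lift(a, b, sig=2):
    '''Lift of a test group b over a control group a.'''
    return round((b - a) / a, sig)

# Conversion rates from the previous cell.
control_rate = 0.2814814814814815
personalization_rate = 0.3908450704225352
print(lift(control_rate, personalization_rate))  # ~0.39, i.e. a 39% lift
```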
# export def lift(a,b, sig = 2): '''Calculate lift statistic for an AB test. Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas Parameters --------- a: float. control group. b: float. test group. sig: integer. default 2. Returns -...
_____no_output_____
MIT
01-demo1.ipynb
JiaxiangBU/conversion_metrics
Check whether the difference is statistically significant.
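A minimal sketch of that check with an independent two-sample t-test; `control` and `personalization` are assumed to be the two groups' converted series built from `subscribers_df` above:

```python
from scipy import stats

# Independent two-sample t-test on the two groups' conversion outcomes.
t_stat, p_value = stats.ttest_ind(control, personalization)
print('The t value of the two variables is {0:.3f} with p value {1:.3f}'
      .format(t_stat, p_value))
```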
# export from scipy import stats def lift_sig(a,b): '''Calculate lift statistical significance for an AB test. Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas Parameters --------- a: float. control group. b: float. test group. sig: integer. ...
The t value of the two variables is -0.577 with p value 0.580
MIT
01-demo1.ipynb
JiaxiangBU/conversion_metrics
> In the next lesson, you will explore whether that holds up across all demographics.

This really is a mature way of thinking about A/B testing: an overall win does not mean that every group does well.
# export def ab_test(df, segment, id_name = 'user_id', test_column = 'variant', converted = 'converted'): '''Calculate lift statistic by segmentation. Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas Parameters --------- df: pandas.DataFrame. segment: str. ...
_____no_output_____
MIT
01-demo1.ipynb
JiaxiangBU/conversion_metrics
Ran the next few blocks for my Colab configuration; they can be ignored.
from google.colab import drive drive.mount('/content/gdrive') !wget https://d17h27t6h515a5.cloudfront.net/topher/2016/December/584f6edd_data/data.zip import shutil shutil.move("/content/data.zip", "/content/gdrive/My Drive/udacity-behavioural-cloning/") os.chdir('/content/gdrive/My Drive/udacity-behavioural-cloning/...
_____no_output_____
MIT
behavioral-cloning/model.ipynb
KOKSANG/Self-Driving-Car
Training code starts here
df = pd.read_csv('driving_log.csv') # Visualizing original distribution plt.figure(figsize=(15, 3)) hist, bins = np.histogram(df.steering.values, bins=50) plt.hist(df.steering.values, bins=bins) plt.title('Steering Distribution Plot') plt.xlabel('Steering') plt.ylabel('Count') plt.show() # create grayscale image def g...
_____no_output_____
MIT
behavioral-cloning/model.ipynb
KOKSANG/Self-Driving-Car
entities-search-engine loading
SPARQL query to `{"type": [values]}`
import sys sys.path.append("..") from heritageconnector.config import config from heritageconnector.utils.sparql import get_sparql_results from heritageconnector.utils.wikidata import url_to_qid import json import time from tqdm import tqdm endpoint = config.WIKIDATA_SPARQL_ENDPOINT
_____no_output_____
MIT
experiments/entities-search-engine/1. load data from sparql.ipynb
TheScienceMuseum/heritage-connector
humans sample
limit = 10000 query = f""" SELECT ?item WHERE {{ ?item wdt:P31 wd:Q5. }} LIMIT {limit} """ res = get_sparql_results(endpoint, query) data = { "humans": [url_to_qid(x['item']['value']) for x in res['results']['bindings']] } with open("./entities-search-engine/data/humans_sample.json", 'w') as f: json.dump(...
_____no_output_____
MIT
experiments/entities-search-engine/1. load data from sparql.ipynb
TheScienceMuseum/heritage-connector
humans sample: paginated
Got a 500 timeout error nearly all of the way through. It looked like it was going to take around 1h20m. *Better to do with a dump?*
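A sketch of such a pagination loop, reusing `endpoint`, `get_sparql_results`, and `url_to_qid` from the imports above; LIMIT/OFFSET paging and the stop condition are assumptions, and the notebook's own loop may differ:

```python
pagesize = 40000
all_qids = []
offset = 0

while True:
    query = f"""
    SELECT ?item WHERE {{
      ?item wdt:P31 wd:Q5.
    }}
    LIMIT {pagesize} OFFSET {offset}
    """
    res = get_sparql_results(endpoint, query)
    bindings = res['results']['bindings']
    all_qids += [url_to_qid(x['item']['value']) for x in bindings]
    if len(bindings) < pagesize:
        break  # last page reached
    offset += pagesize
```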
# there are 8,011,382 humans in Wikidata so this should take 161 iterations total_humans = 8011382 pagesize = 40000 reslen = pagesize paged_json = [] i = 0 start = time.time() pbar = tqdm(total=total_humans) while reslen == pagesize: query = f""" SELECT ?item WHERE {{ ?item wdt:P31 wd:Q5. }} LIMIT ...
0it [00:00, ?it/s] 3it [00:00, 24.90it/s] 5it [00:00, 22.01it/s] 8it [00:00, 23.76it/s] 11it [00:00, 24.37it/s] 15it [00:00, 24.48it/s] 19it [00:00, 26.02it/s] 22it [00:00, 26.84it/s] 25it [00:00, 27.48it/s] 29it [00:01, 28.67it/s] 32it [00:01, 27.39it/s] 35it [00:01, 27.23it/s] 38i...
MIT
experiments/entities-search-engine/1. load data from sparql.ipynb
TheScienceMuseum/heritage-connector
By now basically everyone ([here](http://datacolada.org/2014/06/04/23-ceiling-effects-and-replications/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+DataColada+%28Data+Colada+Feed%29), [here](http://yorl.tumblr.com/post/87428392426/ceiling-effects), [here](http://www.talyarkoni.org/blog/2014/06/01/there-i...
%pylab inline import pystan from matustools.matusplotlib import * from scipy import stats il=['dog','trolley','wallet','plane','resume','kitten','mean score','median score'] D=np.loadtxt('schnallstudy1.csv',delimiter=',') D[:,1]=1-D[:,1] Dtemp=np.zeros((D.shape[0],D.shape[1]+1)) Dtemp[:,:-1]=D Dtemp[:,-1]=np.median(D[:...
/usr/local/lib/python2.7/dist-packages/matplotlib-1.3.1-py2.7-linux-i686.egg/matplotlib/font_manager.py:1236: UserWarning: findfont: Font family ['Arial'] not found. Falling back to Bitstream Vera Sans (prop.get_family(), self.defaultFamily[fontext])) /usr/local/lib/python2.7/dist-packages/matplotlib-1.3.1-py2.7-linu...
MIT
_ipynb/SchnallSupplement.ipynb
simkovic/simkovic.github.io
Legend: OC - original study, control group; OT - original study, treatment group; RC - replication study, control group; RT - replication study, treatment group; In the original study the difference between the treatment and control is significantly greater than zero. In the replication, it is not. However the ratings ...
def plotComparison(A,B,stan=False): plt.figure(figsize=(8,16)) cl=['control','treatment'] x=np.arange(11)-0.5 if not stan:assert A.shape[1]==B.shape[1] for i in range(A.shape[1]-1): for cond in range(2): plt.subplot(A.shape[1]-1,2,2*i+cond+1) a=np.histogram(A[A[:,0]==...
_____no_output_____
MIT
_ipynb/SchnallSupplement.ipynb
simkovic/simkovic.github.io
!pip3 install xgboost > /dev/null import pandas as pd import numpy as np import io import gc import time from pprint import pprint # import PIL.Image as Image # import matplotlib.pylab as plt from datetime import date # import tensorflow as tf # import tensorflow_hub as hub # settings import warnings warnings.filterw...
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Removing NaN in target variable
# select rows where amount is not NaN df_train = df_train[df_train['Amount'].notna()] df_train[df_train['Amount'].isna()].shape # delete rows where Amount < 0 df_train = df_train[df_train['Amount'] >= 0] df_train[['Cost_of_vehicle', 'Min_coverage', 'Max_coverage', 'Amount']].describe() selected_columns = ['Cost_of_vehi...
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Checking if the dataset is balanced/imbalanced - Condition
# python check if dataset is imbalanced : https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets target_count = df_train['Condition'].value_counts() print('Class 0 (No):', target_count[0]) print('Class 1 (Yes):', target_count[1]) print('Proportion:', round(target_count[0] / target_count[1], 2), ':...
Class 0 (No): 99 Class 1 (Yes): 1288 Proportion: 0.08 : 1
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Splitting Data into train-cv
classification_labels = df_train['Condition'].values # for regresion delete rows where Condition = 0 df_train_regression = df_train[df_train['Condition'] == 1] regression_labels = df_train_regression['Amount'].values ###### df_train_regression.drop(['Condition','Amount'], axis=1, inplace=True) df_train.drop(['Condit...
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Over Sampling using SMOTE
# https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/ from imblearn.over_sampling import SMOTE smote_overSampling = SMOTE() X_train,y_train = smote_overSampling.fit_resample(X_train,y_train) unique, counts = np.unique(y_train, return_counts=True) dict(zip(unique, counts))
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Scaling data
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train) X_cv_scaled = scaler.transform(X_cv) X_test_scaled = scaler.transform(df_test) X_train_scaled
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Modelling & Cross-Validation Classification
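A minimal sketch of that comparison step; the model list here is an illustrative subset, and `X_train_scaled`/`y_train` come from the scaling and SMOTE cells above:

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

models = {
    'LogisticRegression': LogisticRegression(max_iter=1000),
    'RandomForest': RandomForestClassifier(),
    'AdaBoost': AdaBoostClassifier(),
}

# Cross-validated F1 score for each candidate classifier.
for name, model in models.items():
    scores = cross_val_score(model, X_train_scaled, y_train, cv=5, scoring='f1')
    print(name, scores.mean())
```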
%%time # Train multiple models : https://www.kaggle.com/tflare/testing-multiple-models-with-scikit-learn-0-79425 from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.ense...
{ "AdaBoost": { "f1": 0.9991397849462367 } }
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Regression
X_train_regression, X_cv_regression, y_train_regression, y_cv_regression = train_test_split(df_train_regression, regression_labels, test_size=0.1) scaler = StandardScaler() X_train_scaled_regression = scaler.fit_transform(X_train_regression) X_cv_scaled_regression = scaler.transform(X_cv_regression) X_test_scaled_reg...
Best: 0.062463 using {}
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Predicting on CV data
classification_alg = AdaBoost # regression_alg = ExtraTreesReg # hypertuned model regression_alg = gsc classification_alg.fit(X_train_scaled, y_train) regression_alg.fit(X_train_scaled_regression, y_train_regression) # predictions_class = classification_alg.predict(X_cv) # pprint(classification_alg.get_params()) # ...
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Predicting on test Data
trained_classifier = classification_alg trained_regressor = regression_alg predictions_trained_classifier_test = trained_classifier.predict(X_test_scaled) predictions_trained_regressor_test = trained_regressor.predict(X_test_scaled_regression) read = pd.read_csv(gDrivePath + 'test.csv') submission = pd.DataFrame({ ...
_____no_output_____
MIT
Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb
chiranjeet14/ML_Projects
Build a Traffic Sign Recognition Classifier with Deep Learning
Some improvements are made:
- [x] Convolution layers added at the same size as the previous layer, to get a 1x1 layer
- [x] 'ReLU' activation function used instead of 'tanh'
- [x] Adaptive learning rate, decayed over the training phase
- [x] ...
# load enhanced traffic signs import os import cv2 import matplotlib.pyplot as plot import numpy dir_enhancedsign = 'figures\enhanced_training_dataset2' files_enhancedsign = [os.path.join(dir_enhancedsign, f) for f in os.listdir(dir_enhancedsign)] # read & resize (32,32) images in enhanced dataset images_enhance...
[0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 3, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 4, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 5, 6, 7, 8, 9]
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
*Enhanced German traffic signs dataset ↓*
**We would have 50 classes in total with the new enhanced training dataset:**
n_classes_enhanced = len(numpy.unique(y_enhancedsign)) print('n_classes enhanced : {}'.format(n_classes_enhanced))
n_classes enhanced : 50
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Load and Visualize the standard German Traffic Signs Dataset
# Load pickled data import pickle import numpy # TODO: Fill this in based on where you saved the training and testing data training_file = 'traffic-signs-data/train.p' validation_file = 'traffic-signs-data/valid.p' testing_file = 'traffic-signs-data/test.p' with open(training_file, mode='rb') as f: train = ...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Implementation of LeNet
> http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf

Above is the article by Pierre Sermanet and Yann LeCun that we can follow to create LeNet convolutional networks with good accuracy, even for absolute beginners in deep learning. It's really exciting to see that many yea...
### Import tensorflow and keras import tensorflow as tf from tensorflow import keras print ("TensorFlow version: " + tf.__version__)
TensorFlow version: 2.1.0
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
2-stage ConvNet architecture by Pierre Sermanet and Yann LeCun
We will try to implement the 2-stage ConvNet architecture by Pierre Sermanet and Yann LeCun, which is not sequential. Keras provides the keras.Sequential() API for sequential architectures, but it cannot handle models with non-linear topology, shared layers, or m...
#LeNet model inputs = keras.Input(shape=(32,32,3), name='image_in') #0 stage :conversion from normalized RGB [0..1] to HSV layer_HSV = tf.image.rgb_to_hsv(inputs) #1st stage ___________________________________________________________ #Convolution with ReLU activation layer1_conv = keras.layers.C...
Model: "LeNet_Model_improved" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ============================================================================================...
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Input preprocessing
Color-Space
Pierre Sermanet and Yann LeCun used the YUV color space, with most of the processing on the Y-channel (Y stands for brightness, U and V stand for chrominance).
Normalization
Each channel of an image is on the uint8 scale (0-255); we will normalize each channel to 0-1. Generally, we normalize data to...
import cv2 def input_normalization(X_in): X = numpy.float32(X_in/255.0) return X # normalization of dataset # enhanced training dataset is added X_train_norm = input_normalization(X_train) X_valid_norm = input_normalization(X_valid) X_enhancedtrain_norm = input_normalization(images...
(50, 32, 32, 3) 1 0
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Training Pipeline
_Optimizer_: we use the Adam optimizer, better than SGD (Stochastic Gradient Descent)
_Loss function_: cross-entropy by category
_Metrics_: accuracy
*A learning rate of 0.001 works well with our network; it's better to try a small learning rate in the beginning.*
rate = 0.001 LeNet_Model.compile( optimizer=keras.optimizers.Nadam(learning_rate = rate, beta_1=0.9, beta_2=0.999, epsilon=1e-07), loss=keras.losses.CategoricalCrossentropy(from_logits=True), metrics=["accuracy"])
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Real-time data augmentation
from tensorflow.keras.preprocessing.image import ImageDataGenerator datagen_enhanced = ImageDataGenerator( rotation_range=30.0, zoom_range=0.5, width_shift_range=0.5, height_shift_range=0.5, featurewise_center=True, ...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Train the Model on standard training dataset
EPOCHS = 30 BATCH_SIZE = 32 STEPS_PER_EPOCH = int(len(X_train_norm)/BATCH_SIZE) history_standard_HLS = LeNet_Model.fit( datagen.flow(X_train_norm, y_train_onehot, batch_size=BATCH_SIZE,shuffle=True), validation_data=(X_valid_norm, y_valid_onehot), shuffle=T...
WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] Train for 1087 steps, validate on 4410 samples Epoch 1/30 1087/1087 [==============================] - 310s 285ms/step - loss: 1.5436 - accuracy: 0.5221 - val_loss: 1.1918 - val_accuracy: 0.6120 Epoch 2/30 1087/1087 [=====================...
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
on enhanced training dataset
EPOCHS = 30 BATCH_SIZE = 1 STEPS_PER_EPOCH = int(len(X_enhancedtrain_norm)/BATCH_SIZE) history_enhanced_HLS = LeNet_Model.fit( datagen_enhanced.flow(X_enhancedtrain_norm, y_enhanced_onehot, batch_size=BATCH_SIZE,shuffle=True), shuffle=True, #validat...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Evaluate the Model
We will use the test dataset to evaluate classification accuracy.
#Normalize test dataset X_test_norm = input_normalization(X_test) #One-hot matrix y_test_onehot = keras.utils.to_categorical(y_test, n_classes) #Load saved model reconstructed_LeNet_Model = keras.models.load_model("LeNet_enhanced_trainingdataset_HLS.h5") #Evaluate and display the prediction result ...
Plot of training accuracy over 30 epochs:
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Prediction on the test dataset with the trained model
We will use the test dataset to test the trained model's predictions on instances that it has never seen during training.
print("Test Set : {} samples".format(len(X_test))) print('n_classes : {}'.format(n_classes)) X_test.shape #Normalize test dataset X_test_norm = input_normalization(X_test) #One-hot matrix y_test_onehot = keras.utils.to_categorical(y_test, n_classes) #Load saved model reconstructed = keras.models.load_model...
Image 0 - Target = 16, Predicted = 6 Image 1 - Target = 1, Predicted = 6 Image 2 - Target = 38, Predicted = 6 Image 3 - Target = 33, Predicted = 6 Image 4 - Target = 11, Predicted = 6 Image 5 - Target = 38, Predicted = 6 Image 6 - Target = 18, Predicted = 6 Image 7 - Target = 12, Predicted = 6 Image 8 - Target = 25, Pr...
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
We will display a confusion matrix on the test dataset to figure out our error rate.
`X_test_norm`: test dataset
`y_test`: test dataset ground-truth labels
`y_pred_class`: prediction labels on the test dataset
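A minimal sketch of how `y_pred_class` can be obtained before filling the matrix, assuming the model loaded as `reconstructed` in the prediction cell above:

```python
import numpy

# Class predictions as the argmax over the model's softmax outputs.
y_pred = reconstructed.predict(X_test_norm)
y_pred_class = numpy.argmax(y_pred, axis=1)
```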
confusion_matrix = numpy.zeros([n_classes, n_classes])
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
confusion_matrix
`column`: test dataset ground-truth labels
`row`: prediction labels on the test dataset
`diagonal`: incremented when the prediction matches the ground-truth label (off-diagonal cells are decremented on a mismatch)
for ij in range(len(X_test_norm)): if y_test[ij] == y_pred_class[ij]: confusion_matrix[y_test[ij],y_test[ij]] += 1 else: confusion_matrix[y_pred_class[ij],y_test[ij]] -= 1 column_label = [' L % d' % x for x in range(n_classes)] row_label = [' P % d' % x for x in range(n_classes)] # Pl...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
Thanks to the confusion matrix, we could identify where to enhance:
- [x] the training dataset
- [x] real-time data augmentation
- [x] preprocessing

*Extract of the confusion matrix of classification on the test dataset ↓*

Prediction of new instances with the trained model
We will use the test dataset to test the trained model's predic...
# load french traffic signs import os import cv2 import matplotlib.pyplot as plot import numpy dir_frenchsign = 'french_traffic-signs-data' images_frenchsign = [os.path.join(dir_frenchsign, f) for f in os.listdir(dir_frenchsign)] images_frenchsign = numpy.array([cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2RGB) for ...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow
*Enhanced German traffic signs dataset ↓*
# manually label these new images y_frenchsign = [13, 31, 29, 24, 26, 27, 33, 17, 15, 34, 12, 2, 2, 4, 2] n_classes = n_classes_enhanced # when a sign isn't present in our training dataset, we'll try to find a sufficiently 'similar' sign to label it. # image 2 : class 29 differed # image 3 : class 24,...
_____no_output_____
MIT
traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb
nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow