Custom sorting of plot series
import pandas as pd
import numpy as np
from pandas.api.types import CategoricalDtype
from plotnine import *
from plotnine.data import mpg
%matplotlib inline
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
Bar plot of manufacturer - Default Output
(ggplot(mpg) + aes(x='manufacturer') + geom_bar(size=20) + coord_flip() + labs(y='Count', x='Manufacturer', title='Number of Cars by Make') )
Bar plot of manufacturer - Ordered by count (Categorical). By default the discrete values along the axis are ordered alphabetically. If we want a specific ordering, we use a pandas.Categorical variable with categories ordered to our preference.
# Determine order and create a categorical type
# Note that value_counts() is already sorted
manufacturer_list = mpg['manufacturer'].value_counts().index.tolist()
manufacturer_cat = pd.Categorical(mpg['manufacturer'], categories=manufacturer_list)

# assign to a new column in the DataFrame
mpg = mpg.assign(manufacturer...
We could also modify the **existing manufacturer category** to set it as ordered, instead of having to create a new CategoricalDtype and apply that to the data.
# reorder the categories of the existing categorical column
mpg = mpg.assign(manufacturer_cat=mpg['manufacturer_cat'].cat.reorder_categories(manufacturer_list))
Bar plot of manufacturer - Ordered by count (limits). Another method to quickly reorder a discrete axis without changing the data is to change its limits.
# Determine the order
# Note that value_counts() is already sorted
manufacturer_list = mpg['manufacturer'].value_counts().index.tolist()

(ggplot(mpg)
 + aes(x='manufacturer_cat')
 + geom_bar(size=20)
 + scale_x_discrete(limits=manufacturer_list)
 + coord_flip()
 + labs(y='Count', x='Manufactu...
You can 'flip' an axis (independent of limits) by reversing the order of the limits.
# Determine the order, reversed
# Note that value_counts() is already sorted
manufacturer_list = mpg['manufacturer'].value_counts().index.tolist()[::-1]

(ggplot(mpg)
 + aes(x='manufacturer_cat')
 + geom_bar(size=20)
 + scale_x_discrete(limits=manufacturer_list)
 + coord_flip()
 + labs(y='Count', x='Man...
Breast Cancer Wisconsin (Diagnostic) Prediction. *Predict whether the cancer is benign or malignant.* Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. … in the 3-dimensional space is that described in: [K. P. ...
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
import seaborn as sns  # used for interactive plots
from sklearn.linear_model import LogisticRegression  # to apply logistic regression
from s...
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Data Cleaning
# df is the DataFrame variable; here I import the dataset into it
df = pd.read_csv("cancer.csv")
# show the top 5 rows
df.head()
# drop the unneeded columns
df.drop(['id'], axis=1, inplace=True)
df.drop(['Unnamed: 32'], axis=1, inplace=True)
df.info()
# lets now start with features_mean
# now as ou...
Data Analysis
# plotting the diagnosis counts
sns.countplot(df['diagnosis'], label="Count")
feat_mean = list(df.columns[1:11])
feat_se = list(df.columns[11:21])  # was [11:20], which skipped the last "_se" feature
feat_worst = list(df.columns[21:31])
corr = df[feat_mean].corr()  # .corr() computes pairwise correlations
plt.figure(figsize=(14,14))
sns.heatmap(corr, cbar = True, square = True,...
Splitting the dataset into two parts
# split the dataset into two parts
train_set, test_set = train_test_split(df, test_size=0.2)
# print the data shapes
print(train_set.shape)
print(test_set.shape)
(455, 31) (114, 31)
**Training Set :**
# main_pred_var must hold the predictor columns; the five columns used later
# in this notebook (pred_var) are assumed here
main_pred_var = ['radius_mean','perimeter_mean','area_mean','compactness_mean','concave points_mean']
x_train = train_set[main_pred_var]
y_train = train_set.diagnosis
print(y_train.shape)
print(x_train.shape)
(455,) (455, 5)
**Test Set :**
x_test = test_set[main_pred_var]
y_test = test_set.diagnosis
print(y_test.shape)
print(x_test.shape)
(114,) (114, 5)
Various Algorithms. Now I will train on this breast cancer dataset using various algorithms and see how each of them behaves with respect to the others: Random Forest, SVM, Decision Tree, K-Nearest Neighbors, and Gaussian Naive Bayes. Random Forest Algorithm
# assign the classifier to the variable algo_one
algo_one = RandomForestClassifier()
algo_one.fit(x_train, y_train)
# predict on the held-out test set
prediction = algo_one.predict(x_test)
metrics.accuracy_score(prediction, y_test)
Support Vector Machine Algorithm (SVM)
algo_two = svm.SVC()
algo_two.fit(x_train, y_train)
# predict on the held-out test set
prediction = algo_two.predict(x_test)
metrics.accuracy_score(prediction, y_test)
Decision Tree Classifier Algorithm
algo_three = DecisionTreeClassifier()
algo_three.fit(x_train, y_train)
# predict on the held-out test set
prediction = algo_three.predict(x_test)
metrics.accuracy_score(prediction, y_test)
K-Nearest Neighbors Classifier Algorithm
algo_four = KNeighborsClassifier()
algo_four.fit(x_train, y_train)
# predict on the held-out test set
prediction = algo_four.predict(x_test)
metrics.accuracy_score(prediction, y_test)
GaussianNB Algorithm
algo_five = GaussianNB()
algo_five.fit(x_train, y_train)
prediction = algo_five.predict(x_test)
metrics.accuracy_score(prediction, y_test)
Tuning parameters using GridSearchCV. Let's start with the Random Forest Classifier. Tuning the parameters means using the best parameters for prediction; a RandomForestClassifier has many parameters that need to be set when modeling.
pred_var = ['radius_mean','perimeter_mean','area_mean','compactness_mean','concave points_mean']
# create new variables
x_grid = df[pred_var]
y_grid = df["diagnosis"]

# lets make a function for GridSearchCV
def Classification_model_gridsearchCV(model, param_grid, x_grid, y_grid):
    clf = GridSearchCV(model, param_grid, cv...
The best parameter found on development set is : {'max_features': 'log2', 'min_samples_leaf': 2, 'min_samples_split': 6} the bset estimator is RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=None, max_features='log2', max_leaf_nodes=None, min_impurity_decre...
Simple gradient computation - z = 2x^2 + 3
# First, import PyTorch
import torch

# Initialize x as a tensor with the values [2.0, 3.0] and turn gradient tracking on
# z = 2x^2 + 3
x = torch.tensor(data=[2.0, 3.0], requires_grad=True)
y = x**2
z = 2*y + 3

# https://pytorch.org/docs/stable/autograd.html?highlight=backward#torch.autograd.backward
# Specify the target values
target = torch.tensor([3.0, 4.0])
# z and ...
tensor([ 8., 12.]) None None
MIT
Linear Regression Analysis/Calculate Gradients.ipynb
TaehoLi/Pytorch-secondstep
Step 1. Define the function to integrate.
import numpy as np

def f(x):
    return np.log(3 + x)
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
Step 2. Define functions for the different methods.
# Simple trapezoid: (b - a) * (f(a) + f(b)) / 2
def simpleTrapezoid(a, b):
    return (b - a) * (f(a) + f(b)) / 2

def simpleTrapezoidError(realIntegrate, a, b):
    return (abs(realIntegrate - simpleTrapezoid(a, b)) / realIntegrate) * 100

# compound trapezoid
def compoundTrapezoid(a, b, x):
    sum = 0
    for i in range(1, n):
        sum += 2 * f(x[i...
Step 3. Define entry values
# test values
a = 0.1
b = 3.1
realIntegrate = -5.82773
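As a sanity check on `realIntegrate`, the integral of ln(3 + x) has a closed form; a minimal sketch (the helper `F` is ours, not part of the notebook):

```python
import math

# Antiderivative of ln(3 + x) is (3 + x) * (ln(3 + x) - 1)
def F(x):
    return (3 + x) * (math.log(3 + x) - 1)

a, b = 0.1, 3.1
exact = F(b) - F(a)
print(exact)  # ~4.5232
```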
a. Test of the simple rules
# simple trapezoid
print(f"La integral aproximada por la regla de trapecio simple es: {simpleTrapezoid(a, b)} ; error (%): {simpleTrapezoidError(realIntegrate, a, b)} ")

# simple Simpson 1/3
n = 2
x = np.linspace(a, b, n+1)
print(f"La integral aproximada por la regla de Simpson 1/3 simle es: {simpleSimpson1_3(x)} : er...
La integral aproximada por la regla de trapecio simple es: 8.819072648011097 ; error (%): -251.32946529799932 La integral aproximada por la regla de Simpson 1/3 simle es: 4.521958048325281 : error (%): -1.7759381523037754 La integral aproximada por la regla de Simpson 3/8 simple es: 4.522640033621997 ; error (%): -1....
b. Test of the compound rules
# compound trapezoid
n = 18  # number of segments
x = np.linspace(a, b, n+1)
print(f"La integral aproximada por la regla de trapecio compuesto es: {compoundTrapezoid(a, b, x)} ; error (%): {compoundTrapezoidError(realIntegrate, a, b, x)} ")

# compound Simpson 1/3
n = 18  # even number
x = np.linspace(a, b, n+1)
print(f"La...
La integral aproximada por la regla de trapecio compuesto es: 4.522847784399211 ; error (%): -177.6090825141043 La integral aproximada por la regla de Simpson 1/3 compuesta es: 4.523214709695509 : error (%): -1.776153787099867 La integral aproximada por la regla de Simpson 3/8 compuesta es: 4.523214401106994 ; error ...
Step 5: Gauss quadrature with 2 and 3 points
# 2 points
xd_0 = -0.577350269
xd_1 = 0.577350269
C0 = 1
C1 = 1
dx = (b - a) / 2
x0 = ((b+a) + (b-a) * xd_0) / 2
x1 = ((b+a) + (b-a) * xd_1) / 2
F0 = f(x0) * dx
F1 = f(x1) * dx
integralGaussQuadrature = C0 * F0 + C1 * F1
integralGaussQuadratureError = (abs(realIntegrate - integralGaussQuadrature) / realIntegrate) *...
la integral aproximada por el metodo de cuadratura de Gauss con 2 puntos es: 4.5240374652688375 ; y el error (%): -177.62949665253603 la integral aproximada por el metodo de cuadratura de Gauss con 3 puntos es: 4.52323074338855 ; y el error (%): -177.6156538375757
Example with real audio recordings. The iterations are dropped in contrast to the offline version. To use past observations, the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor. Setup
channels = 8
sampling_rate = 16000
delay = 3
alpha = 0.9999
taps = 10
frequency_bins = stft_options['size'] // 2 + 1
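The decaying-window recursion mentioned above can be sketched in NumPy; this is a toy illustration with made-up dimensions, not the notebook's actual WPE update:

```python
import numpy as np

alpha = 0.9999  # decay factor
rng = np.random.default_rng(0)

# Recursive correlation-matrix estimate with a decaying window:
# R_t = alpha * R_{t-1} + y_t y_t^H
R = np.zeros((2, 2), dtype=complex)
for _ in range(100):
    y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    R = alpha * R + np.outer(y, y.conj())

# R is Hermitian by construction
print(np.allclose(R, R.conj().T))  # True
```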
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
Audio data
file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav'
signal_list = [
    sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0]
    for d in range(channels)
]
y = np.stack(signal_list, axis=0)
IPython.display.Audio(y[0], rate=sampling_rate)
Online buffer. For simplicity the STFT is performed before providing the frames. Shape: (frames, frequency bins, channels); frames: K + delay + 1 (K = taps)
Y = stft(y, **stft_options).transpose(1, 2, 0)
T, _, _ = Y.shape

def aquire_framebuffer():
    buffer = list(Y[:taps+delay+1, :, :])
    for t in range(taps+delay+1, T):
        yield np.array(buffer)
        buffer.append(Y[t, :, :])
        buffer.pop(0)
Non-iterative frame online approach. A frame online example requires that certain state variables be kept from frame to frame: the inverse correlation matrix $\text{R}_{t, f}^{-1}$, which is stored in Q and initialized with an identity matrix, as well as the filter coefficient matrix that is stored in G and initial...
Z_list = []
Q = np.stack([np.identity(channels * taps) for a in range(frequency_bins)])
G = np.zeros((frequency_bins, channels * taps, channels))

for Y_step in tqdm(aquire_framebuffer()):
    Z, Q, G = online_wpe_step(
        Y_step,
        get_power_online(Y_step.transpose(1, 2, 0)),
        Q,
        G,
        a...
Frame online WPE in class fashion: the OnlineWPE class holds the correlation matrix and the coefficient matrix.
Z_list = []
online_wpe = OnlineWPE(
    taps=taps,
    delay=delay,
    alpha=alpha
)
for Y_step in tqdm(aquire_framebuffer()):
    Z_list.append(online_wpe.step_frame(Y_step))

Z = np.stack(Z_list)
z = istft(np.asarray(Z).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audi...
Power spectrum. Before and after applying WPE.
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))
im1 = ax1.imshow(20 * np.log10(np.abs(Y[200:400, :, 0])).T, origin='lower')
ax1.set_xlabel('')
_ = ax1.set_title('reverberated')
im2 = ax2.imshow(20 * np.log10(np.abs(Z_stacked[200:400, :, 0])).T, origin='lower')
_ = ax2.set_title('dereverberated')
cb = fig.colorbar...
_Lambda School Data Science_ Scrape and process data. Objectives: scrape and parse web pages; use list comprehensions; select rows and columns with pandas. Links: [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup - [Python List Comprehensions: E...
url = 'https://us.pycon.org/2018/schedule/talks/list/'
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
5 ways to look at long titles. Let's define a long title as one greater than 80 characters. 1. For Loop
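The notebook cell here is empty; a minimal sketch, using a hypothetical `titles` list standing in for the scraped talk titles:

```python
# Hypothetical stand-in for the scraped talk titles.
titles = ['Short talk', 'x' * 100, 'Another short talk', 'y' * 81]

# 1. For loop: collect titles longer than 80 characters.
long_titles = []
for title in titles:
    if len(title) > 80:
        long_titles.append(title)

print(len(long_titles))  # 2
```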
2. List Comprehension
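The same filter as a list comprehension, again on hypothetical data:

```python
titles = ['Short talk', 'x' * 100, 'Another short talk', 'y' * 81]  # hypothetical data

# 2. List comprehension: same filter in one expression.
long_titles = [title for title in titles if len(title) > 80]
print(len(long_titles))  # 2
```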
3. Filter with named function
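A sketch of the same filter with a named predicate passed to `filter()` (hypothetical data again):

```python
titles = ['Short talk', 'x' * 100, 'Another short talk', 'y' * 81]  # hypothetical data

# 3. filter() with a named predicate function.
def is_long(title):
    return len(title) > 80

long_titles = list(filter(is_long, titles))
print(len(long_titles))  # 2
```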
4. Filter with anonymous function
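The same again with an anonymous function (hypothetical data):

```python
titles = ['Short talk', 'x' * 100, 'Another short talk', 'y' * 81]  # hypothetical data

# 4. filter() with an anonymous (lambda) function.
long_titles = list(filter(lambda title: len(title) > 80, titles))
print(len(long_titles))  # 2
```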
5. Pandas. pandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
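With pandas, the `.str` accessor vectorizes the length check; a sketch on a hypothetical DataFrame:

```python
import pandas as pd

# Hypothetical stand-in for the scraped titles.
df = pd.DataFrame({'title': ['Short talk', 'x' * 100, 'Another short talk', 'y' * 81]})

# 5. Pandas: vectorized string length via the .str accessor.
long_titles = df[df['title'].str.len() > 80]
print(len(long_titles))  # 2
```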
Make new dataframe columns. pandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
title length
long title
first letter
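The three columns named above (title length, long title, first letter) can be added in one go; a sketch on hypothetical data using `apply` and the `.str` accessor:

```python
import pandas as pd

df = pd.DataFrame({'title': ['Short talk', 'x' * 100]})  # hypothetical data

df['title length'] = df['title'].apply(len)
df['long title'] = df['title length'] > 80
df['first letter'] = df['title'].str[0]
print(df[['title length', 'long title', 'first letter']].values.tolist())
```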
word count. Using [`textstat`](https://github.com/shivam5992/textstat)
#!pip install textstat
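`textstat.lexicon_count` is what the notebook suggests; as a dependency-free stand-in, a whitespace split gives a comparable word count (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({'title': ['Scrape and process data', 'One']})  # hypothetical data

# Stand-in for textstat.lexicon_count: count whitespace-separated words.
df['title word count'] = df['title'].str.split().str.len()
print(df['title word count'].tolist())  # [4, 1]
```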
Rename column: `title length` --> `title character count`. pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
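A sketch of the rename described above, on a hypothetical one-column DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'title length': [10, 100]})  # hypothetical data

# Rename the column via a mapping of old name -> new name.
df = df.rename(columns={'title length': 'title character count'})
print(df.columns.tolist())  # ['title character count']
```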
Analyze the dataframe. Describe. pandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
Sort values. pandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html). Five shortest titles, by character count
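A sketch of the five-shortest query on hypothetical titles:

```python
import pandas as pd

# Hypothetical titles of varying length.
df = pd.DataFrame({'title': ['bb', 'a', 'dddd', 'ccc', 'eeeee', 'ffffff', 'ggggggg']})
df['title character count'] = df['title'].str.len()

# Sort ascending by length, then take the first five rows.
shortest = df.sort_values('title character count').head(5)
print(shortest['title'].tolist())  # ['a', 'bb', 'ccc', 'dddd', 'eeeee']
```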
Titles sorted reverse alphabetically
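A sketch of the reverse-alphabetical sort on hypothetical data; note that with the default string ordering, capital letters sort before lowercase:

```python
import pandas as pd

df = pd.DataFrame({'title': ['alpha', 'Charlie', 'bravo']})  # hypothetical data

# ascending=False reverses the default lexicographic order.
print(df.sort_values('title', ascending=False)['title'].tolist())
```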
Get value counts. pandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html). Frequency counts of first letters
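A sketch of the frequency count on a hypothetical `first letter` column:

```python
import pandas as pd

df = pd.DataFrame({'first letter': ['P', 'A', 'P', 'B', 'P', 'A']})  # hypothetical data

# value_counts() returns frequencies sorted from most to least common.
counts = df['first letter'].value_counts()
print(counts.to_dict())  # {'P': 3, 'A': 2, 'B': 1}
```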
Percentage of talks with long titles
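The percentage falls out of a boolean column directly; a sketch on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({'long title': [True, False, False, True]})  # hypothetical data

# The mean of a boolean column is the fraction of True values.
print(df['long title'].mean() * 100)  # 50.0
```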
Plot. pandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html). Top 5 most frequent first letters
Histogram of title lengths, in characters
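Both plots above can be sketched with pandas' `.plot` on hypothetical columns (the Agg backend keeps this runnable in a script):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for scripted runs
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical columns standing in for the scraped data.
df = pd.DataFrame({'first letter': list('PPPAABTTDC'),
                   'title character count': [12, 85, 40, 95, 33, 70, 81, 25, 60, 50]})

# Top 5 most frequent first letters, as a bar chart.
df['first letter'].value_counts().head(5).plot(kind='bar')
plt.show()

# Histogram of title lengths, in characters.
df['title character count'].plot(kind='hist')
plt.show()
```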
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstatthe-flesch-kincaid-grade-le...
import bs4
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
descriptions = [tag.text.strip() for tag in soup.select('.presentation-description')]
print(len(descriptions))
print(descriptions)
df = pd.D...
Specifications. These parameters need to be specified prior to running the predictive GAN.
# modes
gp = False
sn = True
# steps
train = False
fixed_input = True
eph = '9999'  # the model to read if not training
# parameters
DATASET = 'sine'  # sine, moon, 2spirals, circle, helix
suffix = '_sn'  # suffix of output folder
LATENT_DIM = 2  # latent space dimension
DIM = 512  # 512 Model dimensionality
INPUT_DIM =...
MIT
gan_toy_example.ipynb
acse-wx319/gans-on-multimodal-data
Make generator and discriminator. Initialize the generator and discriminator objects. The architectures have been declared in lib.models.
netG = Generator(LATENT_DIM, DIM, DROPOUT_RATE, INPUT_DIM)
if sn:
    netD = DiscriminatorSN(DIM, INPUT_DIM)
else:
    netD = Discriminator(DIM, INPUT_DIM)
netG.apply(weights_init)
netD.apply(weights_init)
Train or load model. If in training mode, a WGAN with either SN or GP will be trained on the specified type of synthetic data (sine, circle, half-moon, helix or double-spirals). A log file will be created with all specifications to keep track of the runs. Loss will be plotted against the number of epochs. Randomly gener...
if train:
    # start writing log
    f = open(TMP_PATH + "log.txt", "w")
    # print specifications
    f.write('gradient penalty: ' + str(gp))
    f.write('\n spectral normalization: ' + str(sn))
    f.write('\n dataset: ' + DATASET)
    f.write('\n hidden layer dimension: ' + str(DIM))
    f.write('\n latent space ...
Prediction. For a list of x values, use the trained GAN to make multiple predictions for y. For each x, many predictions will be made. A subset is taken depending on the similarity between the generated x and the specified x.
if fixed_input:
    data = make_data_iterator(DATASET, BATCH_SIZE)  # sine data
    preds = None
    for x in np.linspace(-4., 4., 17):
        print(x)
        out = predict_fixed(netG, x, 80, 8, INPUT_DIM, LATENT_DIM, use_cuda)
        if preds is None:  # 'is None', not '== None'
            preds = out
        else:
            preds = to...
-4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
Application Programming Interface (API). An API lets two pieces of software talk to each other. Just like a function, you don't have to know how the API works, only its inputs and outputs. An essential type of API is a REST API that allows you to access resources via the internet. In this lab, we will review the ...
!pip install nba_api
Collecting nba_api Downloading https://files.pythonhosted.org/packages/fd/94/ee060255b91d945297ebc2fe9a8672aee07ce83b553eef1c5ac5b974995a/nba_api-1.1.8-py3-none-any.whl (217kB) |████████████████████████████████| 225kB 2.7MB/s Requirement already satisfied: requests in /usr/local/lib/python3.6/dis...
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
Pandas is an API. You will use this function in the lab:
def one_dict(list_dict):
    keys = list_dict[0].keys()
    out_dict = {key: [] for key in keys}
    for dict_ in list_dict:
        for key, value in dict_.items():
            out_dict[key].append(value)
    return out_dict
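To see what `one_dict` does, here is a quick run on toy data shaped like the NBA API's list of dicts (the team entries are made up for illustration):

```python
def one_dict(list_dict):
    keys = list_dict[0].keys()
    out_dict = {key: [] for key in keys}
    for dict_ in list_dict:
        for key, value in dict_.items():
            out_dict[key].append(value)
    return out_dict

# Toy list of dicts, shaped like the output of teams.get_teams().
teams = [{'id': 1, 'nickname': 'Warriors'}, {'id': 2, 'nickname': 'Raptors'}]
print(one_dict(teams))  # {'id': [1, 2], 'nickname': ['Warriors', 'Raptors']}
```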
Pandas is an API. Pandas is actually a set of software components, much of which is not even written in Python.
import pandas as pd
import matplotlib.pyplot as plt
You create a dictionary; this is just data.
dict_={'a':[11,21,31],'b':[12,22,32]}
When you create a Pandas object with the DataFrame constructor, in API lingo this is an "instance". The data in the dictionary is passed along to the pandas API. You then use the dataframe to communicate with the API.
df = pd.DataFrame(dict_)
type(df)
When you call the method head, the dataframe communicates with the API, displaying the first few rows of the dataframe.
df.head()
When you call the method mean, the API will calculate the mean and return the value.
df.mean()
REST APIs. REST APIs function by sending a request; the request is communicated via an HTTP message. The HTTP message usually contains a JSON file, which contains instructions for what operation we would like the service or resource to perform. In a similar manner, the API returns a response via an HTTP message; this respons...
from nba_api.stats.static import teams
import matplotlib.pyplot as plt
# https://pypi.org/project/nba-api/
The method get_teams() returns a list of dictionaries; the dictionary key id has a unique identifier for each team as its value.
nba_teams = teams.get_teams()
The dictionary key id has a unique identifier for each team as a value, let's look at the first three elements of the list:
nba_teams[0:3]
To make things easier, we can convert the list of dictionaries to a table. First, we use the function `one_dict` to create a dictionary: we use the common keys for each team as the keys, and each value is a list whose elements correspond to the values for each team. We then convert the dictionary to a dataframe; each row c...
dict_nba_team = one_dict(nba_teams)
df_teams = pd.DataFrame(dict_nba_team)
df_teams.head()
We will use the team's nickname to find the unique id; we can see the row that contains the Warriors by using the column nickname as follows:
df_warriors = df_teams[df_teams['nickname'] == 'Warriors']
df_warriors
We can use the following line of code to access the first column of the dataframe:
id_warriors = df_warriors[['id']].values[0][0]
# we now have an integer that can be used to request the Warriors information
id_warriors
The function "League Game Finder " will make an API call, its in the module stats.endpoints
from nba_api.stats.endpoints import leaguegamefinder
The parameter team_id_nullable is the unique ID for the Warriors. Under the hood, the NBA API is making an HTTP request. The information requested is provided and transmitted via an HTTP response, which is assigned to the object gamefinder.
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP,
# the following code is commented out; you can run it in Jupyter on your own computer.
# gamefinder = leaguegamefinder.LeagueGameFinder(team_id_nullable=id_warriors)
We can see the JSON file by running the following line of code:
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP,
# the following code is commented out; you can run it in Jupyter on your own computer.
# gamefinder.get_json()
The gamefinder object has a method get_data_frames() that returns a dataframe. If we view the dataframe, we can see it contains information about all the games the Warriors played. The PLUS_MINUS column contains information on the score: if the value is negative, the Warriors lost by that many points; if the value i...
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP,
# the following code is commented out; you can run it in Jupyter on your own computer.
# games = gamefinder.get_data_frames()[0]
# games.head()
You can download the dataframe from the API call for Golden State and run the rest of the lab.
! wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Labs/Golden_State.pkl
file_name = "Golden_State.pkl"
games = pd.read_pickle(file_name)
games.head()
We can create two dataframes: one for the games where the Warriors faced the Raptors at home, and a second for the away games.
games_home = games[games['MATCHUP'] == 'GSW vs. TOR']
games_away = games[games['MATCHUP'] == 'GSW @ TOR']
We can calculate the mean for the column PLUS_MINUS for the dataframes games_home and games_away:
print(games_home['PLUS_MINUS'].mean())
print(games_away['PLUS_MINUS'].mean())
We can plot the PLUS_MINUS column for the dataframes games_home and games_away. We see the Warriors played better at home.
fig, ax = plt.subplots()
games_away.plot(x='GAME_DATE', y='PLUS_MINUS', ax=ax)
games_home.plot(x='GAME_DATE', y='PLUS_MINUS', ax=ax)
ax.legend(["away", "home"])
plt.show()
About the Authors: [Joseph Santarcangelo]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed ...
Machine Learning Engineer Nanodegree Unsupervised Learning Project: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to...
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display  # Allows the use of display() for DataFrames
from scipy.stats import skew

# Import supplementary visualizations code visuals.py
import visuals as vs

# Pretty display for notebooks
%matplotlib inline...
Wholesale customers dataset has 440 samples with 6 features each.
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Data ExplorationIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you w...
# Display a description of the dataset
display(data.describe())
# Looking at the statistics, the mean and the 50% value (the median) are far apart for every column,
# so the data is skewed; apply log to every column (or use a Box-Cox test)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
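The skew check hinted at in the comment above can be sketched in a self-contained way (a modern Python 3 illustration on synthetic data; the log-normal parameters are made up and only stand in for the real wholesale features):

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

# Synthetic stand-in for a right-skewed annual-spend feature
# (the log-normal parameters are assumptions, not the real data).
rng = np.random.default_rng(0)
data = pd.DataFrame({'Fresh': rng.lognormal(mean=9, sigma=1, size=440)})

# For a right-skewed feature the mean sits well above the median...
print(data['Fresh'].mean() > data['Fresh'].median())

# ...and a log transform pulls the skew close to zero.
raw_skew = skew(data['Fresh'])
log_skew = skew(np.log(data['Fresh']))
print(raw_skew > 1, abs(log_skew) < 0.5)
```

The same mean-vs-median gap shows up in `data.describe()` above, which is what motivates the log scaling applied later in the notebook.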
Implementation: Selecting SamplesTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will rep...
import seaborn as sns

# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [23, 57, 234]

# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
displ...
Chosen samples of wholesale customers dataset:
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Question 1Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. *What kind of establishment (customer) could each of the three samples you've chosen represent?* **Hint:** Examples of establishments include places like markets, cafes,...
'''# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = None

# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = (None, None, None, None)

# TODO: Create a decision tree regressor and fit it to th...
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Question 2*Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?* **Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit ...
import seaborn as sns

# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha=0.3, figsize=(14, 8), diagonal='kde');
# Visualize pairwise correlations as a heatmap
sns.heatmap(data.corr(), annot=True)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
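The `R^2` hint in Question 2 can be made concrete with a tiny hand computation (a sketch with made-up numbers, not the project's actual regression): `R^2 = 1 - SS_res / SS_tot`, so predictions close to the true values push the score toward 1.

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])        # true values of the held-out feature
y_pred = np.array([1.1, 1.9, 3.2, 3.8])   # hypothetical regressor predictions

ss_res = np.sum((y - y_pred) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # -> 0.98
```

A high score like this would mean the feature is largely predictable from the others and therefore less necessary for identifying spending habits.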
Question 3*Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?* **Hint:** Is the data normally distributed? Where do most of the data points lie?...
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)

# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)

# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
log_da...
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
ObservationAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before...
# Display the log-transformed sample data
display(log_samples)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Implementation: Outlier DetectionDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results that take these data points into consideration. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we ...
# For each feature find the data points with extreme high or low values
outindex = {}
for feature in log_data.keys():

    # TODO: Calculate Q1 (25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature], 25)

    # TODO: Calculate Q3 (75th percentile of the data) for the given f...
Data points considered outliers for the feature 'Fresh':
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
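Tukey's 1.5 × IQR rule used in the outlier step above can be illustrated on its own (a standalone sketch with made-up values; in the notebook it is applied per feature of `log_data`):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 12.0])
q1, q3 = np.percentile(x, [25, 75])
step = 1.5 * (q3 - q1)                      # Tukey's 1.5 * IQR step
outliers = x[(x < q1 - step) | (x > q3 + step)]
print(outliers)  # -> [12.]
```

Any point more than `step` below Q1 or above Q3 is flagged; here only the 12.0 falls outside the [Q1 - step, Q3 + step] interval.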
Question 4*Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.* **Answer:**[128, 154, 65, 66, 75] are the outliers and should be rem...
from sklearn.decomposition import PCA

# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components = 6)
pca.fit(good_data)

# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)

# Generate PCA results plot
pca_results = vs.pca_...
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
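The explained-variance ratios produced by the PCA fit above can be computed without scikit-learn from the SVD of the centered data (a hedged sketch on synthetic data; the notebook reads these ratios off the `vs.pca_results` plot):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X[:, 1] += 3 * X[:, 0]                   # make two columns strongly correlated

Xc = X - X.mean(axis=0)                  # center the data, as PCA does
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)      # per-component explained variance ratio
print(np.round(np.cumsum(explained), 3)) # cumulative variance; first component dominates
```

Because two columns are nearly collinear, the first component alone captures most of the variance, which is the situation the question asks you to quantify.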
Question 5*How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.* **Hint:** A positive increas...
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Implementation: Dimensionality ReductionWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data, in effect reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being...
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components = 2)
pca.fit(good_data)

# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)

# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)

# Create a Da...
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Visualizing a BiplotA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can...
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
ObservationOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, ...
from sklearn.metrics import silhouette_score
from sklearn.mixture import GMM

scorer = {}  # For n sample points, 2 to n-1 clusters can be created.
for i in range(2, 10):
    clusterer = GMM(n_components = i)
    clusterer.fit(reduced_data)
    pred = clusterer.predict(reduced_data)
    score = silhouette_score(reduced_data,...
C:\Users\admin\Anaconda2\lib\site-packages\sklearn\utils\deprecation.py:52: DeprecationWarning: Class GMM is deprecated; The class GMM is deprecated in 0.18 and will be removed in 0.20. Use class GaussianMixture instead.
  warnings.warn(msg, category=DeprecationWarning)
C:\Users\admin\Anaconda2\lib\site-packages\sklea...
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
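The silhouette score driving the cluster-count choice can be computed by hand for a tiny 1-D example (an illustration of the metric only, not of the GMM pipeline; `sklearn.metrics.silhouette_score` generalizes this, and note the warning above: `GMM` is deprecated in favor of `GaussianMixture`):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette over all points: s_i = (b_i - a_i) / max(a_i, b_i)."""
    scores = []
    for i in range(len(X)):
        same = X[labels == labels[i]]
        other = X[labels != labels[i]]  # only two clusters here, so no min over clusters needed
        a = np.abs(same - X[i]).sum() / (len(same) - 1)  # mean intra-cluster distance
        b = np.abs(other - X[i]).mean()                  # mean distance to the other cluster
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

X = np.array([0.0, 0.2, 5.0, 5.2])   # two tight, well-separated clusters
labels = np.array([0, 0, 1, 1])
print(round(silhouette(X, labels), 2))  # -> 0.96
```

Tight, well-separated clusters score near 1, which is why the two-cluster solution with the highest silhouette above wins.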
Question 7*Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?* **Answer:**{2: 0.41181886438624482, 3: 0.37616616509083634, 4: 0.34168407828470648, 5: 0.28001985722335737, 6: 0.26923051036000389, 7: 0.32398601556485884, 8: 0.304106857662...
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Implementation: Data RecoveryEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's ce...
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)

# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)

# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0, len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()...
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
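Exponentiating a center that was computed in log space, as above, recovers the geometric mean of the original spends rather than the arithmetic mean. A quick check with made-up numbers:

```python
import numpy as np

spend = np.array([100.0, 1000.0, 10000.0])  # hypothetical annual spends in one cluster
log_center = np.log(spend).mean()           # cluster center lives in log space
true_center = np.exp(log_center)            # inverse transform back to currency units
print(true_center)   # geometric mean, ~1000
print(spend.mean())  # arithmetic mean, 3700
```

The geometric mean is much less sensitive to the large values typical of skewed spending data, which is part of why the log transform was applied in the first place.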
Question 8Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?* **Hint:** A customer who is assigned to `...
# Display the predictions
for i, pred in enumerate(sample_preds):
    print "Sample point", i, "predicted to be in Cluster", pred
display(samples)
Sample point 0 predicted to be in Cluster 0
Sample point 1 predicted to be in Cluster 0
Sample point 2 predicted to be in Cluster 1
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
**Answer:** The third datapoint is correctly classified by both me and the GMM. The first two aren't exactly misclassified, since I have put them into two categories, Supermarket and Convenience Store, which can both also be called Retailers. Conclusion In this final section, you will investigate ways that you can make use of the...
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
SSD Evaluation TutorialThis is a brief tutorial that explains how to compute the average precisions for any trained SSD model using the `Evaluator` class. The `Evaluator` computes the average precisions according to the Pascal VOC pre-2010 or post-2010 detection evaluation algorithms. You can find details about these com...
from keras import backend as K
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam
from matplotlib.pyplot import imread
import numpy as np
from matplotlib import pyplot as plt

from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from ker...
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
1. Load a trained SSDEither load a trained model or build a model and load trained weights into it. Since the HDF5 files I'm providing contain only the weights for the various SSD versions, not the complete models, you'll have to go with the latter option when using this implementation for the first time. You can then...
# 1: Build the Keras model

K.clear_session()  # Clear previous models from memory.

model = ssd_300(image_size=(img_height, img_width, 3),
                n_classes=n_classes,
                mode=model_mode,
                l2_regularization=0.0005,
                scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05],  # Th...
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
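As a side note, the `scales` list above sets the default (anchor) box sizes per predictor layer as fractions of the image's shorter side, so for a 300-pixel input the box sizes work out roughly as follows (a sketch using the scale values from the code above):

```python
img_size = 300                                   # SSD300's input resolution
scales = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05]

# Each scale times the (shorter) image side gives that layer's base box size in pixels.
box_sizes = [round(s * img_size) for s in scales]
print(box_sizes)  # -> [30, 60, 111, 162, 213, 264, 315]
```

Smaller scales on the earlier, higher-resolution feature maps let the model detect small objects, while later layers cover large ones.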
Or 1.2. Load a trained modelWe set `model_mode` to 'inference' above, so the evaluator expects that you load a model that was built in 'inference' mode. If you're loading a model that was built in 'training' mode, change the `model_mode` parameter accordingly.
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = 'path/to/trained/model.h5'

# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)

K.clear_session()  # Clear previous models from memory.

model = load_model(model_pat...
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
2. Create a data generator for the evaluation datasetInstantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase.
dataset = DataGenerator()

# TODO: Set the paths to the dataset here.
Pascal_VOC_dataset_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages/'
Pascal_VOC_dataset_annotations_dir = '../../datasets/VOCdevkit/VOC2007/Annotations/'
Pascal_VOC_dataset_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Ma...
test.txt: 100%|██████████| 4952/4952 [00:13<00:00, 373.84it/s]
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
3. Run the evaluationNow that we have instantiated a model and a data generator to serve the dataset, we can set up the evaluator and run the evaluation.The evaluator is quite flexible: It can compute the average precisions according to the Pascal VOC pre-2010 algorithm, which samples 11 equidistant points of the prec...
evaluator = Evaluator(model=model,
                      n_classes=n_classes,
                      data_generator=dataset,
                      model_mode=model_mode)

results = evaluator(img_height=img_height,
                    img_width=img_width,
                    batch_size=8,
                    data_generat...
Number of images in the evaluation dataset: 4952

Producing predictions batch-wise: 100%|██████████| 619/619 [02:17<00:00, 4.50it/s]
Matching predictions to ground truth, class 1/20.: 100%|██████████| 7902/7902 [00:00<00:00, 19253.00it/s]
Matching predictions to ground truth, class 2/20.: 100%|██████████| 4276/4276 [0...
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
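The pre-2010 Pascal VOC metric described above can be sketched as follows (an illustration of the 11-point interpolation on a made-up precision-recall curve, not the `Evaluator`'s full matching logic):

```python
import numpy as np

def eleven_point_ap(recall, precision):
    """Average of the max precision at recalls >= t, for t = 0.0, 0.1, ..., 1.0."""
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        mask = recall >= t
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap

recall = np.array([0.0, 0.5, 1.0])     # made-up monotone recall values
precision = np.array([1.0, 1.0, 0.5])  # corresponding precisions
print(round(eleven_point_ap(recall, precision), 3))  # -> 0.773
```

The post-2010 algorithm instead integrates over every unique recall value, which is why the two modes can give slightly different average precisions for the same predictions.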