Now we can pass these sets into a series of different training & testing algorithms and compare their results.

___

Train a Logistic Regression classifier

One of the simplest multi-class classification tools is [logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). Scikit-learn offers a variety of algorithmic solvers; we'll use [L-BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS).
from sklearn.linear_model import LogisticRegression

lr_model = LogisticRegression(solver='lbfgs')
lr_model.fit(X_train, y_train)
Apache-2.0
nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb
rishuatgithub/MLPy
Test the Accuracy of the Model
from sklearn import metrics

# Create a prediction set:
predictions = lr_model.predict(X_test)

# Print a confusion matrix
print(metrics.confusion_matrix(y_test, predictions))

# You can make the confusion matrix less confusing by adding labels:
df = pd.DataFrame(metrics.confusion_matrix(y_test, predictions),
                  index=['ham', 'spam'], columns=['ham', 'spam'])
df
These results are terrible! More spam messages were misclassified as ham (241) than were correctly identified as spam (5), although relatively few ham messages (46) were misclassified as spam.
# Print a classification report
print(metrics.classification_report(y_test, predictions))

# Print the overall accuracy
print(metrics.accuracy_score(y_test, predictions))
0.84393692224
This model performed *worse* than a classifier that assigned every message to "ham" would have!

___

Train a naïve Bayes classifier

One of the most common - and successful - classifiers is [naïve Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html#naive-bayes).
from sklearn.naive_bayes import MultinomialNB

nb_model = MultinomialNB()
nb_model.fit(X_train, y_train)
Run predictions and report on metrics
predictions = nb_model.predict(X_test)
print(metrics.confusion_matrix(y_test, predictions))
[[1583   10]
 [ 246    0]]
The total number of confusions dropped from **287** to **256** (241+46=287 for logistic regression vs. 246+10=256 for naïve Bayes).
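As a sanity check, those confusion counts can be read straight off the matrices: the total number of mistakes is everything off the main diagonal. A minimal sketch, with the matrices hard-coded from the outputs above (the logistic regression ham-ham count of 1547 is inferred from the reported accuracy, so treat it as an assumption):

```python
import numpy as np

# Confusion matrices copied from the outputs above (rows: true ham, true spam)
lr_cm = np.array([[1547, 46],
                  [241,   5]])
nb_cm = np.array([[1583, 10],
                  [246,   0]])

# Total misclassifications = matrix sum minus the diagonal (correct) entries
lr_errors = lr_cm.sum() - np.trace(lr_cm)   # 46 + 241
nb_errors = nb_cm.sum() - np.trace(nb_cm)   # 10 + 246
print(lr_errors, nb_errors)  # 287 256
```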
print(metrics.classification_report(y_test, predictions))
print(metrics.accuracy_score(y_test, predictions))
0.860793909734
Better, but still below the 86.6% accuracy that a classifier labeling every message "ham" would achieve.

___

Train a support vector machine (SVM) classifier

Among the SVM options available, we'll use [C-Support Vector Classification (SVC)](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC).
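That 86.6% ham-only baseline follows directly from the test-set class counts, which are visible in the confusion-matrix rows above (1583+10 = 1593 ham, 246 spam):

```python
# Class counts taken from the confusion-matrix rows above
n_ham, n_spam = 1593, 246

# A classifier that predicts "ham" for everything gets all ham right
# and all spam wrong, so its accuracy is just the ham fraction
baseline_accuracy = n_ham / (n_ham + n_spam)
print(round(baseline_accuracy, 4))  # 0.8662
```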
from sklearn.svm import SVC

svc_model = SVC(gamma='auto')
svc_model.fit(X_train, y_train)
Run predictions and report on metrics
predictions = svc_model.predict(X_test)
print(metrics.confusion_matrix(y_test, predictions))
[[1515   78]
 [ 131  115]]
The total number of confusions dropped even further to **209**.
print(metrics.classification_report(y_test, predictions))
print(metrics.accuracy_score(y_test, predictions))
0.886351277868
Install Earth Engine API

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The magic command `%%capture` can be used to hide the output of a specific cell.
# %%capture
# !pip install earthengine-api
# !pip install geehydro
MIT
Gena/map_center_object.ipynb
guy1ziv2/earthengine-py-notebooks
Import libraries
import ee
import folium
import geehydro
Authenticate and initialize the Earth Engine API. You only need to authenticate once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you get an authentication error.
# ee.Authenticate()
ee.Initialize()
Create an interactive map

This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function; the options are `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, and `ESRI`.
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
Add Earth Engine Python script
# Get a single feature
countries = ee.FeatureCollection("USDOS/LSIB_SIMPLE/2017")
country = countries.filter(ee.Filter.eq('country_na', 'Ukraine'))

Map.addLayer(country, {'color': 'orange'}, 'feature collection layer')

# TEST: center feature on a map
Map.centerObject(country, 6)
Display Earth Engine data layers
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
Examples for the AbsComponent Class (v1.1)
%matplotlib inline

# Suppress warnings for these examples
import warnings
warnings.filterwarnings('ignore')

# Imports
try:
    import seaborn as sns; sns.set_style("white")
except:
    pass

import numpy as np
from astropy.table import QTable
import astropy.units as u

from linetools.spectralline import AbsLine
from linetools.isgm import utils as ltiu
from linetools.analysis import absline as laa
from linetools.spectra import io as lsio
from linetools.isgm.abscomponent import AbsComponent
import linetools.analysis.voigt as lav

import imp
lt_path = imp.find_module('linetools')[1]
BSD-3-Clause
docs/examples/AbsComponent_examples.ipynb
marijana777/linetools
Instantiate Standard
abscomp = AbsComponent((10.0*u.deg, 45*u.deg), (14,2), 1.0, [-300,300]*u.km/u.s)
abscomp
From AbsLines

From one line
lya = AbsLine(1215.670*u.AA, z=2.92939)
lya.limits.set([-300., 300.]*u.km/u.s)  # vlim
abscomp = AbsComponent.from_abslines([lya])
print(abscomp)
abscomp._abslines
<AbsComponent: 00:00:00 +00:00:00, Name=HI_z2.92939, Zion=(1,1), Ej=0 1 / cm, z=2.92939, vlim=-300 km / s,300 km / s>
From multiple
lyb = AbsLine(1025.7222*u.AA, z=lya.z)
lyb.limits.set([-300., 300.]*u.km/u.s)  # vlim
abscomp = AbsComponent.from_abslines([lya, lyb])
print(abscomp)
abscomp._abslines

#### Define from a QTable and make a spectrum model

# We first create a QTable with the most relevant information for defining AbsComponents
tab = QTable()
tab['ion_name'] = ['HI', 'HI']
tab['z_comp'] = [0.2, 0.15]  # you should put the right redshifts here
tab['logN'] = [19., 19.]  # you should put the right column densities here
tab['sig_logN'] = [0.1, 0.1]  # you should put the right column density uncertainties here
tab['flag_logN'] = [1, 1]  # Flags correspond to linetools notation
tab['RA'] = [0, 0]*u.deg  # you should put the right coordinates here
tab['DEC'] = [0, 0]*u.deg  # you should put the right coordinates here
tab['vmin'] = [-100, -100]*u.km/u.s  # velocity lower limit for the absorption components
tab['vmax'] = [100, 100]*u.km/u.s  # velocity upper limit for the absorption components
tab['b'] = [20, 20]*u.km/u.s  # you should put the right Doppler parameters here

# We now use this table to create a list of AbsComponents
complist = ltiu.complist_from_table(tab)

# Now we need to add the AbsLines to each component that are relevant for your spectrum.
# This is done by knowing the observed wavelength limits
wvlim = [1150, 1750]*u.AA
for comp in complist:
    comp.add_abslines_from_linelist(llist='HI')  # you can also use llist="ISM" if you have other non-HI components

# Finally, we can create a model spectrum for each AbsComponent
wv_array = np.arange(1150, 1750, 0.01) * u.AA  # This should match your spectrum's wavelength array
model_1 = lav.voigt_from_components(wv_array, [complist[0]])
Loading abundances from Asplund2009
Abundances are relative by number on a logarithmic scale with H=12
Methods

Generate a Component Table
lya.attrib['logN'] = 14.1
lya.attrib['sig_logN'] = 0.15
lya.attrib['flag_N'] = 1
laa.linear_clm(lya.attrib)

lyb.attrib['logN'] = 14.15
lyb.attrib['sig_logN'] = 0.19
lyb.attrib['flag_N'] = 1
laa.linear_clm(lyb.attrib)

abscomp = AbsComponent.from_abslines([lya, lyb])
comp_tbl = abscomp.build_table()
comp_tbl
Synthesize multiple components
SiIItrans = ['SiII 1260', 'SiII 1304', 'SiII 1526']

SiIIlines = []
for trans in SiIItrans:
    iline = AbsLine(trans, z=2.92939)
    iline.attrib['logN'] = 12.8 + np.random.rand()
    iline.attrib['sig_logN'] = 0.15
    iline.attrib['flag_N'] = 1
    iline.limits.set([-300., 50.]*u.km/u.s)  # vlim
    _, _ = laa.linear_clm(iline.attrib)
    SiIIlines.append(iline)
SiIIcomp = AbsComponent.from_abslines(SiIIlines)
SiIIcomp.synthesize_colm()

SiIIlines2 = []
for trans in SiIItrans:
    iline = AbsLine(trans, z=2.92939)
    iline.attrib['logN'] = 13.3 + np.random.rand()
    iline.attrib['sig_logN'] = 0.15
    iline.attrib['flag_N'] = 1
    iline.limits.set([50., 300.]*u.km/u.s)  # vlim
    _, _ = laa.linear_clm(iline.attrib)
    SiIIlines2.append(iline)
SiIIcomp2 = AbsComponent.from_abslines(SiIIlines2)
SiIIcomp2.synthesize_colm()

abscomp.synthesize_colm()
[abscomp, SiIIcomp, SiIIcomp2]

synth_SiII = ltiu.synthesize_components([SiIIcomp, SiIIcomp2])
synth_SiII
Generate multiple components from abslines
comps = ltiu.build_components_from_abslines([lya, lyb, SiIIlines[0], SiIIlines[1]])
comps
Generate an Ion Table
tbl = ltiu.iontable_from_components([abscomp, SiIIcomp, SiIIcomp2])
tbl
Stack plot

Load a spectrum
xspec = lsio.readspec(lt_path+'/spectra/tests/files/UM184_nF.fits')
lya.analy['spec'] = xspec
lyb.analy['spec'] = xspec
Show
abscomp = AbsComponent.from_abslines([lya, lyb])
abscomp.stack_plot()
Notice: This notebook is not yet optimized for memory or performance. Please use it with caution when handling large datasets.

Notice: Please ignore the feature engineering part if you are using a ready dataset.

Feature engineering

This notebook processes BDSE12_03G_HomeCredit_V2.csv for the bear LGBM final.

Prepare work environment
# Pandas for managing datasets
import numpy as np
import pandas as pd
np.__version__, pd.__version__

# math for operating on numbers
import math
import gc

# Change pd display format for floats
pd.options.display.float_format = '{:,.4f}'.format

# Matplotlib for additional customization
from matplotlib import pyplot as plt
%matplotlib inline

# Seaborn for plotting and styling
import seaborn as sns
# Seaborn set() sets aesthetic parameters in one step.
sns.set()
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
--- Read & combine datasets
appl_all_df = pd.read_csv('../../datasets/homecdt_fteng/BDSE12_03G_HomeCredit_V2.csv', index_col=0)
appl_all_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 356255 entries, 0 to 356254
Columns: 797 entries, AMT_ANNUITY to GOODS_PRICE_PREV%
dtypes: float64(741), int64(42), object(14)
memory usage: 2.1+ GB
---
# appl_all_df.apply(lambda x: x.unique().size).describe()

appl_all_df['TARGET'].unique(), \
appl_all_df['TARGET'].unique().size

appl_all_df['TARGET'].value_counts()

appl_all_df['TARGET'].isnull().sum(), \
appl_all_df['TARGET'].size, \
(appl_all_df['TARGET'].isnull().sum()/appl_all_df['TARGET'].size).round(4)

# Make sure we can use the nullness of the 'TARGET' column to separate train & test
# assert appl_all_df['TARGET'].isnull().sum() == appl_test_df.shape[0]
--- Randomized sampling: If the dataset is too large, consider randomized sampling from the original dataset to facilitate development and testing.
# Randomized sampling from the original dataset.
# This is just for simplifying the development process.
# After coding is complete, swap df back to the full appl_all_df and remove this cell.
# Reference: https://yiidtw.github.io/blog/2018-05-29-how-to-shuffle-dataframe-in-pandas/

# df = appl_all_df.sample(n=1000).reset_index(drop=True)
# df.shape
# df.head()
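A minimal runnable sketch of that sampling step, using a small synthetic frame in place of `appl_all_df` (the toy data and the fixed `random_state` are assumptions for reproducibility):

```python
import numpy as np
import pandas as pd

# Stand-in for appl_all_df; the real frame has ~356k rows and ~800 columns
appl_all_df = pd.DataFrame(np.random.rand(5000, 4), columns=list('abcd'))

# Draw 1000 random rows and reset the index, as in the commented cell above
df = appl_all_df.sample(n=1000, random_state=0).reset_index(drop=True)
print(df.shape)  # (1000, 4)
```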
--- Tool: Get numerical/categorical variables (columns) from a dataframe
def get_num_df(data_df, unique_value_threshold: int):
    """
    Output: a new dataframe with the numerical-variable columns from the input dataframe.
    Input:
        data_df: original dataframe
        unique_value_threshold (int): unique-value cutoff per column.
            e.g. if we define a column with > 3 unique values as numerical,
            unique_value_threshold = 3
    """
    num_mask = data_df.apply(lambda x: x.unique().size > unique_value_threshold, axis=0)
    num_df = data_df[data_df.columns[num_mask]]
    return num_df


def get_cat_df(data_df, unique_value_threshold: int):
    """
    Output: a new dataframe with the categorical-variable columns from the input dataframe.
    Input:
        data_df: original dataframe
        unique_value_threshold (int): unique-value cutoff per column.
            e.g. if we define a column with <= 3 unique values as categorical,
            unique_value_threshold = 3
    """
    cat_mask = data_df.apply(lambda x: x.unique().size <= unique_value_threshold, axis=0)
    cat_df = data_df[data_df.columns[cat_mask]]
    return cat_df


# Be careful when doing this assertion with large datasets
# assert get_cat_df(appl_all_df, 3).columns.size + get_num_df(appl_all_df, 3).columns.size == appl_all_df.columns.size
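A quick self-contained usage sketch of these two helpers on a toy frame (the toy data is an assumption; the helper bodies are repeated here so the snippet runs on its own):

```python
import pandas as pd

def get_num_df(data_df, unique_value_threshold: int):
    # Columns with more unique values than the threshold count as numerical
    num_mask = data_df.apply(lambda x: x.unique().size > unique_value_threshold, axis=0)
    return data_df[data_df.columns[num_mask]]

def get_cat_df(data_df, unique_value_threshold: int):
    # Columns with at most `unique_value_threshold` unique values count as categorical
    cat_mask = data_df.apply(lambda x: x.unique().size <= unique_value_threshold, axis=0)
    return data_df[data_df.columns[cat_mask]]

toy = pd.DataFrame({
    'flag': [0, 1, 0, 1, 0],             # 2 unique values -> categorical
    'amount': [10.5, 3.2, 7.7, 1.1, 9.0] # 5 unique values -> numerical
})
print(list(get_cat_df(toy, 3).columns))  # ['flag']
print(list(get_num_df(toy, 3).columns))  # ['amount']
```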
--- Splitting id_target_df, cat_df, num_df
# Separate id and target columns before any further processing
id_target_df = appl_all_df.loc[:, ['SK_ID_CURR', 'TARGET']]

# Get the operating appl_all_df by removing the id and target columns
appl_all_df_opr = appl_all_df.drop(['SK_ID_CURR', 'TARGET'], axis=1)

# A quick check of their shapes
appl_all_df.shape, id_target_df.shape, appl_all_df_opr.shape

# Split the numerical- and categorical-variable columns via the tools described above.
# Max identified unique-value count of a categorical column ('ORGANIZATION_TYPE') = 58
cat_df = get_cat_df(appl_all_df_opr, 58)
num_df = get_num_df(appl_all_df_opr, 58)

cat_df.info()
num_df.info()

# A quick check of their shapes
appl_all_df_opr.shape, cat_df.shape, num_df.shape

assert cat_df.shape[1] + num_df.shape[1] + id_target_df.shape[1] \
    == appl_all_df_opr.shape[1] + id_target_df.shape[1] \
    == appl_all_df.shape[1]
assert cat_df.shape[0] == num_df.shape[0] == id_target_df.shape[0] \
    == appl_all_df_opr.shape[0] \
    == appl_all_df.shape[0]

# Apply the following gc if memory is running low
appl_all_df_opr.info()
appl_all_df.info()
del appl_all_df_opr
del appl_all_df
gc.collect()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 356255 entries, 0 to 356254
Columns: 795 entries, AMT_ANNUITY to GOODS_PRICE_PREV%
dtypes: float64(740), int64(41), object(14)
memory usage: 2.1+ GB
<class 'pandas.core.frame.DataFrame'>
Int64Index: 356255 entries, 0 to 356254
Columns: 797 entries, AMT_ANNUITY to GOODS_PRICE_PREV%
dtypes: float64(741), int64(42), object(14)
memory usage: 2.1+ GB
--- Dealing with categorical variables

Transform to String (i.e., Python object) and fill NaN with the string 'nan'
cat_df_obj = cat_df.astype(str)
assert np.all(cat_df_obj.dtypes) == object

# There are no NA left
assert all(cat_df_obj.isnull().sum()) == 0

# The float NaN will be transformed to the string 'nan'.
# Use this assertion carefully when dealing with extra-large datasets
assert cat_df.isnull().equals(cat_df_obj.isin({'nan'}))
Dealing with special columns

Replace 'nan' with 'not specified' in column 'FONDKAPREMONT_MODE'
# Do the replacement and re-assign the modified column back to the original dataframe
cat_df_obj['FONDKAPREMONT_MODE'] = cat_df_obj['FONDKAPREMONT_MODE'].replace('nan', 'not specified')

# Check the unique-value count again; it should be 1 less than in the original cat_df
assert cat_df['FONDKAPREMONT_MODE'].unique().size == cat_df_obj['FONDKAPREMONT_MODE'].unique().size + 1

# Apply the following gc if memory is running low
cat_df.info()
del cat_df
gc.collect()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 356255 entries, 0 to 356254
Columns: 250 entries, AMT_REQ_CREDIT_BUREAU_DAY to AMT_REQ_CREDIT_BUREAU_MON/QRT
dtypes: float64(198), int64(38), object(14)
memory usage: 682.2+ MB
Do one-hot encoding

Check the input dataframe (i.e., cat_df_obj)
cat_df_obj.shape
cat_df_obj.apply(lambda x: x.unique().size).sum()

# ?pd.get_dummies
# pd.get_dummies() deals only with categorical variables.
# Although it has a built-in argument 'dummy_na' to manage NA values,
# our NA values have already been converted to string objects, which the method does not recognize.
# Let's just move forward as planned
cat_df_obj_ohe = pd.get_dummies(cat_df_obj, drop_first=True)
cat_df_obj_ohe.shape

# Make sure the OHE is successful
assert np.all(np.isin(cat_df_obj_ohe.values, [0, 1])) == True
# cat_df_obj_ohe.dtypes
assert np.all(cat_df_obj_ohe.dtypes) == 'uint8'

# Make sure the column counts are correct
assert cat_df_obj.apply(lambda x: x.unique().size).sum() == cat_df_obj_ohe.shape[1] + cat_df_obj.shape[1]

cat_df_obj_ohe.info()

# Apply the following gc if memory is running low
del cat_df_obj
gc.collect()

# %timeit np.isin(cat_df_obj_ohe.values, [0, 1])
# # 1.86 s ± 133 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit cat_df_obj_ohe.isin([0, 1])
# # 3.38 s ± 32.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit np.all(np.isin(cat_df_obj_ohe.values, [0, 1]))
# # 1.85 s ± 28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit np.all(cat_df_obj_ohe.isin([0, 1]))
# # 3.47 s ± 193 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
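A tiny sketch of what `drop_first=True` does, on toy data (the toy column is an assumption): a k-level categorical column becomes k-1 indicator columns, with the dropped level encoded implicitly as all-zeros. This is why the column-count assertion above uses `ohe.shape[1] + cat_df_obj.shape[1] == sum(unique counts)`.

```python
import pandas as pd

toy = pd.DataFrame({'color': ['red', 'green', 'blue', 'green']})

# Levels are taken in sorted order (blue, green, red); drop_first drops 'blue'
ohe = pd.get_dummies(toy, drop_first=True)
print(list(ohe.columns))  # ['color_green', 'color_red']
```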
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
--- Dealing with numerical variables

Get NA flags
num_df.shape

# How many columns contain NA values?
num_df.isna().any().sum()

num_isna_df = num_df[num_df.columns[num_df.isna().any()]]
num_notna_df = num_df[num_df.columns[num_df.notna().all()]]

assert num_isna_df.shape[1] + num_notna_df.shape[1] == num_df.shape[1]
assert num_isna_df.shape[0] == num_notna_df.shape[0] == num_df.shape[0]

num_isna_df.shape, num_notna_df.shape

# num_df.isna().any(): column names of the NA-containing columns.
# Use it to transform bool values to int, then add a suffix to the column names to get the NA-flag df
num_naFlag_df = num_isna_df.isna().astype(np.uint8).add_suffix('_na')
num_naFlag_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 356255 entries, 0 to 356254
Columns: 528 entries, APARTMENTS_AVG_na to GOODS_PRICE_PREV%_na
dtypes: uint8(528)
memory usage: 182.1 MB
Replace NA with zero
num_isna_df = num_isna_df.fillna(0)
num_isna_df.shape

# How many columns still contain NA values?
num_isna_df.isna().any().sum()
num_isna_df.info()

assert num_isna_df.shape == num_naFlag_df.shape

num_df = pd.concat([num_notna_df, num_isna_df, num_naFlag_df], axis='columns')
assert num_notna_df.shape[1] + num_isna_df.shape[1] + num_naFlag_df.shape[1] == num_df.shape[1]
num_df.info(verbose=False)

# Apply the following gc if memory is running low
del num_notna_df
del num_isna_df
del num_naFlag_df
gc.collect()
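The flag-then-fill pattern above, shown end to end on a single toy column (the toy data is an assumption): record where the NAs were as a uint8 flag column first, then fill the NAs with zero, so the model can still distinguish "missing" from a genuine zero.

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'income': [100.0, np.nan, 250.0]})

# 1) Flag columns: 1 where the value was NA, 0 otherwise
flags = toy.isna().astype(np.uint8).add_suffix('_na')
# 2) Then replace the NAs with zero
filled = toy.fillna(0)

out = pd.concat([filled, flags], axis='columns')
print(out['income'].tolist())     # [100.0, 0.0, 250.0]
print(out['income_na'].tolist())  # [0, 1, 0]
```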
--- Normalization (DO LATER!!)

Generally, in tree-based models, the scale of the features does not matter.
https://scikit-learn.org/stable/modules/preprocessing.html#normalization
https://datascience.stackexchange.com/questions/22036/how-does-lightgbm-deal-with-value-scale

--- Combine to a complete, processed dataset
# pd.concat expects a plain list of frames, not an ndarray
frames = [id_target_df, cat_df_obj_ohe, num_df]
id_target_df.shape, cat_df_obj_ohe.shape, num_df.shape

appl_all_processed_df = pd.concat(frames, axis='columns')
appl_all_processed_df.shape

assert appl_all_processed_df.shape[1] == id_target_df.shape[1] + cat_df_obj_ohe.shape[1] + num_df.shape[1]
appl_all_processed_df.info()

# Apply the following gc if memory is running low
del id_target_df
del cat_df_obj_ohe
del num_df
gc.collect()
--- Export to CSV
# Export the dataframe to CSV for future use
appl_all_processed_df.to_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a.csv', index=False)

# Export the dtypes Series to CSV for future use
appl_all_processed_df.dtypes.to_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a_dtypes_series.csv')
C:\Users\Student\.conda\envs\homecdt\lib\site-packages\ipykernel_launcher.py:2: FutureWarning: The signature of `Series.to_csv` was aligned to that of `DataFrame.to_csv`, and argument 'header' will change its default value from False to True: please pass an explicit value to suppress this warning.
--- Interface connecting fteng & model parts
# Assign appl_all_processed_df to final_df for follow-up modeling
final_df = appl_all_processed_df

# Apply the following gc if memory is running low
del appl_all_processed_df
gc.collect()

# Replace every non-alphanumeric character in the column names with '_'
final_df.columns = ["".join(c if c.isalnum() else "_" for c in str(x)) for x in final_df.columns]
final_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 356255 entries, 0 to 356254
Columns: 4081 entries, SK_ID_CURR to GOODS_PRICE_PREV__na
dtypes: float64(543), int64(4), uint8(3534)
memory usage: 2.6 GB
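The column-name sanitization in the cell above maps every non-alphanumeric character to `'_'` (LightGBM rejects special JSON characters in feature names, which is most likely why it is done here). A self-contained check, using column names that appear in this dataset:

```python
def sanitize(name):
    # Replace every non-alphanumeric character with '_'
    return "".join(c if c.isalnum() else "_" for c in str(name))

print(sanitize('GOODS_PRICE_PREV%'))              # GOODS_PRICE_PREV_
print(sanitize('AMT_REQ_CREDIT_BUREAU_MON/QRT'))  # AMT_REQ_CREDIT_BUREAU_MON_QRT
```

This matches the info() output above, where `GOODS_PRICE_PREV%_na` has become `GOODS_PRICE_PREV__na`.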
--- Modeling part. If using a ready dataset, please start here
# Read the saved dtypes Series
final_df_dtypes = \
    pd.read_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a_dtypes_series.csv',
                header=None, index_col=0, squeeze=True)
del final_df_dtypes.index.name
final_df_dtypes = final_df_dtypes.to_dict()

final_df = \
    pd.read_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a.csv',
                dtype=final_df_dtypes)
final_df.columns = ["".join(c if c.isalnum() else "_" for c in str(x)) for x in final_df.columns]
final_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 356255 entries, 0 to 356254
Columns: 4081 entries, SK_ID_CURR to GOODS_PRICE_PREV__na
dtypes: float64(543), int64(4), uint8(3534)
memory usage: 2.6 GB
The following is based on 'bear_Final_model', released 2020/01/23.
# Forked from excellent kernel: https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features
# From Kaggler: https://www.kaggle.com/jsaguiar
# Just added a few features, so I thought I had to release it as well...
import numpy as np
import pandas as pd
import gc
import time
from contextlib import contextmanager
import lightgbm as lgb
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import KFold, StratifiedKFold
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import csv

lgb.__version__
print(final_df['TARGET'].isna().sum(), final_df['TARGET'].dtypes)
48744 float64
LightGBM model
@contextmanager
def timer(title):
    t0 = time.time()
    yield
    print("{} - done in {:.0f}s".format(title, time.time() - t0))


def kfold_lightgbm(df, num_folds=5, stratified=True, debug=False,
                   boosting_type='goss', epoch=20000, early_stop=200):
    # Divide into training/validation and test data
    train_df = df[df['TARGET'].notnull()]
    test_df = df[df['TARGET'].isnull()]
    print("Starting LightGBM goss. Train shape: {}, test shape: {}".format(train_df.shape, test_df.shape))
    del df
    gc.collect()

    # Cross-validation model
    if stratified:
        folds = StratifiedKFold(n_splits=num_folds, shuffle=True, random_state=924)
    else:
        folds = KFold(n_splits=num_folds, shuffle=True, random_state=924)

    # Create arrays and dataframes to store results
    oof_preds = np.zeros(train_df.shape[0])
    sub_preds = np.zeros(test_df.shape[0])
    feature_importance_df = pd.DataFrame()
    feats = [f for f in train_df.columns if f not in ['TARGET', 'SK_ID_CURR', 'SK_ID_BUREAU', 'SK_ID_PREV', 'index']]

    for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df[feats], train_df['TARGET'])):
        dtrain = lgb.Dataset(data=train_df[feats].iloc[train_idx],
                             label=train_df['TARGET'].iloc[train_idx],
                             free_raw_data=False, silent=True)
        dvalid = lgb.Dataset(data=train_df[feats].iloc[valid_idx],
                             label=train_df['TARGET'].iloc[valid_idx],
                             free_raw_data=False, silent=True)

        # LightGBM parameters found by Bayesian optimization:
        # {'learning_rate': 0.027277797382058662,
        #  'max_bin': 252.71833139557864,
        #  'max_depth': 19.94051833524931,
        #  'min_child_weight': 20.868586608046186,
        #  'min_data_in_leaf': 68.98157854879867,
        #  'min_split_gain': 0.04938251335634182,
        #  'num_leaves': 23.027556285612434,
        #  'reg_alpha': 0.9107785355990146,
        #  'reg_lambda': 0.15418005208807806,
        #  'subsample': 0.7997032951619153}
        params = {
            'objective': 'binary',
            'boosting_type': boosting_type,
            'nthread': 4,
            'learning_rate': 0.0272778,  # 02,
            'num_leaves': 23,  # 20, 33
            'tree_learner': 'voting',
            'colsample_bytree': 0.9497036,
            'subsample': 0.7997033,  # the original cell listed 'subsample' twice; the later value wins
            'subsample_freq': 0,
            'max_depth': 20,  # 8, 7
            'reg_alpha': 0.9107785,
            'reg_lambda': 0.1541800,
            'min_split_gain': 0.0493825,
            'min_data_in_leaf': 69,  # ss add
            'min_child_weight': 49,  # 60, 39
            'seed': 924,
            'verbose': 2000,
            'metric': 'auc',
            'max_bin': 253,
            # 'histogram_pool_size': 20480
            # 'device': 'gpu',
            # 'gpu_platform_id': 0,
            # 'gpu_device_id': 0
        }

        clf = lgb.train(
            params=params,
            train_set=dtrain,
            num_boost_round=epoch,
            valid_sets=[dtrain, dvalid],
            early_stopping_rounds=early_stop,
            verbose_eval=2000
        )

        oof_preds[valid_idx] = clf.predict(dvalid.data)
        sub_preds += clf.predict(test_df[feats]) / folds.n_splits

        fold_importance_df = pd.DataFrame()
        fold_importance_df["feature"] = feats
        fold_importance_df["importance"] = clf.feature_importance(importance_type='gain')
        fold_importance_df["fold"] = n_fold + 1
        feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)

        print('Fold %2d AUC : %.6f' % (n_fold + 1, roc_auc_score(dvalid.label, oof_preds[valid_idx])))
        del clf, dtrain, dvalid
        gc.collect()

    print('Full AUC score %.6f' % roc_auc_score(train_df['TARGET'], oof_preds))

    # Write submission file and plot feature importance
    if not debug:
        sub_df = test_df[['SK_ID_CURR']].copy()
        sub_df['TARGET'] = sub_preds
        sub_df[['SK_ID_CURR', 'TARGET']].to_csv('homecdt_submission_LGBM.csv', index=False)
    display_importances(feature_importance_df)
    return feature_importance_df


# Display/plot feature importance
def display_importances(feature_importance_df_):
    cols = feature_importance_df_[["feature", "importance"]].groupby("feature").mean() \
        .sort_values(by="importance", ascending=False)[:40].index
    best_features = feature_importance_df_.loc[feature_importance_df_.feature.isin(cols)]
    plt.figure(figsize=(8, 10))
    sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False))
    plt.title('LightGBM Features (avg over folds)')
    plt.tight_layout()
    plt.savefig('lgbm_importances01.png')
boosting_type:goss
init_time = time.time()
kfold_lightgbm(final_df, 10)
print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
Starting LightGBM goss. Train shape: (307511, 4081), test shape: (48744, 4081)
Training until validation scores don't improve for 200 rounds
Early stopping, best iteration is:
[1773]  training's auc: 0.860105  valid_1's auc: 0.793118
Fold  1 AUC : 0.793118
Training until validation scores don't improve for 200 rounds
[2000]  training's auc: 0.866676  valid_1's auc: 0.795427
Early stopping, best iteration is:
[2229]  training's auc: 0.872846  valid_1's auc: 0.795636
Fold  2 AUC : 0.795636
Training until validation scores don't improve for 200 rounds
[2000]  training's auc: 0.866057  valid_1's auc: 0.796907
Early stopping, best iteration is:
[2010]  training's auc: 0.866315  valid_1's auc: 0.796969
Fold  3 AUC : 0.796969
Training until validation scores don't improve for 200 rounds
Early stopping, best iteration is:
[1754]  training's auc: 0.859484  valid_1's auc: 0.798664
Fold  4 AUC : 0.798664
Training until validation scores don't improve for 200 rounds
[2000]  training's auc: 0.866282  valid_1's auc: 0.794075
Early stopping, best iteration is:
[2680]  training's auc: 0.88354  valid_1's auc: 0.794693
Fold  5 AUC : 0.794693
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
boosting_type:gbdt
# init_time = time.time()
# kfold_lightgbm(final_df, 10, boosting_type= 'gbdt')
# print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
boosting_type:dart
# init_time = time.time()
# kfold_lightgbm(final_df, 10, boosting_type= 'dart')
# print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
boosting_type:rf
# init_time = time.time()
# kfold_lightgbm(final_df, 10, boosting_type= 'rf')
# print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
XGBoost Model
from numba import cuda
cuda.select_device(0)
cuda.close()

import numpy as np
import pandas as pd
import gc
import time
from contextlib import contextmanager
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import KFold, StratifiedKFold
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import pickle

def kfold_xgb(df, num_folds, stratified = True, debug= False):
    # Divide in training/validation and test data
    train_df = df[df['TARGET'].notnull()]
    test_df = df[df['TARGET'].isnull()]
    print("Starting XGBoost. Train shape: {}, test shape: {}".format(train_df.shape, test_df.shape))
    del df
    gc.collect()

    # Cross validation model
    if stratified:
        folds = StratifiedKFold(n_splits= num_folds, shuffle=True, random_state=1054)
    else:
        folds = KFold(n_splits= num_folds, shuffle=True, random_state=1054)

    # Create arrays and dataframes to store results
    oof_preds = np.zeros(train_df.shape[0])
    sub_preds = np.zeros(test_df.shape[0])
    feature_importance_df = pd.DataFrame()
    feats = [f for f in train_df.columns if f not in ['TARGET','SK_ID_CURR','SK_ID_BUREAU','SK_ID_PREV','index']]

    for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df[feats], train_df['TARGET'])):
        #if n_fold == 0: # Remove for full K-fold run
        cuda.select_device(0)
        cuda.close()
        train_x, train_y = train_df[feats].iloc[train_idx], train_df['TARGET'].iloc[train_idx]
        valid_x, valid_y = train_df[feats].iloc[valid_idx], train_df['TARGET'].iloc[valid_idx]

        clf = XGBClassifier(learning_rate =0.01,
                            n_estimators=5000,
                            max_depth=4,
                            min_child_weight=5,
                            # tree_method='gpu_hist',
                            subsample=0.8,
                            colsample_bytree=0.8,
                            objective= 'binary:logistic',
                            nthread=4,
                            scale_pos_weight=2.5,
                            seed=28,
                            reg_lambda = 1.2)
        # clf = pickle.load(open('test.pickle','rb'))

        cuda.select_device(0)
        cuda.close()
        clf.fit(train_x, train_y,
                eval_set=[(train_x, train_y), (valid_x, valid_y)],
                eval_metric= 'auc',
                verbose= 1000,
                early_stopping_rounds= 200)

        cuda.select_device(0)
        cuda.close()
        oof_preds[valid_idx] = clf.predict_proba(valid_x)[:, 1]
        sub_preds += clf.predict_proba(test_df[feats])[:, 1]  # / folds.n_splits # - Uncomment for K-fold

        fold_importance_df = pd.DataFrame()
        fold_importance_df["feature"] = feats
        fold_importance_df["importance"] = clf.feature_importances_
        fold_importance_df["fold"] = n_fold + 1
        feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
        print('Fold %2d AUC : %.6f' % (n_fold + 1, roc_auc_score(valid_y, oof_preds[valid_idx])))
        del clf, train_x, train_y, valid_x, valid_y
        gc.collect()

        np.save("xgb_oof_preds_1", oof_preds)
        np.save("xgb_sub_preds_1", sub_preds)

    cuda.select_device(0)
    cuda.close()
    clf = pickle.load(open('test.pickle','rb'))
    # print('Full AUC score %.6f' % roc_auc_score(train_df['TARGET'], oof_preds))

    # Write submission file and plot feature importance
    if not debug:
        test_df['TARGET'] = sub_preds
        test_df[['SK_ID_CURR', 'TARGET']].to_csv('submission_XGBoost_GPU.csv', index= False)
        #display_importances(feature_importance_df)
    #return feature_importance_df

# Display/plot feature importance
def display_importances(feature_importance_df_):
    cols = feature_importance_df_[["feature", "importance"]].groupby("feature").mean().sort_values(by="importance", ascending=False)[:40].index
    best_features = feature_importance_df_.loc[feature_importance_df_.feature.isin(cols)]
    plt.figure(figsize=(8, 10))
    sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False))
    plt.title('XGBoost Features (avg over folds)')
    plt.tight_layout()
    plt.savefig('xgb_importances02.png')

init_time = time.time()
kfold_xgb(final_df, 5)
print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
--- Below not executed Balance the 'TARGET' column
appl_all_processed_df['TARGET'].value_counts()

balanceFactor = ((appl_all_processed_df['TARGET'].value_counts()[0])/(appl_all_processed_df['TARGET'].value_counts()[1])).round(0).astype(int)
balanceFactor
# appl_all_processed_df['TARGET'].value_counts()[0]
# appl_all_processed_df['TARGET'].value_counts()[1]

default_df = appl_all_processed_df[appl_all_processed_df['TARGET']==1]
default_df.shape

default_df_balanced = pd.concat(
    [default_df] * (balanceFactor - 1),
    sort=False,
    ignore_index=True
)
default_df_balanced.shape

appl_all_processed_df_balanced = pd.concat([appl_all_processed_df , default_df_balanced], sort=False, ignore_index=True)
appl_all_processed_df_balanced.shape

(appl_all_processed_df_balanced['TARGET'].unique(), \
 (appl_all_processed_df_balanced['TARGET'].value_counts()[1], \
  appl_all_processed_df_balanced['TARGET'].value_counts()[0], \
  appl_all_processed_df_balanced['TARGET'].isnull().sum()))

# Apply the following gc if memory is running slow
del appl_all_processed_df_balanced
gc.collect()
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
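The duplication-based oversampling above can be illustrated on a toy frame (hypothetical data, same `pd.concat` pattern: replicate the minority class until the two classes match):

```python
import pandas as pd

# toy frame: 6 majority rows (TARGET=0) and 2 minority rows (TARGET=1)
df = pd.DataFrame({'TARGET': [0]*6 + [1]*2, 'x': range(8)})
factor = round(df['TARGET'].value_counts()[0] / df['TARGET'].value_counts()[1])  # 3

default_df = df[df['TARGET'] == 1]
# append (factor - 1) extra copies of the minority rows
balanced = pd.concat([df] + [default_df] * (factor - 1), ignore_index=True)
print(balanced['TARGET'].value_counts())  # 0: 6, 1: 6
```

Note that exact duplication must happen *after* the train/test split, otherwise copies of the same row leak across the split.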
--- --- Todo

Todo:
* cleaning:
    * num_df: normalize with z-score
* feature engineering:
    * make reciprocal and polynomial columns from the existing columns, e.g. 1/x, x*x.
    * multiply the existing columns pairwise, two columns at a time.
    * asset items, income items, willingness (history + misc profile) items, loading (principal + interest) items
* Integration from other tables?

https://ithelp.ithome.com.tw/articles/10202059
https://stackoverflow.com/questions/26414913/normalize-columns-of-pandas-data-frame
https://www.kaggle.com/parasjindal96/how-to-normalize-dataframe-pandas

--- EDA Quick check for numerical columns
numcol = df['CNT_FAM_MEMBERS']

numcol.describe(), \
numcol.isnull().sum(), \
numcol.size

numcol.value_counts(sort=True), numcol.unique().size

# numcol_toYear = pd.to_numeric(\
#     ((numcol.abs() / 365) \
#     .round(0)) \
#     ,downcast='integer')
# numcol_toYear.describe()
# numcol_toYear.value_counts(sort=True), numcol_toYear.unique().size
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
Quick check for categorical columns
catcol = df['HOUR_APPR_PROCESS_START']

catcol.unique(), \
catcol.unique().size

catcol.value_counts(sort=True)

catcol.isnull().sum(), \
catcol.size
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
Appendix Tool: Getting summary dataframe
# might not be very useful at this point
def summary_df (data_df):
    """
    Output: a new dataframe with summary info from the input dataframe.
    Input: data_df, the original dataframe
    """
    summary_df = pd.concat([(data_df.describe(include='all')), \
                            (data_df.dtypes.to_frame(name='dtypes').T), \
                            (data_df.isnull().sum().to_frame(name='isnull').T), \
                            (data_df.apply(lambda x:x.unique().size).to_frame(name='uniqAll').T)])
    return summary_df

def data_quality_df (data_df):
    """
    Output: a new dataframe with summary info from the input dataframe.
    Input: data_df, the original dataframe
    """
    data_quality_df = pd.concat([(data_df.describe(include='all')), \
                                 (data_df.dtypes.to_frame(name='dtypes').T), \
                                 (data_df.isnull().sum().to_frame(name='isnull').T), \
                                 (data_df.apply(lambda x:x.unique().size).to_frame(name='uniqAll').T)])
    return data_quality_df.iloc[[11,13,12,0,],:]

data_quality_df(appl_all_df)

# df.to_csv(file_name, encoding='utf-8', index=False)
# data_quality_df(df).to_csv("./eda_output/application_train_data_quality.csv")

df['CNT_CHILDREN'].value_counts()
df['CNT_CHILDREN'].value_counts().sum()

df.describe()

summary_df(df)

# df.to_csv(file_name, encoding='utf-8', index=False)
# summary_df(df).to_csv("./eda_output/application_train_summary_df.csv")
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
--- .nunique() function
# nunique() function excludes NaN
# i.e. it does not consider NaN as a "value", therefore NaN is not counted as a "unique value"
df.nunique()

df.nunique() == df.apply(lambda x:x.unique().shape[0])

df['AMT_REQ_CREDIT_BUREAU_YEAR'].unique().shape[0]
df['AMT_REQ_CREDIT_BUREAU_YEAR'].nunique()
df['AMT_REQ_CREDIT_BUREAU_YEAR'].unique().size
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
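A self-contained illustration of the point above (toy Series, not the application data): `nunique()` drops NaN, while `unique()` keeps it in the returned array.

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 2.0, np.nan])
print(s.nunique())      # 2 -- NaN is not counted as a unique value
print(s.unique().size)  # 3 -- .unique() keeps NaN in the array
```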
.value_counts() function
# .value_counts() function has similar viewpoint towards NaN.
# i.e. it does not consider null as a value, therefore not counted in .value_counts()
df['NAME_TYPE_SUITE'].value_counts()

df['AMT_REQ_CREDIT_BUREAU_YEAR'].isnull().sum()
df['AMT_REQ_CREDIT_BUREAU_YEAR'].size

df['AMT_REQ_CREDIT_BUREAU_YEAR'].value_counts().sum() + df['AMT_REQ_CREDIT_BUREAU_YEAR'].isnull().sum() == \
df['AMT_REQ_CREDIT_BUREAU_YEAR'].size
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
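The same identity can be verified on a toy Series (hypothetical data); `dropna=False` is the switch that makes `value_counts()` count NaN as well:

```python
import numpy as np
import pandas as pd

s = pd.Series(['a', 'a', 'b', np.nan])
print(s.value_counts())              # a: 2, b: 1 -- NaN excluded by default
print(s.value_counts(dropna=False))  # adds a NaN row with count 1

# counts of non-null values plus null count recover the full length
assert s.value_counts().sum() + s.isnull().sum() == s.size
```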
Duplicate values
# Counting unique values (cf. .nunique() function, see above section)
# This code was retrieved from HT
df.apply(lambda x:x.unique().shape[0])

# It is the same if you write (df.apply(lambda x:x.unique().size))
assert (df.apply(lambda x:x.unique().shape[0])==df.apply(lambda x:x.unique().size)).all()

# # %timeit showed the performances are similar
# %timeit df.apply(lambda x:x.unique().shape[0])
# %timeit df.apply(lambda x:x.unique().size)
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
Null values
# Proportion of columns that contain null values
print(f"{df.isnull().any().sum()} in {df.shape[1]} columns (ratio: {(df.isnull().any().sum()/df.shape[1]).round(2)}) have empty value(s)")
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
--- re-casting to reduce memory use (beta)
# np.isfinite(num_df).all().value_counts()

# num_df_finite = num_df[num_df.columns[np.isfinite(num_df).all()]]
# num_df_infinite = num_df[num_df.columns[np.isfinite(num_df).all() == False]]

# num_df_finite.shape, num_df_infinite.shape
# assert num_df_finite.shape[0] == num_df_infinite.shape[0] == num_df.shape[0]
# assert num_df_finite.shape[1] + num_df_infinite.shape[1] == num_df.shape[1]

# def reduce_mem_usage(props, finite:bool = True):
#     props.info(verbose=False)
#     start_mem_usg = props.memory_usage().sum() / 1024**2
#     print("Memory usage of properties dataframe is :",start_mem_usg," MB")
#     if finite == True:
#         props[props.columns[(props.min()>=0) & (props.max()<255)]] = \
#             props[props.columns[(props.min()>=0) & (props.max()<255)]].astype(np.uint8, copy=False)
#         props.info(verbose=False)
#         props[props.columns[(props.min()>=0) &(props.max() >= 255) & (props.max()<65535)]] = \
#             props[props.columns[(props.min()>=0) &(props.max() >= 255) & (props.max()<65535)]] \
#             .astype(np.uint16, copy=False)
#         props.info(verbose=False)
#         props[props.columns[(props.min()>=0) &(props.max() >= 65535) & (props.max()<4294967295)]] = \
#             props[props.columns[(props.min()>=0) &(props.max() >= 65535) & (props.max()<4294967295)]] \
#             .astype(np.uint32, copy=False)
#         props.info(verbose=False)
#         props[props.columns[(props.min()>=0) &(props.max() >= 4294967295)]] = \
#             props[props.columns[(props.min()>=0) &(props.max() >= 4294967295)]] \
#             .astype(np.uint64, copy=False)
#         props.info(verbose=False)
#     else:
#         props = props.astype(np.float32, copy=False)
#         props.info(verbose=False)
#     print("___MEMORY USAGE AFTER COMPLETION:___")
#     mem_usg = props.memory_usage().sum() / 1024**2
#     print("Memory usage is: ",mem_usg," MB")
#     print("This is ",100*mem_usg/start_mem_usg,"% of the initial size")
#     return props

# if num_na_df_finite.min()>=0:
#     if num_na_df_finite.max() < 255:
#         props[col] = props[col].astype(np.uint8)
#     elif num_na_df_finite.max() < 65535:
#         props[col] = props[col].astype(np.uint16)
#     elif num_na_df_finite.max() < 4294967295:
#         props[col] = props[col].astype(np.uint32)
#     else:
#         props[col] = props[col].astype(np.uint64)

# num_df_finite.info()
# num_df_finite = reduce_mem_usage(num_df_finite, finite = True)

# num_df_infinite.info()
# num_df_infinite = reduce_mem_usage(num_df_infinite, finite = False)

# num_df = pd.concat([num_df_finite, num_df_infinite], axis ='columns')
# num_df.info()

# assert num_df_finite.shape[0] == num_df_infinite.shape[0] == num_df.shape[0]
# assert num_df_finite.shape[1] + num_df_infinite.shape[1] == num_df.shape[1]

# del num_df_finite
# del num_df_infinite
# gc.collect()
_____no_output_____
MIT
notebooks/homecdt_model/.ipynb_checkpoints/ss_fteng_model_fromBDSE12_03G_HomeCredit_V2_20200204b-checkpoint.ipynb
ss9202150/Project_1
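A minimal working sketch of the same downcasting idea, using `pd.to_numeric(..., downcast=...)` instead of the manual dtype checks above (hypothetical toy frame, not the application data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ints': np.arange(10000, dtype=np.int64),
                   'floats': np.random.rand(10000)})  # float64 by default
before = df.memory_usage().sum()

# values 0..9999 fit in uint16; pandas picks the smallest unsigned dtype
df['ints'] = pd.to_numeric(df['ints'], downcast='unsigned')
df['floats'] = df['floats'].astype(np.float32)
after = df.memory_usage().sum()
print(before, after)  # memory roughly cut to about a third
```

The same two-line pattern replaces the four explicit `uint8/uint16/uint32/uint64` branches in the commented-out helper.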
Batch Prediction

1. Download demo data

```
cd PhaseNet
wget https://github.com/wayneweiqiang/PhaseNet/releases/download/test_data/test_data.zip
unzip test_data.zip
```

2. Run batch prediction

PhaseNet currently supports several data formats: numpy, hdf5, mseed, and sac.

- For numpy format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/npz.csv --data_dir=test_data/npz --format=numpy --plot_figure
~~~
- For hdf5 format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --hdf5_file=test_data/data.h5 --hdf5_group=data --format=hdf5
~~~
- For mseed format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/mseed.csv --data_dir=test_data/mseed --format=mseed
~~~
- For sac format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/sac.csv --data_dir=test_data/sac --format=sac
~~~
- For mseed file of an array of stations (used by [QuakeFlow](https://github.com/wayneweiqiang/QuakeFlow)):
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/mseed_array.csv --data_dir=test_data/mseed_array --stations=test_data/stations.csv --format=mseed_array --amplitude
~~~

Optional arguments:
```
usage: predict.py [-h] [--batch_size BATCH_SIZE] [--model_dir MODEL_DIR]
                  [--data_dir DATA_DIR] [--data_list DATA_LIST]
                  [--hdf5_file HDF5_FILE] [--hdf5_group HDF5_GROUP]
                  [--result_dir RESULT_DIR] [--result_fname RESULT_FNAME]
                  [--min_p_prob MIN_P_PROB] [--min_s_prob MIN_S_PROB]
                  [--mpd MPD] [--amplitude] [--format FORMAT]
                  [--s3_url S3_URL] [--stations STATIONS] [--plot_figure]
                  [--save_prob]

optional arguments:
  -h, --help            show this help message and exit
  --batch_size BATCH_SIZE
                        batch size
  --model_dir MODEL_DIR
                        Checkpoint directory (default: None)
  --data_dir DATA_DIR   Input file directory
  --data_list DATA_LIST
                        Input csv file
  --hdf5_file HDF5_FILE
                        Input hdf5 file
  --hdf5_group HDF5_GROUP
                        data group name in hdf5 file
  --result_dir RESULT_DIR
                        Output directory
  --result_fname RESULT_FNAME
                        Output file
  --min_p_prob MIN_P_PROB
                        Probability threshold for P pick
  --min_s_prob MIN_S_PROB
                        Probability threshold for S pick
  --mpd MPD             Minimum peak distance
  --amplitude           if return amplitude value
  --format FORMAT       input format
  --s3_url S3_URL       s3 url
  --stations STATIONS   seismic station info
  --plot_figure         If plot figure for test
  --save_prob           If save result for test
```

3. Read P/S picks

PhaseNet currently outputs two formats: **CSV** and **JSON**
import pandas as pd
import json
import os

PROJECT_ROOT = os.path.realpath(os.path.join(os.path.abspath(''), ".."))

picks_csv = pd.read_csv(os.path.join(PROJECT_ROOT, "results/picks.csv"), sep="\t")
picks_csv.loc[:, 'p_idx'] = picks_csv["p_idx"].apply(lambda x: x.strip("[]").split(","))
picks_csv.loc[:, 'p_prob'] = picks_csv["p_prob"].apply(lambda x: x.strip("[]").split(","))
picks_csv.loc[:, 's_idx'] = picks_csv["s_idx"].apply(lambda x: x.strip("[]").split(","))
picks_csv.loc[:, 's_prob'] = picks_csv["s_prob"].apply(lambda x: x.strip("[]").split(","))
print(picks_csv.iloc[1])
print(picks_csv.iloc[0])

with open(os.path.join(PROJECT_ROOT, "results/picks.json")) as fp:
    picks_json = json.load(fp)
print(picks_json[1])
print(picks_json[0])
{'id': 'NC.MCV..EH.0361339.npz', 'timestamp': '1970-01-01T00:01:30.150', 'prob': 0.9811667799949646, 'type': 'p'} {'id': 'NC.MCV..EH.0361339.npz', 'timestamp': '1970-01-01T00:00:59.990', 'prob': 0.9872905611991882, 'type': 'p'}
MIT
docs/example_batch_prediction.ipynb
javak87/phasenet_chile-subduction-zone
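As a quick follow-up, the JSON picks can be loaded into a DataFrame and their timestamps parsed for sorting or filtering. This sketch reuses the two picks printed above as sample data:

```python
import pandas as pd

# two picks copied from the JSON output above
picks = [
    {'id': 'NC.MCV..EH.0361339.npz', 'timestamp': '1970-01-01T00:01:30.150',
     'prob': 0.9811667799949646, 'type': 'p'},
    {'id': 'NC.MCV..EH.0361339.npz', 'timestamp': '1970-01-01T00:00:59.990',
     'prob': 0.9872905611991882, 'type': 'p'},
]
df = pd.DataFrame(picks)
df['timestamp'] = pd.to_datetime(df['timestamp'])  # ISO strings -> datetime64
print(df.sort_values('timestamp'))
```

With timestamps as `datetime64`, time-window filters become simple boolean masks (e.g. `df[df['timestamp'] < some_cutoff]`).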
Multithreading and MultiprocessingRecall the phrase "many hands make light work". This is as true in programming as anywhere else.What if you could engineer your Python program to do four things at once? What would normally take an hour could (almost) take one fourth the time.\*This is the idea behind parallel processing, or the ability to set up and run multiple tasks concurrently.\* *We say almost, because you do have to take time setting up four processors, and it may take time to pass information between them.* Threading vs. ProcessingA good illustration of threading vs. processing would be to download an image file and turn it into a thumbnail.The first part, communicating with an outside source to download a file, involves a thread. Once the file is obtained, the work of converting it involves a process. Essentially, two factors determine how long this will take; the input/output speed of the network communication, or I/O, and the available processor, or CPU. I/O-intensive processes improved with multithreading:* webscraping* reading and writing to files* sharing data between programs* network communications CPU-intensive processes improved with multiprocessing:* computations* text formatting* image rescaling* data analysis Multithreading Example: WebscrapingHistorically, the programming knowledge required to set up multithreading was beyond the scope of this course, as it involved a good understanding of Python's Global Interpreter Lock (the GIL prevents multiple threads from running the same Python code at once). Also, you had to set up special classes that behave like Producers to divvy up the work, Consumers (aka "workers") to perform the work, and a Queue to hold tasks and provide communcations. And that was just the beginning.Fortunately, we've already learned one of the most valuable tools we'll need – the `map()` function. 
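A minimal modern sketch of the two ideas side by side (an aside using Python's `concurrent.futures`, which the lesson itself does not cover): a thread pool for the I/O-bound side, where workers mostly wait, and a process pool for the CPU-bound side, where workers compute. The `fake_download` and `crunch` functions are hypothetical stand-ins.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def fake_download(url):
    """I/O-bound stand-in: the thread mostly waits, so threads overlap well."""
    time.sleep(0.1)
    return f'fetched {url}'

def crunch(n):
    """CPU-bound stand-in: pure computation, so separate processes help."""
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    urls = ['http://www.python.org', 'http://www.google.com']
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(fake_download, urls)))
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(crunch, [10**5, 10**5])))
```

The classic setup below shows how much more machinery this used to take.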
When we apply it using two standard libraries, *multiprocessing* and *multiprocessing.dummy*, setting up parallel processes and threads becomes fairly straightforward.

Here's a classic multithreading example provided by [IBM](http://www.ibm.com/developerworks/aix/library/au-threadingpython/) and adapted by [Chris Kiehl](http://chriskiehl.com/article/parallelism-in-one-line/) where you divide the task of retrieving web pages across multiple threads:

    import time
    import threading
    import Queue
    import urllib2

    class Consumer(threading.Thread):
        def __init__(self, queue):
            threading.Thread.__init__(self)
            self._queue = queue

        def run(self):
            while True:
                content = self._queue.get()
                if isinstance(content, str) and content == 'quit':
                    break
                response = urllib2.urlopen(content)
            print 'Thanks!'

    def Producer():
        urls = [
            'http://www.python.org', 'http://www.yahoo.com',
            'http://www.scala.org', 'http://www.google.com'
            # etc..
        ]
        queue = Queue.Queue()
        worker_threads = build_worker_pool(queue, 4)
        start_time = time.time()

        # Add the urls to process
        for url in urls:
            queue.put(url)
        # Add the poison pill
        for worker in worker_threads:
            queue.put('quit')
        for worker in worker_threads:
            worker.join()

        print 'Done! Time taken: {}'.format(time.time() - start_time)

    def build_worker_pool(queue, size):
        workers = []
        for _ in range(size):
            worker = Consumer(queue)
            worker.start()
            workers.append(worker)
        return workers

    if __name__ == '__main__':
        Producer()
Multiprocessing Example: Monte CarloLet's code out an example to see how the parts fit together. We can time our results using the *timeit* module to measure any performance gains. Our task is to apply the Monte Carlo Method to estimate the value of Pi. Monte Carle Method and Estimating PiIf you draw a circle of radius 1 (a unit circle) and enclose it in a square, the areas of the two shapes are given as Area Formulas circle$$πr^2$$ square$$4 r^2$$Therefore, the ratio of the volume of the circle to the volume of the square is $$\frac{π}{4}$$The Monte Carlo Method plots a series of random points inside the square. By comparing the number that fall within the circle to those that fall outside, with a large enough sample we should have a good approximation of Pi. You can see a good demonstration of this [here](https://academo.org/demos/estimating-pi-monte-carlo/) (Hit the **Animate** button on the page).For a given number of points *n*, we have $$π = \frac{4 \cdot points\ inside\ circle}{total\ points\ n}$$To set up our multiprocessing program, we first derive a function for finding Pi that we can pass to `map()`:
from random import random  # perform this import outside the function

def find_pi(n):
    """
    Function to estimate the value of Pi
    """
    inside = 0

    for i in range(0,n):
        x = random()
        y = random()
        if (x*x+y*y)**(0.5)<=1:  # if i falls inside the circle
            inside += 1

    pi = 4*inside/n
    return pi
_____no_output_____
MIT
22-Parallel Processing/01-Multithreading and Multiprocessing.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
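As an aside (a sketch, not part of the lesson), the same estimator can be vectorized with numpy, which replaces the Python-level loop with array operations:

```python
import numpy as np

def find_pi_vec(n):
    """Vectorized Monte Carlo estimate of Pi using numpy."""
    xy = np.random.rand(n, 2)              # n random points in the unit square
    inside = (xy ** 2).sum(axis=1) <= 1.0  # which points fall inside the circle
    return 4 * inside.mean()

print(find_pi_vec(5000))
```

This trades memory for speed; multiprocessing, covered next, attacks the same problem by splitting the loop across CPU cores instead.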
Let's test `find_pi` on 5,000 points:
find_pi(5000)
_____no_output_____
MIT
22-Parallel Processing/01-Multithreading and Multiprocessing.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
This ran very quickly, but the results are not very accurate!Next we'll write a script that sets up a pool of workers, and lets us time the results against varying sized pools. We'll set up two arguments to represent *processes* and *total_iterations*. Inside the script, we'll break *total_iterations* down into the number of iterations passed to each process, by making a processes-sized list.For example: total_iterations = 1000 processes = 5 iterations = [total_iterations//processes]*processes iterations Output: [200, 200, 200, 200, 200] This list will be passed to our `map()` function along with `find_pi()`
%%writefile test.py
from random import random
from multiprocessing import Pool
import timeit

def find_pi(n):
    """
    Function to estimate the value of Pi
    """
    inside = 0

    for i in range(0,n):
        x = random()
        y = random()
        if (x*x+y*y)**(0.5)<=1:  # if i falls inside the circle
            inside += 1

    pi = 4*inside/n
    return pi

if __name__ == '__main__':
    N = 10**5  # total iterations
    P = 5      # number of processes
    p = Pool(P)
    print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10))
    p.close()
    p.join()
    print(f'{N} total iterations with {P} processes')

! python test.py
3.1466800 3.1364400 3.1470400 3.1370400 3.1256400 3.1398400 3.1395200 3.1363600 3.1437200 3.1334400 0.2370227286270967 100000 total iterations with 5 processes
MIT
22-Parallel Processing/01-Multithreading and Multiprocessing.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
Great! The above test took under a second on our computer.Now that we know our script works, let's increase the number of iterations, and compare two different pools. Sit back, this may take awhile!
%%writefile test.py
from random import random
from multiprocessing import Pool
import timeit

def find_pi(n):
    """
    Function to estimate the value of Pi
    """
    inside = 0

    for i in range(0,n):
        x = random()
        y = random()
        if (x*x+y*y)**(0.5)<=1:  # if i falls inside the circle
            inside += 1

    pi = 4*inside/n
    return pi

if __name__ == '__main__':
    N = 10**7  # total iterations

    P = 1  # number of processes
    p = Pool(P)
    print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10))
    p.close()
    p.join()
    print(f'{N} total iterations with {P} processes')

    P = 5  # number of processes
    p = Pool(P)
    print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10))
    p.close()
    p.join()
    print(f'{N} total iterations with {P} processes')

! python test.py
3.1420964 3.1417412 3.1411108 3.1408184 3.1414204 3.1417656 3.1408324 3.1418828 3.1420492 3.1412804 36.03526345242264 10000000 total iterations with 1 processes 3.1424524 3.1418376 3.1415292 3.1410344 3.1422376 3.1418736 3.1420540 3.1411452 3.1421652 3.1410672 17.300921846344366 10000000 total iterations with 5 processes
MIT
22-Parallel Processing/01-Multithreading and Multiprocessing.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
Hopefully you saw that with 5 processes our script ran faster! More is Better ...to a point.The gain in speed as you add more parallel processes tends to flatten out at some point. In any collection of tasks, there are going to be one or two that take longer than average, and no amount of added processing can speed them up. This is best described in [Amdahl's Law](https://en.wikipedia.org/wiki/Amdahl%27s_law). Advanced ScriptIn the example below, we'll add a context manager to shrink these three lines p = Pool(P) ... p.close() p.join() to one line: with Pool(P) as p: And we'll accept command line arguments using the *sys* module.
%%writefile test2.py
from random import random
from multiprocessing import Pool
import timeit
import sys

N = int(sys.argv[1])  # these arguments are passed in from the command line
P = int(sys.argv[2])

def find_pi(n):
    """
    Function to estimate the value of Pi
    """
    inside = 0

    for i in range(0,n):
        x = random()
        y = random()
        if (x*x+y*y)**(0.5)<=1:  # if i falls inside the circle
            inside += 1

    pi = 4*inside/n
    return pi

if __name__ == '__main__':
    with Pool(P) as p:
        print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.5f}'), number=10))
    print(f'{N} total iterations with {P} processes')

! python test2.py 10000000 500
3.14121 3.14145 3.14178 3.14194 3.14109 3.14201 3.14243 3.14150 3.14203 3.14116 16.871822701405073 10000000 total iterations with 500 processes
MIT
22-Parallel Processing/01-Multithreading and Multiprocessing.ipynb
Pankaj-Ra/Complete-Python3-Bootcamp-master
Define a function to integrate
import numpy as np  # needed by the integration routines below

def func(x):
    a = 1.01
    b = -3.04
    c = 2.07
    return a*x**2 + b*x + c
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Define its integral so we know the right answer
def func_integral(x):
    a = 1.01
    b = -3.04
    c = 2.07
    return (a*x**3)/3. + (b*x**2)/2. + c*x
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Define core of trapezoid method
def trapezoid_core(f,x,h):
    # average of the function at the two interval endpoints, times the width
    return 0.5*h*(f(x+h)+f(x))
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Define the wrapper function to perform the trapezoid method
def trapezoid_method(f,a,b,N):
    #f == function to integrate
    #a == lower limit of integration
    #b == upper limit of integration
    #N == number of intervals to use

    #define x values to perform the trapezoid rule
    x = np.linspace(a,b,N)
    h = x[1]-x[0]

    #define the value of the integral
    Fint = 0.0

    #perform the integral using the trapezoid method
    for i in range(0,len(x)-1,1):
        Fint += trapezoid_core(f,x[i],h)

    #return the answer
    return Fint
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
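As a quick cross-check (a sketch, not part of the course code), the composite trapezoid sum over the same grid can be written as a single vectorized numpy expression; for this quadratic it should land close to the analytic answer of about 0.8867:

```python
import numpy as np

f = lambda x: 1.01*x**2 - 3.04*x + 2.07
x = np.linspace(0, 1, 10)
y = f(x)
h = x[1] - x[0]

# sum of trapezoid areas over each interval: h * (y_i + y_{i+1}) / 2
est = h * (y[:-1] + y[1:]).sum() / 2
print(est)
```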
Define the core of simpson's method
def simpsons_core(f,x,h):
    return h*(f(x) + 4*f(x+h) + f(x+2*h))/3
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Define a wrapper for simpson's method
def simpsons_method(f,a,b,N):
    #f == function to integrate
    #a == lower limit of integration
    #b == upper limit of integration
    #N == number of intervals to use

    x = np.linspace(a,b,N)
    h = x[1]-x[0]

    #define the value of the integral
    Fint = 0.0

    #perform the integral using simpson's method
    for i in range(0,len(x)-2,2):
        Fint += simpsons_core(f,x[i],h)

    #apply simpson's rule over the last interval if N is even
    if((N%2)==0):
        Fint += simpsons_core(f,x[-2],0.5*h)

    #return the answer
    return Fint
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
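A property worth knowing about the rule used in `simpsons_core`: a single Simpson panel is exact for polynomials up to degree three. A minimal sketch (the `simpson_panel` helper is illustrative, not part of the course code):

```python
def simpson_panel(f, a, b):
    """Single-panel Simpson's rule on [a, b]."""
    h = (b - a) / 2
    return h * (f(a) + 4 * f(a + h) + f(b)) / 3

# Simpson's rule is exact for cubics: the integral of x^3 on [0, 2] is 4
print(simpson_panel(lambda x: x**3, 0.0, 2.0))  # 4.0
```

This is why Simpson's method converges much faster than the trapezoid method on smooth functions.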
Define Romberg core
def romberg_core(f,a,b,i):
    #we need the difference between a and b
    h = b-a

    #interval between function evaluations at refinement level i
    dh = h/2.**(i)

    #we need the cofactor
    K = h/2.**(i+1)

    #and the function evaluations
    M = 0.0
    for j in range(2**i):
        M += f(a + 0.5*dh + j*dh)

    #return the answer
    return K*M
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Define a wrapper function
def romberg_integration(f,a,b,tol):

    #define an iteration variable
    i = 0

    #define a max number of iterations
    imax = 1000

    #define an error estimate
    delta = 100.0*np.fabs(tol)

    #set an array of integral answers
    I = np.zeros(imax,dtype=float)

    #get the zeroth romberg iteration first
    I[0] = 0.5*(b-a)*(f(a) + f(b))

    #iterate by 1
    i += 1

    #iterate until we reach tolerance
    while(delta>tol):

        #find the romberg integration
        I[i] = 0.5*I[i-1] + romberg_core(f,a,b,i)

        #compute a fractional error estimate
        delta = np.fabs((I[i]-I[i-1])/I[i])
        print(i,":",I[i],I[i-1],delta)

        if(delta>tol):
            #iterate
            i += 1

            #if we've reached maximum iterations
            if(i>imax):
                print("Max iterations reached")
                raise StopIteration("Stopping iterations after ",i)

    #return the answer
    return I[i]
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Check the integrals
Answer = func_integral(1) - func_integral(0)
print(Answer)

print("Trapezoidal method")
print(trapezoid_method(func,0,1,10))

print("Simpson's method")
print(simpsons_method(func,0,1,10))

print("Romberg")
tolerance = 1.0e-4
RI = romberg_integration(func,0,1,tolerance)
print(RI, (RI-Answer)/Answer, tolerance)
_____no_output_____
MIT
Integration.ipynb
QuinnPaddock/UCSC-ASTR-119
Tuning an estimator

[José C. García Alanis (he/him)](https://github.com/JoseAlanis)

Research Fellow - Child and Adolescent Psychology at [Uni Marburg](https://www.uni-marburg.de/de)

Member - [RTG 2271 | Breaking Expectations](https://www.uni-marburg.de/en/fb04/rtg-2271), [Brainhack](https://brainhack.org/)

@JoiAlhaniz

Aim(s) of this section

It's very important to learn when and where it's appropriate to "tweak" your model. Since we have done all of the previous analysis in our training data, it's fine to try out different models. But we absolutely cannot "test" them on our *left-out data*. If we do, we are in great danger of overfitting.

It is not uncommon to try other models or tweak hyperparameters. In this case, due to our relatively small sample size, we are probably not powered sufficiently to do so, and we would once again risk overfitting. However, for the sake of demonstration, we will do some tweaking. We will try a few different examples:

- normalizing our target data
- tweaking our hyperparameters
- trying a more complicated model
- feature selection

Prepare data for model

Let's bring back our example data set
import numpy as np
import pandas as pd

# get the data set
data = np.load('MAIN2019_BASC064_subsamp_features.npz')['a']

# get the labels
info = pd.read_csv('participants.csv')

print('There are %s samples and %s features' % (data.shape[0], data.shape[1]))
There are 155 samples and 2016 features
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
We'll set `Age` as the target, i.e., we'll look at the data from a `regression` perspective
# set age as target
Y_con = info['Age']
Y_con.describe()
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Model specification

Now let's bring back the model specifications we used last time
from sklearn.model_selection import train_test_split

# split the data
X_train, X_test, y_train, y_test = train_test_split(data, Y_con, random_state=0)

# use `AgeGroup` for stratification
age_class2 = info.loc[y_train.index, 'AgeGroup']
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Normalize the target data
import matplotlib.pyplot as plt
import seaborn as sns

# plot the original target data
sns.displot(y_train, label='train')
plt.legend()

# create a log transformer function and log transform Y (age)
from sklearn.preprocessing import FunctionTransformer

log_transformer = FunctionTransformer(func=np.log, validate=True)
log_transformer.fit(y_train.values.reshape(-1, 1))
y_train_log = log_transformer.transform(y_train.values.reshape(-1, 1))[:, 0]
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Now let's plot the transformed data
import matplotlib.pyplot as plt
import seaborn as sns

sns.displot(y_train_log, label='train log')
plt.legend()
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
and go on with fitting the model to the log-transformed data
# split the data
X_train2, X_test, y_train2, y_test = train_test_split(
    X_train,              # x
    y_train,              # y
    test_size=0.25,       # 75%/25% split
    shuffle=True,         # shuffle dataset before splitting
    stratify=age_class2,  # keep distribution of age class consistent
                          # betw. train & test sets
    random_state=0        # same shuffle each time
)

from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import r2_score, mean_absolute_error

# re-initialize the model
lin_svr = SVR(kernel='linear')

# predict
y_pred = cross_val_predict(lin_svr, X_train, y_train_log, cv=10)

# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log, y_pred)

# check the accuracy
print('R2:', acc)
print('MAE:', mae)

# plot the relationship
sns.regplot(x=y_pred, y=y_train_log, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Alright, that seems like a definite improvement, right? We might agree on that. But we can't forget about interpretability: the MAE is much less interpretable now. Do you know why?

Tweak the hyperparameters

Many machine learning algorithms have hyperparameters that can be "tuned" to optimize model fitting. Careful parameter tuning can really improve a model, but haphazard tuning will often lead to overfitting. Our SVR model has multiple hyperparameters. Let's explore some approaches for tuning them.

For 1000 points: what is a hyperparameter?
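On interpretability: the MAE is now in log-age units rather than years. A hedged sketch (on toy numbers, not the notebook's data) of how one could back-transform with `np.exp`, the inverse of the log transform, to score in years again:

```python
import numpy as np

# toy stand-ins for the notebook's log-space targets and predictions
y_log_true = np.log(np.array([20., 25., 30., 60.]))
y_log_pred = np.log(np.array([22., 24., 33., 55.]))

# MAE in log units: hard to read as "years"
mae_log = np.mean(np.abs(y_log_true - y_log_pred))

# back-transform first, then score on the original (years) scale
mae_years = np.mean(np.abs(np.exp(y_log_true) - np.exp(y_log_pred)))

print(mae_log, mae_years)  # only the second number is in years
```

The same idea applies to the cross-validated predictions above: exponentiate `y_pred` and `y_train_log` before computing the error if you want a number in years.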
SVR?
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Now, how do we know what parameter tuning does?

One way is to plot a **Validation Curve**, which lets us view changes in the training and validation accuracy of a model as we shift its hyperparameters. We can do this easily with sklearn. We'll fit the same model, but with a range of different values for `C`.

The `C` parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of `C`, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly. Conversely, a very small value of `C` will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. For very tiny values of `C`, you should get misclassified examples, often even if your training data is linearly separable.
from sklearn.model_selection import validation_curve

C_range = 10. ** np.arange(-3, 7)

train_scores, valid_scores = validation_curve(lin_svr, X_train, y_train_log,
                                              param_name="C",
                                              param_range=C_range,
                                              cv=10,
                                              scoring='neg_mean_squared_error')

# a bit of pandas magic to prepare the data for a seaborn plot
tScores = pd.DataFrame(train_scores).stack().reset_index()
tScores.columns = ['C', 'Fold', 'Score']
tScores.loc[:, 'Type'] = ['Train' for x in range(len(tScores))]

vScores = pd.DataFrame(valid_scores).stack().reset_index()
vScores.columns = ['C', 'Fold', 'Score']
vScores.loc[:, 'Type'] = ['Validate' for x in range(len(vScores))]

ValCurves = pd.concat([tScores, vScores]).reset_index(drop=True)
ValCurves.head()

# and plot the results
g = sns.catplot(x='C', y='Score', hue='Type', data=ValCurves, kind='point')
plt.xticks(range(10))
g.set_xticklabels(C_range, rotation=90)
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
It looks like accuracy is better for higher values of `C`, and plateaus somewhere between 0.1 and 1. The default setting is `C=1`, so it looks like we can't really improve much by changing `C`.

But our SVR model actually has two hyperparameters, `C` and `epsilon`. Perhaps there is an optimal combination of settings for these two parameters. We can explore that somewhat quickly with a `grid search`, which is once again easily achieved with `sklearn`. Because we are fitting the model multiple times with cross-validation, this will take some time ...

Let's tune some hyperparameters
from sklearn.model_selection import GridSearchCV

C_range = 10. ** np.arange(-3, 8)
epsilon_range = 10. ** np.arange(-3, 8)

param_grid = dict(epsilon=epsilon_range, C=C_range)

grid = GridSearchCV(lin_svr, param_grid=param_grid, cv=10)
grid.fit(X_train, y_train_log)
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Now that the grid search has completed, let's find out what was the "best" parameter combination
print(grid.best_params_)
{'C': 0.01, 'epsilon': 0.01}
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
And what if we redo our cross-validation with this parameter set?
y_pred = cross_val_predict(SVR(kernel='linear',
                               C=grid.best_params_['C'],
                               epsilon=grid.best_params_['epsilon'],
                               gamma='auto'),
                           X_train, y_train_log, cv=10)

# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log, y_pred)

# print model performance
print('R2:', acc)
print('MAE:', mae)

# and plot the results
sns.regplot(x=y_pred, y=y_train_log, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
_____no_output_____
BSD-3-Clause
lecture/ML_tuning_biases.ipynb
JoseAlanis/ML-DL_workshop_SynAGE
Homework 2: classification

Data source: http://archive.ics.uci.edu/ml/datasets/Polish+companies+bankruptcy+data

**Description:** The goal of this HW is to get familiar with the basic classifiers in PML Ch. 3.

For this HW, we continue to use the Polish companies bankruptcy data set from the UCI Machine Learning Repository. Download the dataset and put the 4th year file (4year.arff) in your YOUR_GITHUB_ID/PHBS_MLF_2019/HW2/

I did a basic processing of the data (loading to dataframe, creating the bankruptcy column, changing column names, filling in NA values, training-vs-test split, standardization, etc.). See my github.

Preparation

Load, read and clean
from scipy.io import arff
import pandas as pd
import numpy as np

data = arff.loadarff('./data/4year.arff')
df = pd.DataFrame(data[0])
df['bankruptcy'] = (df['class'] == b'1')
del df['class']
df.columns = ['X{0:02d}'.format(k) for k in range(1, 65)] + ['bankruptcy']
df.describe()

sum(df.bankruptcy == True)

from sklearn.impute import SimpleImputer

imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
X_imp = imp_mean.fit_transform(df.values)
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
*A DLL load error occurred here. Solution recorded in [my blog](https://quoth.win/671.html)*
from sklearn.model_selection import train_test_split

X, y = X_imp[:, :-1], X_imp[:, -1]
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

from sklearn.preprocessing import StandardScaler

stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
1. Find the 2 most important features

Select the 2 most important features using LogisticRegression with L1 penalty. **(Adjust C until you see 2 features)**
from sklearn.linear_model import LogisticRegression

C = [1, .1, .01, 0.001]
cdf = pd.DataFrame()

for c in C:
    lr = LogisticRegression(penalty='l1', C=c, solver='liblinear', random_state=0)
    lr.fit(X_train_std, y_train)
    print(f'[C={c}] with {lr.coef_[lr.coef_ != 0].shape[0]} features: \n {lr.coef_[lr.coef_ != 0]} \n')  # Python >= 3.7
    if lr.coef_[lr.coef_ != 0].shape[0] == 2:
        cdf = pd.DataFrame(lr.coef_.T, df.columns[:-1], columns=['coef'])

# refit with C=0.01, the value that left exactly 2 nonzero coefficients
lr = LogisticRegression(penalty='l1', C=0.01, solver='liblinear', random_state=0)
lr.fit(X_train_std, y_train)

cdf = cdf[cdf.coef != 0]
cdf
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Redefine X_train_std and X_test_std, keeping only the two selected features
X_train_std = X_train_std[:, lr.coef_[0] != 0]
X_test_std = X_test_std[:, lr.coef_[0] != 0]

from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt

plt.style.use('ggplot')
plt.scatter(x=X_train_std[:, 0], y=X_train_std[:, 1], c=y_train, cmap='Set1')
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
2. Apply LR / SVM / Decision Tree below

Using the 2 selected features, apply LR / SVM / decision tree. **Try your own hyperparameters (C, gamma, tree depth, etc.)** to maximize the prediction accuracy. (Just try several values. You don't need to show your answer is the maximum.)

LR
CLr = np.arange(0.000000000000001, 0.0225, 0.0001)
acrcLr = []  # accuracy

for c in CLr:
    lr = LogisticRegression(C=c, penalty='l1', solver='liblinear')
    lr.fit(X_train_std, y_train)
    acrcLr.append([lr.score(X_train_std, y_train), lr.score(X_test_std, y_test), c])

acrcLr = np.array(acrcLr)

plt.plot(acrcLr[:, 2], acrcLr[:, 0])
plt.plot(acrcLr[:, 2], acrcLr[:, 1])
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('Logistic Regression')
plt.show()
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Choose `c=.01`
c = .01
lr = LogisticRegression(C=c, penalty='l1', solver='liblinear')
lr.fit(X_train_std, y_train)
print(f'Accuracy when [c={c}] \nTrain {lr.score(X_train_std, y_train)}\nTest {lr.score(X_test_std, y_test)}')
Accuracy when [c=0.01] Train 0.9474759264662971 Test 0.9469026548672567
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
SVM
from sklearn.svm import SVC

G = np.arange(0.00001, 0.3, 0.005)
acrcSvm = []

for g in G:
    svm = SVC(kernel='rbf', gamma=g, C=1.0, random_state=0)
    svm.fit(X_train_std, y_train)
    acrcSvm.append([svm.score(X_train_std, y_train), svm.score(X_test_std, y_test), g])

acrcSvm = np.array(acrcSvm)

plt.plot(acrcSvm[:, 2], acrcSvm[:, 0])
plt.plot(acrcSvm[:, 2], acrcSvm[:, 1])
plt.xlabel('gamma')
plt.ylabel('Accuracy')
plt.title('SVM')
plt.show()
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Choose `gamma = 0.2`
g = 0.2
svm = SVC(kernel='rbf', gamma=g, C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
print(f'Accuracy when [gamma={g}] \nTrain {svm.score(X_train_std, y_train)}\nTest {svm.score(X_test_std, y_test)}')
Accuracy when [gamma=0.2] Train 0.9482054274875985 Test 0.9472430224642614
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Decision Tree
from sklearn.tree import DecisionTreeClassifier

depthTree = range(1, 6)
acrcTree = []

for depth in depthTree:
    tree = DecisionTreeClassifier(criterion='gini', max_depth=depth, random_state=0)
    tree.fit(X_train_std, y_train)
    acrcTree.append([tree.score(X_train_std, y_train), tree.score(X_test_std, y_test), depth])

acrcTree = np.array(acrcTree)

plt.plot(acrcTree[:, 2], acrcTree[:, 0])
plt.plot(acrcTree[:, 2], acrcTree[:, 1])
plt.xlabel('max_depth')
plt.ylabel('Accuracy')
plt.title('Decision Tree')
plt.show()
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Choose `max_depth=2`:
depth = 2
tree = DecisionTreeClassifier(criterion='gini', max_depth=depth, random_state=0)
tree.fit(X_train_std, y_train)
print(f'Accuracy when [max_depth={depth}] \nTrain {tree.score(X_train_std, y_train)}\nTest {tree.score(X_test_std, y_test)}')
Accuracy when [max_depth=2] Train 0.9474759264662971 Test 0.9472430224642614
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
3. Visualize the classification

Visualize your classifiers using the plot_decision_regions function from PML Ch. 3
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):

    # setup marker generator and color map
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])

    # plot the decision surface
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())

    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0],
                    y=X[y == cl, 1],
                    alpha=0.8,
                    c=colors[idx],
                    marker=markers[idx],
                    label=cl,
                    edgecolor='black')

    # highlight test samples
    if test_idx:
        # plot all samples
        X_test, y_test = X[test_idx, :], y[test_idx]
        plt.scatter(X_test[:, 0],
                    X_test[:, 1],
                    c='',
                    edgecolor='black',
                    alpha=1.0,
                    linewidth=1,
                    marker='o',
                    s=100,
                    label='test set')


X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
LR

`test_idx` removed on purpose
plot_decision_regions(X=X_combined_std, y=y_combined, classifier=lr)
plt.xlabel(cdf.index[0])
plt.ylabel(cdf.index[1])
plt.legend(loc='lower left')
plt.tight_layout()
#plt.savefig('images/03_01.png', dpi=300)
plt.show()
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Decision Tree
plot_decision_regions(X=X_combined_std, y=y_combined, classifier=tree)
plt.xlabel(cdf.index[0])
plt.ylabel(cdf.index[1])
plt.legend(loc='lower left')
plt.tight_layout()
#plt.savefig('images/03_01.png', dpi=300)
plt.show()
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
SVM (samples)
# Visualizing all samples with an SVM model is too slow
# because the fit and predict complexity is very high
# (source: https://scikit-learn.org/stable/modules/svm.html#complexity),
# so use random samples (n=3000) instead
samples = np.random.randint(0, len(X_combined_std), size=3000)

plot_decision_regions(X=X_combined_std[samples], y=y_combined[samples], classifier=svm)
plt.xlabel(cdf.index[0] + '[samples]')
plt.ylabel(cdf.index[1] + '[samples]')
plt.legend(loc='lower left')
plt.tight_layout()
#plt.savefig('images/03_01.png', dpi=300)
plt.show()
_____no_output_____
MIT
HW2/Classifiers.ipynb
oyrx/PHBS_MLF_2019
Topics

This notebook covers the following topics -

1. Basic Concepts
    1. [Basic Syntax](#basic-syntax)
    2. [Lists](#lists)
    3. [String Manipulation](#string)
    4. [Decision making (If statement)](#if)
    5. Loops
        1. [For loop](#for)
        2. [While loop](#while)
    6. [Function](#func)
    7. [Scope](#scope)
    8. Miscellaneous
        1. [Dictionary](#dict)
        2. [Tuples](#tuple)
        3. [List Comprehension](#lc)
        4. [Error Handling](#eh)
        5. [Lambda Expressions](#le)
        6. [Mapping Function](#mf)
        7. [User Input](#ui)
2. Advanced Concepts
    1. [Numpy](#numpy)
    2. [Pandas](#pandas)
    3. [Matplotlib (Plotting)](#plot)
    4. [pdb (Debugging)](#pdb)
    5. [Other Useful Libraries](#oul)

Basic Topics

Basic Syntax

Hello World!
#A basic print statement to display a given message
print("Hello World!")
Hello World!
MIT
.ipynb_checkpoints/Python Crash Course-checkpoint.ipynb
rafia37/DSA5113-TA-class-repo
Basic Operations
#Addition
2 + 10

#Subtraction
2 - 10

#Multiplication
2*10

#Division
3/2

#Integer division
3//2

#Raising to a power
10**3

#Scientific notation - 10e3 means 10*10**3 = 10000.0, not the same as 10**3
10e3
_____no_output_____
MIT
.ipynb_checkpoints/Python Crash Course-checkpoint.ipynb
rafia37/DSA5113-TA-class-repo
Defining Variables

You can define variables as `variable_name = value`.

- Variable names can be alphanumeric, though they can't start with a number.
- Variable names are case sensitive.
- The values that you assign to a variable will typically be of these 5 standard data types (in Python, you can assign almost anything to a variable and not have to declare what type of variable it is):
    - Numbers (floats, integers, complex etc.)
    - Strings*
    - List*
    - Tuple*
    - Dictionary*

*Discussed in a later section. Will only show how to define them in this section.
#Numbers
my_num = 5113    #Example of defining an integer
my_float = 3.0   #Example of defining a float

#Strings
truth = "This crash course is just the tip of the iceberg o_O"

#Lists
same_type_list = [1,2,3,4,5]   #A simple list of same type of objects - integers
mixed_list = [1, 2, "three", my_num, same_type_list]   #A list containing many types of objects - integer, string, variable, another list

#Dictionary
simple_dict = {"red": 1, "blue": 2, "green": 3}   #Similar to a list but enclosed in curly braces {} and consists of key-value pairs

#Tuple
aTuple = (1,2,3)   #Similar to a list but enclosed in parentheses ()
_____no_output_____
MIT
.ipynb_checkpoints/Python Crash Course-checkpoint.ipynb
rafia37/DSA5113-TA-class-repo
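As a small illustration of the points above (case sensitivity, and the fact that you never declare a type), `type()` can inspect what a variable currently holds; `My_Num` is a new, hypothetical variable introduced just for this demo:

```python
my_num = 5113   #as defined earlier in the notebook
My_Num = 10     #a different variable - names are case sensitive

print(my_num == My_Num)   #False - these are two separate variables

#type() reports which standard data type a value holds
print(type(my_num))       #<class 'int'>
print(type(3.0))          #<class 'float'>
print(type([1, 2, 3]))    #<class 'list'>
print(type((1, 2, 3)))    #<class 'tuple'>
print(type({"red": 1}))   #<class 'dict'>
```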
More print statementsNow we're going to print the variables we defined in the previous cell and look at some more ways to use the print statement
#printing a variable
print(my_float)

#printing the truth!
print(truth)

print(simple_dict)

print(mixed_list)   #Notice how the 4th & 5th objects got the values of the variables we defined earlier

#Dynamic printing
print("This is DSA {}".format(my_num))   #The value/variable given inside format replaces the curly braces in the string

#When the dynamically set part is a number, we can set the precision
print("Value of pi up to 4 decimal places = {:.4f}".format(3.141592653589793238))
Value of pi up to 4 decimal places = 3.1416
MIT
.ipynb_checkpoints/Python Crash Course-checkpoint.ipynb
rafia37/DSA5113-TA-class-repo
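The `.format()` calls in the previous cell can also be written as f-strings (Python 3.6+); the variables here mirror the ones defined earlier in the notebook:

```python
my_num = 5113   #as defined earlier in the notebook
pi = 3.141592653589793238

#Dynamic printing with an f-string
print(f"This is DSA {my_num}")

#Precision works the same way inside an f-string
print(f"Value of pi up to 4 decimal places = {pi:.4f}")
```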