Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes)
Table Styles The next option you have is "table styles". These are styles that apply to the table as a whole but don't look at the data. Certain stylings, including pseudo-selectors like :hover, can only be used this way.
from IPython.display import HTML def hover(hover_color="#ffff99"): return dict(selector="tr:hover", props=[("background-color", "%s" % hover_color)]) styles = [ hover(), dict(selector="th", props=[("font-size", "150%"), ("text-align", "center")]), dict(se...
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
table_styles should be a list of dictionaries. Each dictionary should have the selector and props keys. The value for selector should be a valid CSS selector. Recall that all the styles are already attached to an id, unique to each Styler. This selector is in addition to that id. The value for props should be a list of...
from IPython.html import widgets @widgets.interact def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)): return df.style.background_gradient( cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l, as_cmap=True) ) def magnify()...
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
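The list-of-dicts shape that table_styles expects can be sketched in plain Python (no pandas required); the hover helper mirrors the earlier cell:

```python
# A minimal sketch of the table_styles structure: each dict has a CSS
# "selector" string and a "props" list of (attribute, value) pairs.
def hover(hover_color="#ffff99"):
    return dict(selector="tr:hover",
                props=[("background-color", hover_color)])

styles = [
    hover(),
    dict(selector="th", props=[("font-size", "150%"),
                               ("text-align", "center")]),
]
```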
Export to Excel New in version 0.20.0 <span style="color: red">Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span> Some support is available for exporting styled DataFrames to Excel wor...
df.style.\ applymap(color_negative_red).\ apply(highlight_max).\ to_excel('styled.xlsx', engine='openpyxl')
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
A screenshot of the output: Extensibility The core of pandas is, and will remain, its "high-performance, easy-to-use data structures". With that in mind, we hope that DataFrame.style accomplishes two goals: provide an API that is pleasing to use interactively and is "good enough" for many tasks, and provide the foundations...
from jinja2 import Environment, ChoiceLoader, FileSystemLoader from IPython.display import HTML from pandas.io.formats.style import Styler %mkdir templates
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
This next cell writes the custom template. We extend the template html.tpl, which comes with pandas.
%%file templates/myhtml.tpl {% extends "html.tpl" %} {% block table %} <h1>{{ table_title|default("My Table") }}</h1> {{ super() }} {% endblock table %}
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Now that we've created a template, we need to set up a subclass of Styler that knows about it.
class MyStyler(Styler): env = Environment( loader=ChoiceLoader([ FileSystemLoader("templates"), # contains ours Styler.loader, # the default ]) ) template = env.get_template("myhtml.tpl")
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Notice that we include the original loader in our environment's loader. That's because we extend the original template, so the Jinja environment needs to be able to find it. Now we can use that custom styler. Its __init__ takes a DataFrame.
MyStyler(df)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Our custom template accepts a table_title keyword. We can provide the value in the .render method.
HTML(MyStyler(df).render(table_title="Extending Example"))
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl") EasyStyler(df)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Here's the template structure:
with open("template_structure.html") as f: structure = f.read() HTML(structure)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
See the template in the GitHub repo for more details.
# Hack to get the same style in the notebook as the # main site. This is hidden in the docs. from IPython.display import HTML with open("themes/nature_with_gtoc/static/nature.css_t") as f: css = f.read() HTML('<style>{}</style>'.format(css))
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Initialize H2O First, we'll start our H2O cluster...
with warnings.catch_warnings(): warnings.simplefilter('ignore') # I started this cluster up via CLI with: # $ java -Xmx2g -jar /anaconda/h2o_jar/h2o.jar h2o.init(ip='10.7.187.84', port=54321, start_h2o=False)
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Load data We'll load sklearn's breast cancer data. Using skutil's from_pandas method, we can upload a Pandas frame to the H2O cloud
from sklearn.datasets import load_breast_cancer from skutil.h2o.util import from_pandas # import data, load into pandas bc = load_breast_cancer() X = pd.DataFrame.from_records(data=bc.data, columns=bc.feature_names) X['target'] = bc.target # push to h2o cloud X = from_pandas(X) print(X.shape) X.head() # Here are our...
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
train/test split Sklearn provides a great mechanism for splitting data into a train and validation set. Skutil provides the same mechanism for H2O frames. This cell does the following: makes the response variable an enum (factor), then creates two splits: X_train (75%) and X_val (25%).
from skutil.h2o import h2o_train_test_split # first, let's make sure our target is a factor X[y] = X[y].asfactor() # we'll use 75% of the data for training, 25% X_train, X_val = h2o_train_test_split(X, train_size=0.75, random_state=42) # make sure we did it right... # assert X.shape[0] == (X_train.shape[0] + X_val.s...
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
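As a rough illustration of the 75/25 split, here is a toy pure-Python analogue (not the skutil implementation; the function name is illustrative):

```python
import random

def toy_train_test_split(n_rows, train_size=0.75, random_state=42):
    """Toy analogue of h2o_train_test_split: shuffle row indices and
    cut them into train/validation lists."""
    rng = random.Random(random_state)
    idx = list(range(n_rows))
    rng.shuffle(idx)
    cut = int(n_rows * train_size)
    return idx[:cut], idx[cut:]

train_idx, val_idx = toy_train_test_split(100)
```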
preprocessing with skutil.h2o Skutil provides an h2o module which delivers some skutil feature_selection classes that can operate on an H2OFrame. Each BaseH2OTransformer has the following __init__ signature: BaseH2OTransformer(self, feature_names=None, target_feature=None) The selector will only operate on the feature...
from skutil.h2o import H2ONearZeroVarianceFilterer # Let's determine whether we're at risk for any near-zero variance nzv = H2ONearZeroVarianceFilterer(feature_names=x, target_feature=y, threshold=1e-4) nzv.fit(X_train) # let's see if anything was dropped... nzv.drop_ nzv.var_
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Multicollinearity Multicollinearity (MC) can be detrimental to the fit of parametric models (for our example, we're going to use a tree-based model, which is non-parametric, but the demo is still useful), and can cause confounding results in some models' variable importances. With skutil, we can filter out features tha...
from skutil.h2o import h2o_corr_plot # note that we want to exclude the target!! h2o_corr_plot(X_train[x], xticklabels=x, yticklabels=x) from skutil.h2o import H2OMulticollinearityFilterer # Are we at risk of any multicollinearity? mcf = H2OMulticollinearityFilterer(feature_names=x, target_feature=y, threshold=0.90)...
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
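The filtering idea can be sketched with plain Python (a toy stand-in for H2OMulticollinearityFilterer; the names and data here are illustrative):

```python
def high_corr_pairs(corr, names, threshold=0.90):
    """Toy multicollinearity check: flag column pairs whose absolute
    correlation exceeds the threshold (corr is a square nested list)."""
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i][j]) > threshold:
                pairs.append((names[i], names[j]))
    return pairs

corr = [[1.00, 0.95, 0.10],
        [0.95, 1.00, 0.20],
        [0.10, 0.20, 1.00]]
print(high_corr_pairs(corr, ["a", "b", "c"]))  # [('a', 'b')]
```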
Dropping features As you'll see in the next section (Pipelines), where certain preprocessing steps take place matters. If there is a subset of features on which you don't want to model or process, you can drop them out. Sometimes this is more effective than creating a list of potentially thousands of feature names to ...
from skutil.h2o import H2OFeatureDropper # maybe I don't like 'mean fractal dimension' dropper = H2OFeatureDropper(feature_names=['mean fractal dimension'], target_feature=y) transformed = dropper.fit_transform(X_train) # we can ensure it's not there assert not 'mean fractal dimension' in transformed.columns
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
skutil.h2o modeling Skutil's h2o module allows us to form the Pipeline objects we're familiar with from sklearn. This permits us to string a series of preprocessors together, with an optional H2OEstimator as the last step. Like sklearn Pipelines, the first argument is a single list of length-two tuples (where the fir...
from skutil.h2o import H2OPipeline from h2o.estimators import H2ORandomForestEstimator from skutil.h2o.metrics import h2o_accuracy_score # same as sklearn's, but with H2OFrames # let's fit a pipeline with our estimator... pipe = H2OPipeline([ ('nzv', H2ONearZeroVarianceFilterer(threshold=1e-1)), ('mcf'...
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Which features were retained? We can see which features were modeled on with the training_cols_ attribute of the fitted pipe.
pipe.training_cols_
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Hyperparameter optimization With relatively little effort, we got > 93% accuracy on our validation set! Can we improve that? We can use sklearn-esque grid searches, which also allow us to search over preprocessor objects to optimize a set of hyperparameters.
from skutil.h2o import H2ORandomizedSearchCV from skutil.h2o import H2OKFold from scipy.stats import uniform, randint # define our random state rand_state = 2016 # we have the option to choose the model that maximizes CV scores, # or the model that minimizes std deviations between CV scores. # let's choose the former...
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
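The core loop of a randomized search can be sketched generically (a toy version, not the H2ORandomizedSearchCV API; the samplers and scorer are illustrative):

```python
import random

def randomized_search(param_dists, score_fn, n_iter=10, random_state=2016):
    """Toy randomized search: draw n_iter candidate settings from the
    given samplers and keep the highest-scoring one."""
    rng = random.Random(random_state)
    best_score, best_params = None, None
    for _ in range(n_iter):
        params = {name: draw(rng) for name, draw in param_dists.items()}
        score = score_fn(params)
        if best_score is None or score > best_score:
            best_score, best_params = score, params
    return best_score, best_params
```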
Model evaluation Beyond merely observing our validation set score, we can dig into the cross validation scores of each model in our H2O grid search, and select the model that has not only the best mean score, but the model that minimizes variability in the CV scores.
from skutil.utils import report_grid_score_detail # now let's look deeper... sort_by = 'std' if minimize == 'variance' else 'score' report_grid_score_detail(search, charts=True, sort_results=True, ascending=minimize=='variance', sort_by=sort_by)
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
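The two selection rules, best mean score versus smallest variability across folds, can be sketched as (toy code; models are (name, fold_scores) pairs):

```python
import statistics

def pick_model(models, minimize="variance"):
    """Toy selection rule: either minimize the spread of CV fold scores
    or maximize their mean."""
    if minimize == "variance":
        return min(models, key=lambda m: statistics.pstdev(m[1]))
    return max(models, key=lambda m: statistics.fmean(m[1]))

models = [("a", [0.90, 0.60]), ("b", [0.70, 0.71])]
```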
Variable importance We can easily extract the best model's variable importances like so:
search.varimp()
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Model evaluation&mdash;introduce the validation set So our best estimator achieves a mean cross validation accuracy of 93%! We can predict on our best estimator as follows:
val_preds = search.predict(X_val) # print accuracy print('Validation accuracy: %.5f' % h2o_accuracy_score(actual, val_preds['predict'])) val_preds.head()
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Model selection (Not shown: other models we built and evaluated against the validation set (once!)&mdash;we only introduce the holdout set at the very end) In a real situation, you probably will have a holdout set, and will have built several models. After you have a collection of models and you'd like to select one, y...
import os # get absolute path cwd = os.getcwd() model_path = os.path.join(cwd, 'grid.pkl') # save -- it's that easy!!! search.save(location=model_path, warn_if_exists=False)
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Loading and making predictions
search = H2ORandomizedSearchCV.load(model_path) new_predictions = search.predict(X_val) new_predictions.head()
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Cleanup Always make sure to shut down your cluster...
h2o.shutdown(prompt=False) # shutdown cluster os.unlink(model_path) # remove the pickle file...
doc/examples/h2o/h2o_example.ipynb
tgsmith61591/skutil
bsd-3-clause
Load data
#seed_data = pd.read_csv('20160128_AD_Decrease_Meta_Christian.csv') template_036= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale036.nii.gz') template_020= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambrid...
metaad/network_level_meta.ipynb
SIMEXP/Projects
mit
Get the number of coordinates reported for each network
from numpy.linalg import norm # find the closest network to the coordo def get_nearest_net(template,world_coor): list_coord = np.array(np.where(template.get_data()>0)) mni_coord = apply_affine(template.get_affine(),list_coord.T) distances = norm(mni_coord-np.array(world_coor),axis=1) #print distances.sh...
metaad/network_level_meta.ipynb
SIMEXP/Projects
mit
Generate random coordinates The assigned coordinates are generated for each network with a probability proportional to its volume relative to the total volume of the brain.
''' from numpy.random import permutation def permute_table(frequency_votes,n_iter): h0_results = [] for n in range(n_iter): perm_freq = frequency_votes.copy() #print perm_freq for i in range(perm_freq.shape[0]): perm_freq[i,:] = permutation(perm_freq[i,:]) #print perm...
metaad/network_level_meta.ipynb
SIMEXP/Projects
mit
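The volume-weighted draw behind this null model can be sketched as (toy code; the volumes are illustrative):

```python
import random

def sample_network(volumes, rng):
    """Toy null model: draw a network index with probability
    proportional to its share of the total volume."""
    r = rng.random() * sum(volumes)
    acc = 0.0
    for i, vol in enumerate(volumes):
        acc += vol
        if r < acc:
            return i
    return len(volumes) - 1

rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(3000):
    counts[sample_network([3, 1, 0], rng)] += 1
```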
Generate the p-values for each network
def getpval_old(nhit,dist_data): distribution_val = np.histogram(dist_data,bins=np.arange(0,1,0.01)) idx_bin = np.where((distribution_val[1]>=round(nhit,2)) & (distribution_val[1]<=round(nhit,2)))[0][0] #print distribution_val[1] return (np.sum(distribution_val[0][idx_bin:-1])+1)/(dist_data.shape[0]+1....
metaad/network_level_meta.ipynb
SIMEXP/Projects
mit
Map the p-values to the template
from proteus.matrix import tseries as ts hitfreq_vol = ts.vec2map(network_votes,template) pval_vol = ts.vec2map(1-np.array(pval_results),template) plt.figure() plotting.plot_stat_map(hitfreq_vol,cut_coords=(0,0,0),draw_cross=False) plt.figure() plotting.plot_stat_map(pval_vol,cut_coords=(0,0,0),draw_cross=False)
metaad/network_level_meta.ipynb
SIMEXP/Projects
mit
FDR correction of the p-values
# correct for FDR from statsmodels.sandbox.stats.multicomp import fdrcorrection0 fdr_test,fdr_pval=fdrcorrection0(pval_results,alpha=0.05) print(network_votes) print(fdr_test) print(fdr_pval) # save the results path_output = '/home/cdansereau/git/Projects/metaad/maps_results/' stats_results = {'Hits':network_votes ,'pv...
metaad/network_level_meta.ipynb
SIMEXP/Projects
mit
Model specification The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added...
# exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend') exog = endog['dln_consump'] mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog) res = mod.fit(maxiter=1000) print(res.summary())
examples/notebooks/statespace_varmax.ipynb
YihaoLu/statsmodels
bsd-3-clause
Example 2: VMA A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal') res = mod.fit(maxiter=1000) print(res.summary())
examples/notebooks/statespace_varmax.ipynb
YihaoLu/statsmodels
bsd-3-clause
Caution: VARMA(p,q) specifications Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued w...
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1)) res = mod.fit(maxiter=1000) print(res.summary())
examples/notebooks/statespace_varmax.ipynb
YihaoLu/statsmodels
bsd-3-clause
Time-frequency beamforming using DICS Compute DICS source power in a grid of time-frequency windows and display results. The original reference is: Dalal et al. Five-dimensional neuroimaging: Localization of the time-frequency dynamics of cortical activity. NeuroImage (2008) vol. 40 (4) pp. 1686-1700
# Author: Roman Goj <roman.goj@gmail.com> # # License: BSD (3-clause) import mne from mne.event import make_fixed_length_events from mne.datasets import sample from mne.time_frequency import csd_epochs from mne.beamformer import tf_dics from mne.viz import plot_source_spectrogram print(__doc__) data_path = sample.da...
0.13/_downloads/plot_tf_dics.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel # Pick a selection of magnetometer channels. A subset of all channels was used # to speed up the example. For a solution based on all MEG channels use # meg=True, selection=None and add mag=4e-12 to the reject dicti...
0.13/_downloads/plot_tf_dics.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Time-frequency beamforming based on DICS
# Setting frequency bins as in Dalal et al. 2008 freq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz win_lengths = [0.3, 0.2, 0.15, 0.1] # s # Then set FFTs length for each frequency range. # Should be a power of 2 to be faster. n_ffts = [256, 128, 128, 128] # Subtract evoked response prior to computation? sub...
0.13/_downloads/plot_tf_dics.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
First we import some datasets of interest
#the seed information df_seeds = pd.read_csv('../input/WNCAATourneySeeds_SampleTourney2018.csv') #tour information df_tour = pd.read_csv('../input/WRegularSeasonCompactResults_PrelimData2018.csv')
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Now we separate the winners from the losers and organize our dataset
df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) ) df_winseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'WTeamID', 'seed_int':'WSeed'}) df_lossseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'LTeamID', 'seed_int':'LSeed'}) df_d...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Now we match the detailed results to the merge dataset above
df_concat['DiffSeed'] = df_concat[['LSeed', 'WSeed']].apply(lambda x : 0 if x[0] == x[1] else 1, axis = 1)
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Here we get our submission info
#prepares sample submission df_sample_sub = pd.read_csv('../input/WSampleSubmissionStage2.csv') df_sample_sub['Season'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[0]) ) df_sample_sub['TeamID1'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[1]) ) df_sample_sub['TeamID2'] = df_sample_sub['ID'].app...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Training Data Creation
winners = df_concat.rename( columns = { 'WTeamID' : 'TeamID1', 'LTeamID' : 'TeamID2', 'WScore' : 'Team1_Score', 'LScore' : 'Team2_Score'}).drop(['WSeed', 'L...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We will only consider years relevant to our test submission
df_sample_sub['Season'].unique()
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Now let's look at just TeamID2, i.e. the second team's info.
train_test_inner = pd.merge( train.loc[ train['Season'].isin([2018]), : ].reset_index(drop = True), df_sample_sub.drop(['ID', 'Pred'], axis = 1), on = ['Season', 'TeamID1', 'TeamID2'], how = 'inner' ) train_test_inner.head()
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.
team1d_num_ot = train_test_inner.groupby(['Season', 'TeamID1'])['NumOT'].median().reset_index()\ .set_index('Season').rename(columns = {'NumOT' : 'NumOT1'}) team2d_num_ot = train_test_inner.groupby(['Season', 'TeamID2'])['NumOT'].median().reset_index()\ .set_index('Season').rename(columns = {'NumOT' : 'NumOT2'}) num_o...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
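The groupby-median step can be sketched without pandas (toy rows of (team_id, NumOT); the helper name is illustrative):

```python
import statistics

def per_team_median(rows):
    """Toy version of the groupby-median: collapse (team_id, NumOT)
    rows into one median per team."""
    by_team = {}
    for team_id, num_ot in rows:
        by_team.setdefault(team_id, []).append(num_ot)
    return {team: statistics.median(vals) for team, vals in by_team.items()}
```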
Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
team1d_score_spread = train_test_inner.groupby(['Season', 'TeamID1'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\ .set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio1', 'Score_Pct' : 'Score_Pct1'}) team2d_score_spread = train_test_inner.groupby(['Season', 'TeamID2'])[['Score_Ratio', 'Score_Pct...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
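The mirroring rule for the TeamID2 column can be written explicitly (an illustrative helper, not from the notebook):

```python
def mirror_team2_stats(score_ratio, score_pct):
    """Illustrative helper: for the TeamID2 column, use the inverse of
    the score ratio and the complement of the score percentage."""
    return 1.0 / score_ratio, 1.0 - score_pct
```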
Now let's create a model based solely on the inner group and predict those probabilities. We will get the teams with the missing result.
X_train = train_test_inner.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']] train_labels = train_test_inner['Result'] train_test_outer = pd.merge( train.loc[ train['Season'].isin([2014, 2015, 2016, 2017]), : ].reset_index(drop = True), df_sample_sub.drop(['ID', 'Pred'], axis = 1), on = ['Sea...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We scale our data for our keras classifier, and make sure our categorical variables are properly processed.
X_test = train_test_missing.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']] n = X_train.shape[0] train_test_merge = pd.concat( [X_train, X_test], axis = 0 ).reset_index(drop = True) train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['Season'].astype(object) ), train_test_merge.drop(...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
Here we store our probabilities
train_test_inner['Pred1'] = model.predict_proba(X_train)[:,1] train_test_missing['Pred1'] = model.predict_proba(X_test)[:,1]
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We merge our predictions
sub = pd.merge(df_sample_sub, pd.concat( [train_test_missing.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']], train_test_inner.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']] ], axis = 0).reset_index(drop = True), ...
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
We get the 'average' probability of success for each team
team1_probs = sub.groupby('TeamID1')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict() team2_probs = sub.groupby('TeamID2')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
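The `(x ** -1.0).mean() ** -1.0` expression above is a harmonic mean; in plain Python:

```python
def harmonic_mean(probs):
    """The 'average' used above: the harmonic mean of the per-matchup
    probabilities, i.e. (x ** -1.0).mean() ** -1.0."""
    return len(probs) / sum(1.0 / p for p in probs)

print(harmonic_mean([0.5, 0.5]))  # 0.5
```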
Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.
sub['Pred'] = sub[['TeamID1', 'TeamID2','Pred1']]\ .apply(lambda x : team1_probs.get(x[0]) * ( 1 - team2_probs.get(x[1]) ) if np.isnan(x[2]) else x[2], axis = 1) sub = sub.drop_duplicates(subset=["ID"], keep='first') sub[['ID', 'Pred']].to_csv('sub.csv', index = False) sub[['ID', 'Pred']].head(20)
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
minesh1291/Practicing-Kaggle
gpl-3.0
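The independence-based imputation can be written as a small helper (illustrative name, not from the notebook):

```python
def impute_pred(p_team1, p_team2):
    """Illustrative helper: assuming the two teams' average win
    probabilities are independent, impute p_team1 * (1 - p_team2)."""
    return p_team1 * (1.0 - p_team2)
```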
Compute MxNE with time-frequency sparse prior The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA) that promotes focal (sparse) sources (such as dipole fitting techniques). The benefit of this approach is that: it is spatio-temporal without assuming stationarity (sources properties can vary ov...
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import mne from mne.datasets import sample from mne.minimum_norm import make_inverse_operator, apply_inverse from mne.inverse_sparse import tf_mixed_norm from mne.viz import plot_sparse_source_estimates print(__doc__) ...
0.14/_downloads/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Run solver
# alpha_space regularization parameter is between 0 and 100 (100 is high) alpha_space = 50. # spatial regularization parameter # alpha_time parameter promotes temporal smoothness # (0 means no temporal regularization) alpha_time = 1. # temporal regularization parameter loose, depth = 0.2, 0.9 # loose orientation & ...
0.14/_downloads/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Basic end-to-end example
import sentencepiece as spm # train sentencepiece model from `botchan.txt` and makes `m.model` and `m.vocab` # `m.vocab` is just a reference. not used in the segmentation. spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000') # makes segmenter instance and loads the model file (m.mo...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Loads model from byte stream SentencePiece's model file is just a serialized protocol buffer. We can instantiate a SentencePiece processor from a bytes object with the load_from_serialized_proto method.
import tensorflow as tf # Assumes that m.model is stored in non-Posix file system. serialized_model_proto = tf.gfile.GFile('m.model', 'rb').read() sp = spm.SentencePieceProcessor() sp.load_from_serialized_proto(serialized_model_proto) print(sp.encode_as_pieces('this is a test'))
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
User defined and control symbols We can define special tokens (symbols) to tweak the DNN behavior through the tokens. Typical examples are BERT's special symbols, e.g., [SEP] and [CLS]. There are two types of special tokens: user defined symbols: Always treated as one token in any context. These symbols can appear...
## Example of user defined symbols spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_user --user_defined_symbols=<sep>,<cls> --vocab_size=2000') sp_user = spm.SentencePieceProcessor() sp_user.load('m_user.model') # ids are reserved in both mode. # <unk>=0, <s>=1, </s>=2, <sep>=3, <cls>=4 # user def...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
BOS/EOS (&lt;s&gt;, &lt;/s&gt;) are defined as control symbols, but we can define them as user defined symbols.
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_bos_as_user --user_defined_symbols=<s>,</s> --vocab_size=2000') sp = spm.SentencePieceProcessor() sp.load('m.model') print(sp.encode_as_pieces('<s> hello</s>')) # <s>,</s> are segmented. (default behavior) sp = spm.SentencePieceProcessor() sp.load...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Manipulating BOS/EOS/UNK/PAD symbols BOS, EOS, UNK, and PAD ids can be obtained with bos_id(), eos_id(), unk_id(), and pad_id() methods. We can explicitly insert these ids as follows.
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000') sp = spm.SentencePieceProcessor() sp.load('m.model') print('bos=', sp.bos_id()) print('eos=', sp.eos_id()) print('unk=', sp.unk_id()) print('pad=', sp.pad_id()) # disabled by default print(sp.encode_as_ids('Hello world')) # P...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Changing the vocab id and surface representation of UNK/BOS/EOS/PAD symbols By default, UNK/BOS/EOS/PAD tokens and their ids are defined as follows:

|token|UNK|BOS|EOS|PAD|
|---|---|---|---|---|
|surface|&lt;unk&gt;|&lt;s&gt;|&lt;/s&gt;|&lt;pad&gt;|
|id|0|1|2|undefined (-1)|

We can change these mappings with --{unk|bos|eos|pad}_id a...
spm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m --pad_id=0 --unk_id=1 --bos_id=2 --eos_id=3 --pad_piece=[PAD] --unk_piece=[UNK] --bos_piece=[BOS] --eos_piece=[EOS]') sp = spm.SentencePieceProcessor() sp.load('m.model') for id in range(4): print(sp.id_to_piece(id), sp.is_cont...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
When -1 is set, this special symbol is disabled. UNK must not be undefined.
# Disable BOS/EOS spm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m --bos_id=-1 --eos_id=-1') sp = spm.SentencePieceProcessor() sp.load('m.model') # <s>, </s> are UNK. print(sp.unk_id()) print(sp.piece_to_id('<s>')) print(sp.piece_to_id('</s>'))
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
UNK id is decoded into U+2047 (⁇) by default. We can change UNK surface with --unk_surface=&lt;STR&gt; flag.
spm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m') sp = spm.SentencePieceProcessor() sp.load('m.model') print(sp.decode_ids([sp.unk_id()])) # default is U+2047 spm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m --unk_surface=__UNKNOWN__') sp =...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Sampling and nbest segmentation for subword regularization When --model_type=unigram (default) is used, we can perform sampling and n-best segmentation for data augmentation. See subword regularization paper [kudo18] for more detail.
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000') # Can obtain different segmentations per request. # There are two hyperparameters for sampling (nbest_size and inverse temperature); see the paper [kudo18] for detail. for n in range(10): print(sp.sample_encode_as_pieces('hello...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
BPE (Byte pair encoding) model Sentencepiece supports BPE (byte-pair-encoding) for subword segmentation with --model_type=bpe flag. We do not find empirical differences in translation quality between BPE and unigram model, but unigram model can perform sampling and n-best segmentation. See subword regularization pape...
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_bpe --vocab_size=2000 --model_type=bpe') sp_bpe = spm.SentencePieceProcessor() sp_bpe.load('m_bpe.model') print('*** BPE ***') print(sp_bpe.encode_as_pieces('thisisatesthelloworld')) print(sp_bpe.nbest_encode_as_pieces('hello world', 5)) # returns a...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Character and word model Sentencepiece supports character and word segmentation with --model_type=char and --model_type=word flags. In word segmentation, sentencepiece just segments tokens with whitespaces, so the input text must be pre-tokenized. We can apply different segmentation algorithms transparently without...
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_char --model_type=char --vocab_size=400') sp_char = spm.SentencePieceProcessor() sp_char.load('m_char.model') print(sp_char.encode_as_pieces('this is a test.')) print(sp_char.encode_as_ids('this is a test.')) spm.SentencePieceTrainer.train('--input...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Text normalization Sentencepiece provides the following general pre-defined normalization rules. We can change the normalizer with the --normalization_rule_name=&lt;NAME&gt; flag. nmt_nfkc: NFKC normalization with some additional normalization around spaces (default). nfkc: original NFKC normalization. nmt_nfkc_cf: nmt_nf...
import sentencepiece as spm # NFKC normalization and lower casing. spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --normalization_rule_name=nfkc_cf') sp = spm.SentencePieceProcessor() sp.load('m.model') print(sp.encode_as_pieces('HELLO WORLD.')) # lower casing and normalizatio...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
The normalization is performed with user-defined string-to-string mappings and leftmost-longest matching. We can also define custom normalization rules as a TSV file. The TSV files for the pre-defined normalization rules can be found in the data directory (sample). The normalization rule is compiled into an FST and embedded...
def tocode(s):
    out = []
    for c in s:
        out.append(str(hex(ord(c))).re...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Randomizing training data Sentencepiece loads all the lines of training data into memory to train the model. However, larger training data increases the training time and memory usage, both of which are linear in the size of the training data. When --input_sentence_size=&lt;SIZE&gt; is specified, Sentencepiece randomly samples &lt;...
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --input_sentence_size=1000')
sp = spm.SentencePieceProcessor()
sp.load('m.model')
sp.encode_as_pieces('this is a test.')
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
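The random line-sampling idea can be sketched with reservoir sampling, a one-pass way to draw a fixed-size uniform sample from a stream. This is an illustration of the concept only, not SentencePiece's actual implementation:

```python
import random

def reservoir_sample(lines, size, seed=0):
    """Keep a uniform random sample of `size` lines from a stream in one pass."""
    rng = random.Random(seed)
    sample = []
    for i, line in enumerate(lines):
        if i < size:
            sample.append(line)
        else:
            # Replace an existing element with probability size/(i+1).
            j = rng.randint(0, i)
            if j < size:
                sample[j] = line
    return sample

corpus = ['sentence %d' % i for i in range(10000)]
subset = reservoir_sample(corpus, 1000)
print(len(subset))  # 1000
```

Because only `size` lines are held at once, memory stays bounded no matter how large the corpus is.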
Vocabulary restriction We can encode the text using only the tokens specified with the set_vocabulary method. The background of this feature is described in the subword-nmt page.
spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')
sp = spm.SentencePieceProcessor()
sp.load('m.model')
print(sp.encode_as_pieces('this is a test.'))

# Gets all tokens as Python list.
vocabs = [sp.id_to_piece(id) for id in range(sp.get_piece_size())]

# Aggregates the frequency ...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Extracting crossing-words pieces Sentencepiece does not extract pieces crossing multiple words (here a word means a space-delimited token). A piece will never contain the whitespace marker (_) in the middle. --split_by_whitespace=false disables this restriction and allows extracting pieces crossing multiple word...
import re

spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --split_by_whitespace=false')
sp = spm.SentencePieceProcessor()
sp.load('m.model')

# Gets all tokens as Python list.
vocabs = [sp.id_to_piece(id) for id in range(sp.get_piece_size())]

for piece in vocabs[0:500]:
    if ...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
Training sentencepiece model from the word list with frequency We can train the sentencepiece model from pairs of &lt;word, frequency&gt;. First, make a TSV file where the first column is the word and the second column is the frequency. Then, feed this TSV file with the --input_format=tsv flag. Note that when feedin...
freq = {}
with open('botchan.txt', 'r') as f:
    for line in f:
        line = line.rstrip()
        for piece in line.split():
            freq.setdefault(piece, 0)
            freq[piece] += 1

with open('word_freq_list.tsv', 'w') as f:
    for k, v in freq.items():
        f.write('%s\t%d\n' % (k, v))

import sentencepiece as spm...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
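The word-frequency counting above can also be written with the standard library's collections.Counter, which does the same aggregation in fewer lines (a stdlib variant of the loop, using a toy in-memory corpus instead of botchan.txt):

```python
from collections import Counter

# Stand-in for the lines of botchan.txt
lines = ['this is a test', 'this is fun']

freq = Counter()
for line in lines:
    freq.update(line.split())

# Same <word>\t<frequency> format expected by --input_format=tsv
tsv = '\n'.join('%s\t%d' % (w, c) for w, c in freq.most_common())
print(tsv)
```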
Getting byte offsets of tokens Sentencepiece keeps track of the byte offset (span) of each token, which is useful for highlighting tokens on top of the unnormalized text. We first need to install the protobuf module and sentencepiece_pb2.py, as the byte offsets and all other metadata for segmentation are encoded in protocol bu...
!pip install protobuf
!wget https://raw.githubusercontent.com/google/sentencepiece/master/python/sentencepiece_pb2.py

import sentencepiece_pb2
import sentencepiece as spm

spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')
sp = spm.SentencePieceProcessor()
sp.load('m.model')

# O...
python/sentencepiece_python_module_example.ipynb
google/sentencepiece
apache-2.0
We will be using GapMinder data for examples below.
df = pd.read_csv('http://www.stat.ubc.ca/~jenny/notOcto/STAT545A/'
                 'examples/gapminder/data/'
                 'gapminderDataFiveYear.txt', sep='\t')
df.head()
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
Let's start with a small example: using a bar plot to count a categorical column.
kk.bar(df,'continent')
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
Note that by default, the plot already uses a tooltip. You can hover over the plot to see the y-value. We can also plot bars by averaging GDP per capita for each continent:
kk.bar(df,'continent',y='gdpPercap',how='mean')
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
We can change x to year, and group by continent:
kk.bar(df,'year',y='gdpPercap',c='continent',how='mean')
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
We can stack and annotate the chart:
(kk.bar(df,'year',y='gdpPercap',c='continent',how='mean',stacked=True,annotate=True)
 .set_size(width=1000))
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
Next we can do the same thing with a line chart, using an area fill, annotations, and an axis-based tooltip:
p = kk.line(df,'year',y='gdpPercap',c='continent',how='mean',
            stacked=True,annotate='all',area=True)
p.set_tooltip_style(trigger='axis',axis_pointer='shadow')
p.set_size(width=1000)
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
We can also create a histogram and apply a theme to it:
p = (kk.hist(df,x='lifeExp',c='continent',stacked=True,bins=100))
p.set_tooltip_style(trigger='axis',axis_pointer='shadow')
p.set_theme('vintage')
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
Let's get a bit more advanced. We're going to create a scatter plot of the GapMinder data for 2007, using Life Expectancy, GDP per Capita, and Population as x, y, and size respectively. We also want to add information to the tooltip, and add and reposition the toolbox, legend, and title.
p = kk.scatter(df[df.year == 2007],'lifeExp','gdpPercap',s='pop',c='continent')
p.set_size(width=1000, height=500)
p.set_tooltip_format(['country','lifeExp','gdpPercap','pop','continent'])
p.set_theme('dark')
p.set_toolbox(save_format='png',restore=True,data_zoom=True)
p.set_legend(orient='vertical',x_pos='-1%',y_pos='...
notebooks/Intro.ipynb
napjon/krisk
bsd-3-clause
Bracket Indexing and Selection The simplest way to pick one or more elements of an array looks very similar to Python lists:
import numpy as np

# Creating sample array
arr = np.arange(0,11)

#Get a value at an index
arr[8]

#Get values in a range
arr[1:5]

#Get values in a range
arr[0:5]
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
Almaz-KG/MachineLearning
apache-2.0
Broadcasting NumPy arrays differ from normal Python lists because of their ability to broadcast:
#Setting a value with index range (Broadcasting)
arr[0:5] = 100

#Show
arr

# Reset array, we'll see why I had to reset in a moment
arr = np.arange(0,11)

#Show
arr

#Important notes on Slices
slice_of_arr = arr[0:6]

#Show slice
slice_of_arr

#Change Slice
slice_of_arr[:] = 99

#Show Slice again
slice_of_arr
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
Almaz-KG/MachineLearning
apache-2.0
Now note the changes also occur in our original array!
arr
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
Almaz-KG/MachineLearning
apache-2.0
Data is not copied, it's a view of the original array! This avoids memory problems!
#To get a copy, need to be explicit
arr_copy = arr.copy()

arr_copy
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
Almaz-KG/MachineLearning
apache-2.0
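A quick way to check whether two arrays share the same underlying buffer is np.shares_memory (a small sanity check added here, not part of the original notebook):

```python
import numpy as np

arr = np.arange(0, 11)
view = arr[0:6]         # a slice is a view: same buffer
copy = arr[0:6].copy()  # an explicit, independent copy

print(np.shares_memory(arr, view))  # True
print(np.shares_memory(arr, copy))  # False

view[:] = 99
print(arr[:6])  # the original changed through the view; the copy did not
```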
Indexing a 2D array (matrices) The general format is arr_2d[row][col] or arr_2d[row,col]. I recommend usually using the comma notation for clarity.
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))

#Show
arr_2d

#Indexing row
arr_2d[1]

# Format is arr_2d[row][col] or arr_2d[row,col]

# Getting individual element value
arr_2d[1][0]

# Getting individual element value
arr_2d[1,0]

# 2D array slicing

#Shape (2,2) from top right corner
arr_2d[:2,1:]

#Shape bot...
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
Almaz-KG/MachineLearning
apache-2.0
More Indexing Help Indexing a 2d matrix can be a bit confusing at first, especially when you start to add in step size. Try a Google image search for NumPy indexing to find useful images, like this one: <img src= 'http://memory.osu.edu/classes/python/_images/numpy_indexing.png' width=500/> Conditional Selection This is a ...
arr = np.arange(1,11)
arr

arr > 4

bool_arr = arr > 4
bool_arr

arr[bool_arr]

arr[arr>2]

x = 2
arr[arr>x]
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
Almaz-KG/MachineLearning
apache-2.0
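The step size mentioned above slots into the same slice notation as start and stop, i.e. arr_2d[start:stop:step, start:stop:step]. A small illustration (added here, not from the original notebook):

```python
import numpy as np

arr_2d = np.arange(50).reshape(5, 10)

# Every other row and every other column
print(arr_2d[::2, ::2])

# A negative step reverses: rows in reverse order
print(arr_2d[::-1])
```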
Polynomial regression, revisited We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
def polynomial_sframe(feature, degree):
    poly_dataset = pd.DataFrame()
    poly_dataset['power_1'] = feature
    if degree > 1:
        for power in range(2, degree + 1):
            column = 'power_' + str(power)
            poly_dataset[column] = feature**power
    features = poly_dataset.columns.values.tolist()
    ...
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
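The same feature expansion can be cross-checked with plain NumPy: np.vander builds the power columns directly. This is a sketch of the idea on a toy array, not a replacement for the function above:

```python
import numpy as np

feature = np.array([1.0, 2.0, 3.0])
degree = 3

# Columns x**1 .. x**degree; vander with increasing=True gives x**0 .. x**(N-1),
# so we drop the constant column to match power_1 .. power_degree.
powers = np.vander(feature, degree + 1, increasing=True)[:, 1:]
print(powers)
```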
Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
import matplotlib.pyplot as plt
%matplotlib inline
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
l2_small_penalty = 1.5e-5
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically sta...
poly_data, features = polynomial_sframe(sales['sqft_living'], 15)
print(poly_data['power_1'].mean())

model = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model.fit(poly_data[features], sales['price'])
print(model.coef_)
print(model.intercept_)

plt.plot(poly_data['power_1'], sales['price'], '.',
         pol...
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
QUIZ QUESTION: What's the learned value for the coefficient of feature power_1? Observe overfitting Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be v...
set_1 = pd.read_csv('wk3_kc_house_set_1_data.csv', dtype=dtype_dict)
set_2 = pd.read_csv('wk3_kc_house_set_2_data.csv', dtype=dtype_dict)
set_3 = pd.read_csv('wk3_kc_house_set_3_data.csv', dtype=dtype_dict)
set_4 = pd.read_csv('wk3_kc_house_set_4_data.csv', dtype=dtype_dict)

l2_small_penalty = 1e-9
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model. Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn...
sales_subset = set_1
poly_data, features = polynomial_sframe(sales_subset['sqft_living'], 15)

model1 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model1.fit(poly_data[features], sales_subset['price'])
print(model1.coef_)

plt.plot(poly_data['power_1'], sales_subset['price'], '.',
         poly_data['power_...
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
The four curves should differ from one another a lot, as should the coefficients you learned. QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers...
power1_coefs = [model1.coef_[0], model2.coef_[0], model3.coef_[0], model4.coef_[0]]
print(power1_coefs)
print(power1_coefs.index(min(power1_coefs)))
print(power1_coefs.index(max(power1_coefs)))
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
Ridge regression comes to the rescue Generally, whenever we see weights change so much in response to changes in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 's...
l2_large_penalty = 1.23e2

power_1_coef = []
sales_subset = set_1
poly_data, features = polynomial_sframe(sales_subset['sqft_living'], 15)
model1 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
model1.fit(poly_data[features], sales_subset['price'])
print(model1.coef_)
power_1_coef.append(model1.coef_[0])
plt....
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
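The shrinking effect of the L2 penalty can be sketched with the closed-form ridge solution w = (X'X + lambda*I)^(-1) X'y. This is a toy illustration on synthetic data; note sklearn's Ridge with normalize=True also standardizes the features first, which this sketch skips:

```python
import numpy as np

def ridge_closed_form(X, y, l2_penalty):
    """Solve (X^T X + lambda * I) w = X^T y for the ridge weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + l2_penalty * np.eye(d), X.T @ y)

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([10.0, -5.0, 2.0]) + rng.randn(50)

w_small = ridge_closed_form(X, y, 1e-9)   # essentially least squares
w_large = ridge_closed_form(X, y, 1e2)    # heavily penalized
print(w_small)
print(w_large)
```

With the larger penalty the weight vector's norm shrinks, which is exactly the stabilizing behavior the assignment relies on.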
These curves should vary a lot less, now that you applied a high degree of regularization. QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answeri...
power1_coefs = [model1.coef_[0], model2.coef_[0], model3.coef_[0], model4.coef_[0]]
print(power1_coefs)
print(power1_coefs.index(min(power1_coefs)))
print(power1_coefs.index(max(power1_coefs)))
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
Selecting an L2 penalty via cross-validation Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation ...
train_valid_shuffled = pd.read_csv('wk3_kc_house_train_valid_shuffled.csv', dtype=dtype_dict)
test = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. Segment 1 starts where the segm...
n = len(train_valid_shuffled)
k = 10  # 10-fold cross-validation

for i in range(k):
    # Use integer division so the boundaries are valid (integer) indices
    start = (n*i)//k
    end = (n*(i+1))//k - 1
    print(i, (start, end))
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
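With integer division, these boundaries tile the whole index range exactly: the segments are disjoint, consecutive, and together cover all n rows. A quick check with a toy n (an illustration, independent of the housing data):

```python
n = 19396  # illustrative row count
k = 10

segments = [((n*i)//k, (n*(i+1))//k - 1) for i in range(k)]
print(segments[:3])

# The segments are disjoint and cover all n indices
covered = sum(end - start + 1 for start, end in segments)
print(covered == n)  # True
```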
Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above. Extract the fourth segment (segment 3) and assign it to a variable called validat...
n = len(train_valid_shuffled)
i = 3
print(n)

# Integer division keeps the boundaries usable as slice indices
start = (n*i)//10
end = (n*(i+1))//10
validation4 = train_valid_shuffled[start:end+1]
print(start)
print(end)
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
print(int(round(validation4['price'].mean(), 0)))
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
isendel/machine-learning
apache-2.0
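The full cross-validation pattern, i.e. hold out each segment in turn, train on the rest, and average the validation errors, can be sketched abstractly. This is a generic outline using a constant (mean) predictor on toy data, not the assignment's solution:

```python
import numpy as np

def k_fold_indices(n, k):
    """Yield (start, end) index pairs for k roughly equal segments of range(n)."""
    for i in range(k):
        yield (n*i)//k, (n*(i+1))//k - 1

def k_fold_mse(y, k=10):
    """Average validation MSE of a mean predictor across k folds."""
    n = len(y)
    errors = []
    for start, end in k_fold_indices(n, k):
        valid = y[start:end+1]
        # Everything outside [start, end] is the training set for this fold
        train = np.concatenate([y[:start], y[end+1:]])
        errors.append(np.mean((valid - train.mean())**2))
    return np.mean(errors)

y = np.arange(100, dtype=float)
print(k_fold_mse(y))
```

In the assignment, the mean predictor would be replaced by a ridge model refit on each fold's training portion, and the penalty with the lowest average error would be selected.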