Now let's select a single column, for example -- the response column, and look at the data more closely:
y = 'eyeDetection'
data[y]
h2o-py/demos/H2O_tutorial_eeg_eyestate.ipynb
spennihana/h2o-3
apache-2.0
It looks like a binary response, but let's validate that assumption:
data[y].unique()
If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default. Therefore, we should convert the response column to a more efficient "enum" representation -- in this case it...
data[y] = data[y].asfactor()
Now we can check that there are two levels in our response column:
data[y].nlevels()
We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are:
data[y].levels()
We may want to check if there are any missing values, so let's look for NAs in our dataset. For tree-based methods like GBM and RF, H2O handles missing feature values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not...
data.isna()
data[y].isna()
The isna method doesn't directly answer the question, "Does the response column contain any NAs?"; rather, it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, summing over the whole column should produce a sum equal to 0...
data[y].isna().sum()
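The 0/1 encoding of missingness is the same convention pandas uses, so the sum-of-indicators trick can be sketched there too (a toy column, not the H2O frame from this tutorial):

```python
import pandas as pd

# A toy column with one missing value
s = pd.Series([1.0, None, 3.0])

# isna() yields 0/1 indicators, so summing them counts the missing cells
n_missing = int(s.isna().sum())
print(n_missing)  # → 1
```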
Great, no missing labels. :-) Out of curiosity, let's see if there is any missing data in this frame:
data.isna().sum()
The sum is still zero, so there are no missing values in any of the cells. The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalanced...
data[y].table()
Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below). Let's calculate the percentage that each clas...
n = data.shape[0]  # Total number of training samples
data[y].table()['Count'] / n
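The counts-divided-by-n idea can be sketched in plain pandas as well (toy labels, not the EEG data):

```python
import pandas as pd

# Toy binary response: 3 zeros and 2 ones
y = pd.Series([0, 0, 0, 1, 1])

# Fraction of each class: per-class counts divided by total rows
fractions = y.value_counts() / len(y)
print(fractions[0], fractions[1])  # → 0.6 0.4
```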
Split H2O Frame into train, validation and test sets So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, a validation set and a test set. If you want H2O to do the splitting for you, you can use the split_frame metho...
train = data[data['split'] == "train"]
train.shape
valid = data[data['split'] == "valid"]
valid.shape
test = data[data['split'] == "test"]
test.shape
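When the data does not already ship with a 'split' column, the usual alternative is a random split by ratios. A minimal numpy sketch of that idea (this is illustration only; H2O's split_frame does the equivalent on the cluster):

```python
import numpy as np

n = 100  # number of rows in a hypothetical dataset
rng = np.random.RandomState(1)

# Shuffle the row indices, then cut at 60% / 20% / 20%
idx = rng.permutation(n)
n_train, n_valid = int(0.6 * n), int(0.2 * n)
train_idx = idx[:n_train]
valid_idx = idx[n_train:n_train + n_valid]
test_idx = idx[n_train + n_valid:]
print(len(train_idx), len(valid_idx), len(test_idx))  # → 60 20 20
```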
Machine Learning in H2O We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data. Train and Test a GBM model
# Import H2O GBM:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
Specify the predictor set and response The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables. The x argument should be a list of predictor ...
x = list(train.columns)
x
del x[12:14]  # Remove the 13th and 14th columns, 'eyeDetection' and 'split'
x
Now that we have specified x and y, we can train the model:
model.train(x=x, y=y, training_frame=train, validation_frame=valid)
Model Performance on a Test Set Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as validation_frame), could have also served as a "test set." We technically have already created test set predictions and ...
perf = model.model_performance(test)
print(perf.__class__)
Individual model performance metrics can be extracted using methods like auc and mse. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
perf.auc()
perf.mse()
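AUC has a simple interpretation worth keeping in mind: it is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A self-contained numpy sketch of that pairwise definition (not H2O's implementation):

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count correctly ordered pairs, with ties worth half a win
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```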
Cross-validated Performance To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row. Unless you have a specific reason to manually assign the observations to folds, you will f...
cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',
                                       ntrees=100,
                                       max_depth=4,
                                       learn_rate=0.1,
                                       nfolds=5)
cvmodel.train(x=x, y=y, training_frame=data)
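A fold_column is nothing more than a per-row fold ID. One way to sketch a balanced 5-fold assignment with numpy (hypothetical, for illustration only):

```python
import numpy as np

n, nfolds = 10, 5
rng = np.random.RandomState(0)

# Balanced fold IDs: 0..nfolds-1 repeated to cover n rows, then shuffled
fold_id = rng.permutation(np.arange(n) % nfolds)
print(np.bincount(fold_id))  # each fold gets n / nfolds rows
```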
This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the auc method again, and you can specify train or xval as True to get the correct metric.
print(cvmodel.auc(train=True))
print(cvmodel.auc(xval=True))
An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
gs.train(x=x, y=y, training_frame=train, validation_frame=valid)
Compare Models
print(gs)

# print out the auc for all of the models
auc_table = gs.sort_by('auc(valid=True)', increasing=False)
print(auc_table)
The "best" model in terms of validation set AUC is listed first in auc_table.
best_model = h2o.get_model(auc_table['Model Id'][0])
best_model.auc()
The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
best_perf = best_model.model_performance(test)
best_perf.auc()
Simulate raw data using subject anatomy This example illustrates how to generate source estimates and simulate raw data using subject anatomy with the :class:`mne.simulation.SourceSimulator` class. Once the raw data is simulated, generated source estimates are reconstructed using dynamic statistical parametric mapping (d...
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
#         Eric Larson <larson.eric.d@gmail.com>
#         Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
#         Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
# License: BSD-3-Clause

import os.path as op

import numpy as np

import mne
from mne.data...
0.24/_downloads/eb0c29f55af0173daab811d4f4dc2f40/simulated_raw_data_using_subject_anatomy.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
In order to simulate source time courses, labels of desired active regions need to be specified for each of the 4 simulation conditions. Make a dictionary that maps conditions to activation strengths within aparc.a2009s :footcite:`DestrieuxEtAl2010` labels. In the aparc.a2009s parcellation: 'G_temp_sup-G_T_transv' is th...
activations = {
    'auditory/left':
        [('G_temp_sup-G_T_transv-lh', 30),  # label, activation (nAm)
         ('G_temp_sup-G_T_transv-rh', 60)],
    'auditory/right':
        [('G_temp_sup-G_T_transv-lh', 60),
         ('G_temp_sup-G_T_transv-rh', 30)],
    'visual/left':
        [('S_calcarine-lh', 30),
         ...
Create simulated source activity Generate source time courses for each region. In this example, we want to simulate source activity for a single condition at a time. Therefore, each evoked response will be parametrized by latency and duration.
def data_fun(times, latency, duration):
    """Generate source time courses for evoked responses,
    parametrized by latency and duration."""
    f = 15  # oscillating frequency, beta band [Hz]
    sigma = 0.375 * duration
    sinusoid = np.sin(2 * np.pi * f * (times - latency))
    gf = np.exp(- (times - ...
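The shape produced by data_fun is simply a 15 Hz sinusoid under a Gaussian envelope centered on the latency. A standalone numpy sketch of that waveform (the 1e-9 amplitude scale and 600 Hz sampling rate here are assumptions for illustration, not values taken from this example):

```python
import numpy as np

def evoked_waveform(times, latency, duration, amp=1e-9):
    f = 15.0                  # oscillating frequency, beta band [Hz]
    sigma = 0.375 * duration  # width of the Gaussian envelope
    sinusoid = np.sin(2 * np.pi * f * (times - latency))
    envelope = np.exp(-(times - latency) ** 2 / (2 * sigma ** 2))
    return amp * sinusoid * envelope

times = np.arange(150) / 600.0  # 150 samples at an assumed 600 Hz rate
wave = evoked_waveform(times, latency=0.115, duration=0.03)
```

The envelope confines the oscillation to a short burst around the latency, which is what makes the response "evoked" rather than a continuous oscillation.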
Here, :class:`~mne.simulation.SourceSimulator` is used, which allows us to specify where (label), what (source_time_series), and when (events) each event type will occur. We will add data for 4 areas, each of which contains 2 labels. Since the add_data method accepts one label per call, it will be called twice per area. Evoked respo...
times = np.arange(150, dtype=np.float64) / info['sfreq']
duration = 0.03
rng = np.random.RandomState(7)
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)

for region_id, region_name in enumerate(region_names, 1):
    events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
    for i in range(2):...
Simulate raw data Project the source time series to sensor space. Three types of noise will be added to the simulated raw data: multivariate Gaussian noise obtained from the noise covariance of the sample data, blink (EOG) noise, and ECG noise. The :class:`~mne.simulation.SourceSimulator` can be given directly to the :fun...
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd)
raw_sim.set_eeg_reference(projection=True)

mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)

# Plot original and simulated raw data....
Extract epochs and compute evoked responses
epochs = mne.Epochs(raw_sim, events, event_id, tmin=-0.2, tmax=0.3,
                    baseline=(None, 0))
evoked_aud_left = epochs['auditory/left'].average()
evoked_vis_right = epochs['visual/right'].average()

# Visualize the evoked data
evoked_aud_left.plot(spatial_colors=True)
evoked_vis_right.plot(spatial_colors=...
Reconstruct simulated source time courses using dSPM inverse operator Here, source time courses for auditory and visual areas are reconstructed separately and their difference is shown. This was done merely for better visual representation of source reconstruction. As expected, when high activations appear in primary a...
method, lambda2 = 'dSPM', 1. / 9.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
    evoked_aud_left, inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
    evoked_vis_right, inv, lambda2, method)
stc_diff = stc_aud - stc_vis

brain = stc...
Load Data We now load all the financial data we will be using.
# Define the ticker-names for the stocks we consider.
ticker_SP500 = "S&P 500"
ticker_SP400 = "S&P 400"
ticker_SP600 = "S&P 600"

# All tickers for the stocks.
tickers = [ticker_SP500, ticker_SP400, ticker_SP600]

# Define longer names for the stocks.
name_SP500 = "S&P 500 (Large Cap)"
name_SP400 = "S&P 400 (Mid Cap)"
timeseries-analysis-python/src/main/python/FinanceOps/02_Comparing_Stock_Indices.ipynb
leonarduk/stockmarketview
apache-2.0
Compare Total Returns The first plot shows the so-called Total Return of the stock indices, which is the investor's return when dividends are reinvested in the same stock index and taxes are ignored.
def plot_total_returns(dfs, names, start_date=None, end_date=None):
    """
    Plot and compare the Total Returns for the given DataFrames.

    :param dfs: List of Pandas DataFrames with TOTAL_RETURN data.
    :param names: Names of the stock indices.
    :param start_date: Plot from this date.
    :param end_dat...
This plot clearly shows that the S&P 400 (Mid-Cap) had a much higher Total Return than the S&P 500 (Large-Cap) and S&P 600 (Small-Cap), and the S&P 500 performed slightly worse than the S&P 600. But this period was nearly 30 years. What if we consider shorter investment periods with different start and end-dates? We ne...
def calc_ann_returns(df, start_date, end_date, num_years):
    """
    Calculate the annualized returns for the Total Return
    of the given DataFrame.

    A list is returned so that ann_ret[0] is a Pandas Series
    with the ann.returns for 1-year periods, and ann_ret[1]
    are the ann.returns for 2-year period...
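The annualization underneath these lists is the standard geometric formula: the total growth factor raised to the power 1/years, minus one. A minimal sketch (the index values here are made up):

```python
def ann_return(start_value, end_value, years):
    """Annualized return of a Total Return index over `years` years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# An index that doubles over 10 years earns about 7.2% per year
r = ann_return(100.0, 200.0, years=10)
print(round(r, 4))  # → 0.0718
```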
Examples of Annualized Returns The lists we have created above contain the annualized returns for the stock indices as well as US Government Bonds and the US CPI inflation index. Let us show the annualized returns of the S&P 500 for all 1-year periods. This is itself a time-series. It shows that the return was about 0....
ann_ret_SP500[0].head(10)
We can also show the summary statistics for the annualized returns of all 1-year periods of the S&P 500. Note that a mean of about 0.113 means an average 1-year return of 11.3%.
ann_ret_SP500[0].describe()
We can also show the annualized returns of the S&P 500 for all 10-year periods. This shows that for the 10-year period beginning 3 January 1989 the annualized return was about 19.3%, and for the period beginning 4 January 1989 it was about 19.1%.
ann_ret_SP500[9].head(10)
These are the summary statistics for all 10-year periods of the S&P 500, which show that it returned about 8.2% per year on average, for all 10-year periods between 1989 and 2018.
ann_ret_SP500[9].describe()
For US Government Bonds we only consider bonds with 1-year maturity, so for multi-year periods we assume the return is reinvested in new 1-year bonds. Reinvesting in gov. bonds gave an average return of about 5.7% for all 10-year periods between 1962 and 2018.
ann_ret_bond[9].describe()
Examples of Good and Bad Periods Using the annualized returns we have just calculated, we can now easily find investment periods where one stock index was better than another.
def plot_better(df1, df2, ann_ret1, ann_ret2, name1, name2, years):
    """
    Plot the Total Return for a period of the given number of years
    where the return on stock 1 > stock 2. If this does not exist,
    then plot for the period where the return of stock 1 was
    closest to that of stock 2....
First we show a 3-year period where the S&P 500 was better than the S&P 400.
plot_better(df1=df_SP500, df2=df_SP400,
            ann_ret1=ann_ret_SP500, ann_ret2=ann_ret_SP400,
            name1=name_SP500, name2=name_SP400, years=3)
Then we show a 3-year period where the S&P 400 was better than the S&P 500.
plot_better(df1=df_SP400, df2=df_SP500,
            ann_ret1=ann_ret_SP400, ann_ret2=ann_ret_SP500,
            name1=name_SP400, name2=name_SP500, years=3)
Then we show a 3-year period where the S&P 600 was better than the S&P 400.
plot_better(df1=df_SP600, df2=df_SP400,
            ann_ret1=ann_ret_SP600, ann_ret2=ann_ret_SP400,
            name1=name_SP600, name2=name_SP400, years=3)
Then we show a 3-year period where the S&P 400 was better than the S&P 600.
plot_better(df1=df_SP400, df2=df_SP600,
            ann_ret1=ann_ret_SP400, ann_ret2=ann_ret_SP600,
            name1=name_SP400, name2=name_SP600, years=3)
Statistics for Annualized Returns We can also print summary statistics for the annualized returns.
def print_return_stats():
    """
    Print basic statistics for the annualized returns.
    """
    # For each period-duration.
    for i in range(num_years):
        years = i + 1
        print(years, "Year Investment Periods:")

        # Create a new DataFrame.
        df = pd.DataFrame()

        # Add th...
When we print the summary statistics for the stock indices, we see that for 1-year investment periods the S&P 500 returned about 11.3% on average, while the S&P 400 returned about 14.0%, and the S&P 600 returned about 12.4%. For longer investment periods the average returns decrease. For 10-year investment periods the ...
print_return_stats()
Probability of Loss Another useful statistic is the historical probability of loss for different investment periods.
def prob_loss(ann_ret):
    """
    Calculate the probability of negative ann.returns (losses).
    """
    # Remove rows with NA.
    ann_ret = ann_ret.dropna()

    # Calculate the probability using a boolean mask.
    mask = (ann_ret < 0.0)
    prob = np.sum(mask) / len(mask)

    return prob


def print_prob_lo...
This shows the probability of loss for the stock-indices for investment periods between 1 and 10 years. For example, the S&P 500 had a loss in about 17.8% of all 1-year investment periods, while the S&P 400 had a loss in about 18.1% of all 1-year periods, and the S&P 600 had a loss in about 22.3% of all 1-year periods....
print_prob_loss()
Compared to Inflation It is also useful to consider the probability of a stock index performing better than inflation.
def prob_better(ann_ret1, ann_ret2):
    """
    Calculate the probability that the ann.returns of stock 1
    were better than the ann.returns of stock 2.
    This does not assume the index-dates are identical.

    :param ann_ret1: Pandas Series with ann.returns for stock 1.
    :param ann_ret2: Pandas Series with a...
This shows the probability of each stock index having a higher return than inflation for investment periods between 1 and 10 years. All taxes are ignored. For example, both the S&P 500 and S&P 400 had a higher return than inflation in about 79% of all 1-year investment periods, while the S&P 600 only exceeded inflation...
print_prob_better_than_inflation()
Compared to Bonds It is also useful to compare the returns of the stock indices to risk-free government bonds.
def print_prob_better_than_bonds():
    """
    Print the probability of the stocks performing better than
    US Gov. Bonds for increasing investment periods.
    """
    # Create a new DataFrame.
    df = pd.DataFrame()

    # Add a column with the probabilities for the S&P 500.
    name = ticker_SP500 + " > Bon...
This shows the probability of each stock index having a higher return than risk-free government bonds, for investment periods between 1 and 10 years. We consider annual reinvestment in bonds with 1-year maturity. All taxes are ignored. For example, the S&P 500 returned more than government bonds in about 79% of all 1-y...
print_prob_better_than_bonds()
Compared to Other Stock Indices Now we will compare the stock indices directly against each other.
def print_prob_better():
    """
    Print the probability of one stock index performing better
    than another stock index for increasing investment periods.
    """
    # Create a new DataFrame.
    df = pd.DataFrame()

    # Add a column with the probabilities for S&P 500 > S&P 400.
    name = ticker_SP500 + "...
This shows the probability of one stock index performing better than another for investment periods between 1 and 10 years. All taxes are ignored. For example, the S&P 500 (Large-Cap) performed better than the S&P 400 (Mid-Cap) in about 42% of all 1-year periods. Similarly, the S&P 500 performed better than the S&P 600...
print_prob_better()
Correlation It is also useful to consider the statistical correlation between the returns of stock indices.
def print_correlation():
    """
    Print the correlation between the stock indices
    for increasing investment periods.
    """
    # Create a new DataFrame.
    df = pd.DataFrame()

    # Add a column with the correlations for S&P 500 vs. S&P 400.
    name = ticker_SP500 + " vs. " + ticker_SP400
    df[name] ...
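The correlation in question is the ordinary Pearson coefficient between two return series, which pandas computes directly (toy numbers here, not the index data):

```python
import pandas as pd

# Toy annualized-return series for two hypothetical indices
a = pd.Series([0.10, 0.05, -0.02, 0.08, 0.01])
b = pd.Series([0.12, 0.03, -0.01, 0.09, 0.02])

# Series.corr uses Pearson correlation by default
print(round(a.corr(b), 2))  # → 0.96
```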
This shows the correlation coefficient (Pearson) between the returns on the stock indices for investment periods between 1 and 10 years. For example, the correlation was about 0.88 between the S&P 500 and S&P 400 for all 1-year investment periods, while it was only 0.77 for the S&P 500 and S&P 600, and 0.92 for the S&P...
print_correlation()
Recovery Times It is also useful to consider how quickly the stock indices typically recover from losses.
def print_recovery_days():
    """
    Print the probability of the stocks recovering from losses
    for increasing number of days.
    """
    # Print the probability for these days.
    num_days = [7, 30, 90, 180, 365, 2*365, 5*365]

    # Create a new DataFrame.
    df = pd.DataFrame()

    # Add a column ...
This shows the probability that each stock index has recovered from losses within a given number of days. For example, all three stock indices recovered from about 80-83% of all losses within just a week. The probability goes up for longer investment periods. For example, for 5-year investment periods the S&P 500 had r...
print_recovery_days()
The above command is only needed if you are plotting in a Jupyter notebook. We now construct some data:
import numpy

x = numpy.linspace(0, 1)
y1 = numpy.sin(numpy.pi * x) + 0.1 * numpy.random.rand(50)
y2 = numpy.cos(3.0 * numpy.pi * x) + 0.2 * numpy.random.rand(50)
03-plotting-data.ipynb
IanHawke/orcomp-training
mit
And then produce a line plot:
from matplotlib import pyplot

pyplot.plot(x, y1)
pyplot.show()
We can add labels and titles:
pyplot.plot(x, y1)
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.title('A single line plot')
pyplot.show()
We can change the plotting style, and use LaTeX style notation where needed:
pyplot.plot(x, y1, linestyle='--', color='black', linewidth=3)
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title(r'A single line plot, roughly $\sin(\pi x)$')
pyplot.show()
We can plot two lines at once, and add a legend, which we can position:
pyplot.plot(x, y1, label=r'$y_1$')
pyplot.plot(x, y2, label=r'$y_2$')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('Two line plots')
pyplot.legend(loc='lower left')
pyplot.show()
We would probably prefer to use subplots. At this point we have to leave the simple interface, and start building the plot using its individual components, figures and axes, which are objects to manipulate:
fig, axes = pyplot.subplots(nrows=1, ncols=2, figsize=(10, 6))

axis1 = axes[0]
axis1.plot(x, y1)
axis1.set_xlabel(r'$x$')
axis1.set_ylabel(r'$y_1$')

axis2 = axes[1]
axis2.plot(x, y2)
axis2.set_xlabel(r'$x$')
axis2.set_ylabel(r'$y_2$')

fig.tight_layout()
pyplot.show()
The axes variable contains all of the separate axes that you may want. This makes it easy to construct many subplots using a loop:
data = []
for nx in range(2, 5):
    for ny in range(2, 5):
        data.append(numpy.sin(nx * numpy.pi * x) + numpy.cos(ny * numpy.pi * x))

fig, axes = pyplot.subplots(nrows=3, ncols=3, figsize=(10, 10))
for nrow in range(3):
    for ncol in range(3):
        ndata = ncol + 3 * nrow
        axes[nrow, ncol].plot(x, data...
Matplotlib will allow you to generate and place axes pretty much wherever you like, to use logarithmic scales, to do different types of plot, and so on. Check the examples and gallery for details. Exercise The logistic map builds a sequence of numbers $\{ x_n \}$ using the relation $$ x_{n+1} = r x_n \left( 1 - x_n \ri...
def logistic(x0, r, N=1000):
    sequence = [x0]
    xn = x0
    for n in range(N):
        xnew = r * xn * (1.0 - xn)
        sequence.append(xnew)
        xn = xnew
    return sequence

x0 = 0.5
N = 2000
sequence1 = logistic(x0, 1.5, N)
sequence2 = logistic(x0, 3.5, N)
pyplot.plot(sequence1[-100:], 'b-', label=r'$r=1.5...
This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.
# This is the "best" way of doing it, but we may not have much numpy yet
# r_values = numpy.arange(1.0, 4.0, 0.01)

# This way only uses lists
r_values = []
for i in range(302):
    r_values.append(1.0 + 0.01 * i)

x0 = 0.5
N = 2000
for r in r_values:
    sequence = logistic(x0, r, N)
    pyplot.plot(r * numpy.ones_like(se...
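The fixed point observed for $r=1.5$ can be checked analytically: solving $x = r x (1 - x)$ gives $x^* = 1 - 1/r$, which is $1/3$ for $r = 1.5$. A quick sketch confirming that the iteration converges there:

```python
def logistic_limit(x0, r, N=1000):
    # Iterate the logistic map N times and return the final value
    xn = x0
    for _ in range(N):
        xn = r * xn * (1.0 - xn)
    return xn

x_star = logistic_limit(0.5, 1.5)
print(abs(x_star - (1.0 - 1.0 / 1.5)) < 1e-9)  # → True
```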
Line Follower Example
%vrep_robot_methods PioneerP3DXL

%%vrepsim '../scenes/LineFollowerPioneer.ttt' PioneerP3DXL
# black color : 43
# white-gray color : -53
import time

while True:
    lclr = robot.color_left()
    rclr = robot.color_right()
    if lclr > 10:
        robot.rotate_left(0.3)
    if rclr > 10:
        robot.rotate_righ...
robotVM/notebooks/Demo - vrep magic.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
Widget Demo The following demo shows how to use a couple of text widgets that are updated from the robot control script.
import time

def line_follow(pioneer):
    lclr = pioneer.color_left()
    rclr = pioneer.color_right()
    if lclr > 10:
        pioneer.rotate_left(0.3)
    if rclr > 10:
        pioneer.rotate_right(0.3)
    if lclr < -20 and rclr < -20:
        pioneer.move_forward(1.5)
    time.sleep(0.001)
    sensorText1.description...
Returning data
%matplotlib inline
import pandas as pd

df = pd.DataFrame(columns=['Time', 'Left sensor'])

# If we want to set df directly within the evaluated code in the vrepsim block
# we need to specify it in that block using: global df
# However, objects are mutable in that scope, so pass the dataframe that way
data = {'df': df}

%%vreps...
Pandas provides a number of read_* options, including read_csv, which we will use here. One important note about read_csv in particular is that there are over 50 possible arguments to it. This allows for intensely flexible specification of how to read data in, how to parse it, and very detailed control over things lik...
df = pd.read_csv("data-readonly/IL_Building_Inventory.csv")
week07/examples_week07.ipynb
UIUC-iSchool-DataViz/fall2017
bsd-3-clause
One of the first things we can do is examine the columns that the dataframe has identified.
df.columns
df.head()
df.tail()
df.describe()
df.dtypes
df.groupby(["Agency Name"])["Square Footage"].sum()
df["Agency Name"].value_counts()
df.describe()
df["Total Floors"].median()
df.median()
df.quantile([0.1, 0.2, 0.9])
df["Agency Name"].apply(lambda a: a.upper()).head()
df["Agency Name"].apply(lambda a:...
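The groupby-sum pattern from the cell above, sketched on a tiny invented frame so the result is easy to see:

```python
import pandas as pd

df = pd.DataFrame({"Agency Name": ["DNR", "DNR", "DOC"],
                   "Square Footage": [1200, 800, 5000]})

# Total square footage per agency
totals = df.groupby(["Agency Name"])["Square Footage"].sum()
print(totals.to_dict())  # → {'DOC': 5000, 'DNR': 2000}
```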
Now let's take a look at the header.
print(h)
notebooks/Accessing the image's meta-data.ipynb
loli/medpy
gpl-3.0
That is quite a lot of information, and the header appears to be of class 'nibabel.nifti1.Nifti1Image'. The reason behind this is that MedPy relies on third-party libraries to save and load images. To keep compatibility high and the maintenance requirements at a minimum, MedPy does not possess a dedicated header object...
from medpy.io import header

print(header.get_pixel_spacing(h))
And correspondingly for the offset.
print(header.get_offset(h))
Both of these values can also be set,
header.set_pixel_spacing(h, (0.8, 1.2))
print(header.get_pixel_spacing(h))
Saving the array with the modified header, the new meta-data are stored alongside the image.
from medpy.io import save

save(i, "flair_distorted.nii.gz", h, force=True)
j, hj = load("flair_distorted.nii.gz")
# Check the spacing of the freshly loaded header
print(header.get_pixel_spacing(hj))
Further meta-data from the headers is largely incompatible between formats. If you require access to additional header attributes, you can do this by querying the image header object directly. In the above case of a NiBabel class, you can, for example, query the infamous 'qform_code' of the NIfTI format.
print(h.header['qform_code'])
notebooks/Accessing the image's meta-data.ipynb
loli/medpy
gpl-3.0
Exercises On the model dataset: Plot the scatter matrix of the wind speed and direction for the first thousand records. Same scatter matrix for the 1000 records with the highest speed, sorted. Histogram of the wind speed with 36 bins. Time series of the mean speed, with...
pd.plotting.scatter_matrix(model.loc[model.index[:1000], 'M(m/s)':'D(deg)'])
notebooks/051-Pandas-Ejercicios.ipynb
CAChemE/curso-python-datos
bsd-3-clause
Same scatter matrix for the 1000 records with the highest speed:
pd.plotting.scatter_matrix(
    model.loc[model.sort_values('M(m/s)', ascending=False).index[:1000], 'M(m/s)':'D(deg)']
)

model.loc[:, 'M(m/s)'].plot.hist(bins=np.arange(0, 35))

model['month'] = model.index.month
model['year'] = model.index.year
notebooks/051-Pandas-Ejercicios.ipynb
CAChemE/curso-python-datos
bsd-3-clause
Time series of the mean speed:
model.groupby(by=['year', 'month']).mean().head(24)
model.groupby(by=['year', 'month']).mean().plot(y='M(m/s)', figsize=(15, 5))
notebooks/051-Pandas-Ejercicios.ipynb
CAChemE/curso-python-datos
bsd-3-clause
Moving average of the data grouped by month and year:
monthly = model.groupby(by=['year', 'month']).mean()
monthly['ma'] = monthly.loc[:, 'M(m/s)'].rolling(5, center=True).mean()
monthly.head()
monthly.loc[:, ['M(m/s)', 'ma']].plot(figsize=(15, 6))
monthly.loc[:, 'M(m/s)'].reset_index().pivot(index='year', columns='month')
monthly.loc[:, 'M(m/s)'].reset_index().pivot( ...
notebooks/051-Pandas-Ejercicios.ipynb
CAChemE/curso-python-datos
bsd-3-clause
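The centered moving average can be sketched on its own with synthetic data (the wind-speed values below are random stand-ins for the grouped monthly means):

```python
import numpy as np
import pandas as pd

# Synthetic monthly means standing in for the grouped wind-speed data.
rng = np.random.default_rng(0)
monthly = pd.DataFrame({"M(m/s)": rng.uniform(4, 10, size=24)})

# Centered 5-point moving average, as in the cell above; the first and
# last two rows have no full window and come out as NaN.
monthly["ma"] = monthly["M(m/s)"].rolling(5, center=True).mean()
print(monthly.head())
```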
List the Stackdriver groups Load the Stackdriver groups in the default project, and get the dataframe containing all the information.
from google.datalab.stackdriver import monitoring as gcm

groups_dataframe = gcm.Groups().as_dataframe()

# Sort the dataframe by the group name, and reset the index.
groups_dataframe = groups_dataframe.sort_values(by='Group name').reset_index(drop=True)
groups_dataframe.head(5)
tutorials/Stackdriver Monitoring/Group metrics.ipynb
googledatalab/notebooks
apache-2.0
Extract the first group Now we initialize first_group_id from the list of Stackdriver groups. Please note: If you don't have any groups so far, please create one via the Stackdriver dashboard. Further, if the first group does not contain any GCE instances, please explicitly set first_group_id to the ID of a group that...
import sys

if groups_dataframe.empty:
    sys.stderr.write('This project has no Stackdriver groups. The remaining notebook '
                     'will raise errors!')
else:
    first_group_id = groups_dataframe['Group ID'][0]
    print('First group ID: %s' % first_group_id)
tutorials/Stackdriver Monitoring/Group metrics.ipynb
googledatalab/notebooks
apache-2.0
Load the CPU metric data for the instances in a given group Load the CPU Utilization for the last 2 hours for the group with the ID first_group_id. The time series is further aggregated as follows: * The data is aligned to 5 minute intervals using the 'ALIGN_MEAN' method. * The data per zone and instance_name pair is combined...
# Initialize the query for the CPU Utilization metric over the last 2 hours.
query_group = gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=2)
# Filter the instances to the members of the first group.
query_group = query_group.select_group(first_group_id)
# Aggregate the time series.
query_group = q...
tutorials/Stackdriver Monitoring/Group metrics.ipynb
googledatalab/notebooks
apache-2.0
Plot the mean of the CPU Utilization per zone
cpu_group_dataframe_per_zone = cpu_group_dataframe.groupby(level=0, axis=1).mean()
_ = cpu_group_dataframe_per_zone.plot().legend(loc='center left', bbox_to_anchor=(1.0, 0.8))
tutorials/Stackdriver Monitoring/Group metrics.ipynb
googledatalab/notebooks
apache-2.0
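The per-zone averaging relies on the dataframe having MultiIndex columns of (zone, instance_name). A minimal sketch with hypothetical zone and instance names shows the same grouping (spelled via a transpose, which is equivalent to `groupby(level=0, axis=1).mean()` but avoids the `axis=1` groupby deprecation in recent pandas):

```python
import pandas as pd

# Hypothetical frame shaped like cpu_group_dataframe: MultiIndex columns
# of (zone, instance_name), one CPU-utilization series per instance.
cols = pd.MultiIndex.from_tuples(
    [("us-central1-a", "vm-1"), ("us-central1-a", "vm-2"), ("europe-west1-b", "vm-3")],
    names=["zone", "instance_name"],
)
cpu = pd.DataFrame([[0.2, 0.4, 0.6],
                    [0.3, 0.5, 0.7]], columns=cols)

# Mean per zone: group the columns by their first level.
per_zone = cpu.T.groupby(level="zone").mean().T
print(per_zone)
```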
Plot the CPU Utilization of instances Now we plot the chart at the instance level, with the instances of each zone displayed in a separate chart.
# Find all unique zones and sort them.
all_zones = sorted(set(cpu_group_dataframe.columns.get_level_values('zone')))
# Find the global min and max so we can set the same range for all y-axes.
min_cpu = cpu_group_dataframe.min().min()
max_cpu = cpu_group_dataframe.max().max()
for zone in all_zones:
    zone_plot = cpu_g...
tutorials/Stackdriver Monitoring/Group metrics.ipynb
googledatalab/notebooks
apache-2.0
Make sure you have pycocotools installed
!pip install pycocotools
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Get tensorflow/models or cd to parent directory of the repository.
import os
import pathlib

if "models" in pathlib.Path.cwd().parts:
    while "models" in pathlib.Path.cwd().parts:
        os.chdir('..')
elif not pathlib.Path('models').exists():
    !git clone --depth 1 https://github.com/tensorflow/models
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Compile protobufs and install the object_detection package
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.

%%bash
cd models/research
pip install .
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Imports
import numpy as np
import os
import six
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import pathlib
import json
import datetime
import matplotlib.pyplot as plt
from collections import defaultdict
from io import StringIO
from PIL...
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Import the object detection module.
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_utils
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Patches:
# Patch tf1 into `utils.ops`.
utils_ops.tf = tf.compat.v1
# Patch the location of gfile.
tf.gfile = tf.io.gfile
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Model preparation Loader
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)
    model_dir = pathlib.Path(model_dir) / "saved_model"
    model = tf.saved_...
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Loading label map Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to zebra. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
# List of the strings that are used to add the correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/snapshot_serengeti_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=False)
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
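A category_index is just a dictionary mapping class ids to category dicts. This hand-built stand-in (with hypothetical labels, not the real Snapshot Serengeti label map) works the same way as the one produced by label_map_util above:

```python
# Hypothetical category_index: class id -> {'id': ..., 'name': ...}.
category_index = {
    1: {"id": 1, "name": "zebra"},
    2: {"id": 2, "name": "wildebeest"},
}

def class_name(class_id):
    # Fall back to 'unknown' for ids outside the label map.
    return category_index.get(class_id, {"name": "unknown"})["name"]

print(class_name(1))
```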
We will test on a context group of images from one month at one camera from the Snapshot Serengeti val split defined on LILA.science, which was not seen during model training:
# If you want to test the code with your own images, just add their paths to
# TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images/snapshot_serengeti')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpeg")))
TEST_IMAGE_PATHS
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Load the metadata for each image
test_data_json = 'models/research/object_detection/test_images/snapshot_serengeti/context_rcnn_demo_metadata.json'
with open(test_data_json, 'r') as f:
    test_metadata = json.load(f)

image_id_to_datetime = {im['id']: im['date_captured'] for im in test_metadata['images']}
image_path_to_id = {im['file_name']: im['id'] ...
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Generate Context Features for each image
faster_rcnn_model_name = 'faster_rcnn_resnet101_snapshot_serengeti_2020_06_10'
faster_rcnn_model = load_model(faster_rcnn_model_name)
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Check the model's input signature; it expects a batch of 3-channel images of type uint8.
faster_rcnn_model.inputs
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
And it returns several outputs. Note this model has been exported with an additional output, 'detection_features', which will be used to build the contextual memory bank.
faster_rcnn_model.output_dtypes
faster_rcnn_model.output_shapes
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Add a wrapper function to call the model, and cleanup the outputs:
def run_inference_for_single_image(model, image):
    '''Run single image through tensorflow object detection saved_model.

    This function runs a saved_model on a (single) provided image and
    returns inference results in numpy arrays.

    Args:
        model: tensorflow saved_model. This model can be obtained using e...
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
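The "cleanup" part of the wrapper can be sketched in isolation. This is not the tutorial's exact implementation; it is a pure-numpy sketch of the common pattern: strip the batch dimension, keep only the valid detections, and cast the class ids to integers:

```python
import numpy as np

def cleanup_outputs(output_dict):
    """Sketch of saved_model output post-processing (assumes numpy inputs
    rather than tf tensors): drop the batch dim, trim to num_detections,
    and make the class ids integers."""
    num_detections = int(output_dict.pop("num_detections")[0])
    cleaned = {k: v[0, :num_detections] for k, v in output_dict.items()}
    cleaned["num_detections"] = num_detections
    cleaned["detection_classes"] = cleaned["detection_classes"].astype(np.int64)
    return cleaned

# Fake batched model output with 2 valid detections out of 3 slots.
raw = {
    "num_detections": np.array([2.0]),
    "detection_classes": np.array([[1.0, 2.0, 0.0]]),
    "detection_scores": np.array([[0.9, 0.8, 0.0]]),
}
out = cleanup_outputs(raw)
print(out["detection_classes"])
```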
Functions for embedding context features
def embed_date_captured(date_captured):
    """Encodes the datetime of the image.

    Takes a datetime object and encodes it into a normalized embedding of
    shape [5], using hard-coded normalization factors for year, month, day,
    hour, minute.

    Args:
        date_captured: A datetime object.

    Returns:
        A numpy float...
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
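A complete minimal sketch of such an embedding is below. The year range (1990-2030) is an assumed normalization constant for illustration, not necessarily the one used in the tutorial:

```python
import datetime
import numpy as np

def embed_date_captured(date_captured):
    """Sketch of the datetime embedding described above: normalize year,
    month, day, hour, minute into [0, 1]. The year bounds are assumed."""
    min_year, max_year = 1990.0, 2030.0
    return np.asarray([
        (date_captured.year - min_year) / (max_year - min_year),
        date_captured.month / 12.0,
        date_captured.day / 31.0,
        date_captured.hour / 24.0,
        date_captured.minute / 60.0,
    ], dtype=np.float32)

emb = embed_date_captured(datetime.datetime(2010, 6, 15, 12, 30))
print(emb)
```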