Modifying Raw objects .. sidebar:: len(raw) Although the :class:`~mne.io.Raw` object stores data samples internally in a :class:`NumPy array <numpy.ndarray>` of shape (n_channels, n_timepoints), the :class:`~mne.io.Raw` object behaves differently from :class:`NumPy arrays <numpy.ndarray>` with respect to ...
eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)
print(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))
Source: stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb (repo: mne-tools/mne-tools.github.io, license: bsd-3-clause)
Similar to the :meth:`~mne.io.Raw.pick_types` method, there is also the :meth:`~mne.io.Raw.pick_channels` method to pick channels by name, and a corresponding :meth:`~mne.io.Raw.drop_channels` method to remove channels by name:
raw_temp = raw.copy()
print('Number of channels in raw_temp:')
print(len(raw_temp.ch_names), end=' → drop two → ')
raw_temp.drop_channels(['EEG 037', 'EEG 059'])
print(len(raw_temp.ch_names), end=' → pick three → ')
raw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])
print(len(raw_temp.ch_names))
If you want the channels in a specific order (e.g., for plotting), :meth:`~mne.io.Raw.reorder_channels` works just like :meth:`~mne.io.Raw.pick_channels` but also reorders the channels; for example, here we pick the EOG and frontal EEG channels, putting the EOG first and the EEG in reverse order:
channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']
eog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)
print(eog_and_frontal_eeg.ch_names)
Changing channel name and type .. sidebar:: Long channel names Due to limitations in the :file:`.fif` file format (which MNE-Python uses to save :class:`~mne.io.Raw` objects), channel names are limited to a maximum of 15 characters. You may have noticed that the EEG channel names in the sample data are numbered rather...
raw.rename_channels({'EOG 061': 'blink detector'})
This next example replaces spaces in the channel names with underscores, using a Python dict comprehension:
print(raw.ch_names[-3:])
channel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names}
raw.rename_channels(channel_renaming_dict)
print(raw.ch_names[-3:])
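The comprehension pattern itself is plain Python and easy to try without MNE; a minimal sketch (the channel names below are made up for illustration):

```python
# Build a renaming map with a dict comprehension: keys are the old
# names, values are the new names. Channel names are hypothetical.
old_names = ['EEG 053', 'EOG 061', 'STI 014']
renaming = {name: name.replace(' ', '_') for name in old_names}
print(renaming)  # {'EEG 053': 'EEG_053', 'EOG 061': 'EOG_061', 'STI 014': 'STI_014'}
```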
If for some reason the channel types in your :class:`~mne.io.Raw` object are inaccurate, you can change the type of any channel with the :meth:`~mne.io.Raw.set_channel_types` method. The method takes a :class:`dictionary <dict>` mapping channel names to types; allowed types are ecg, eeg, emg, eog, exci, ias, misc, resp...
raw.set_channel_types({'EEG_001': 'eog'})
print(raw.copy().pick_types(meg=False, eog=True).ch_names)
Selection in the time domain If you want to limit the time domain of a :class:`~mne.io.Raw` object, you can use the :meth:`~mne.io.Raw.crop` method, which modifies the :class:`~mne.io.Raw` object in place (we've seen this already at the start of this tutorial, when we cropped the :class:`~mne.io.Raw` object to 60 seconds to re...
raw_selection = raw.copy().crop(tmin=10, tmax=12.5)
print(raw_selection)
:meth:`~mne.io.Raw.crop` also modifies the :attr:`~mne.io.Raw.first_samp` and :attr:`~mne.io.Raw.times` attributes, so that the first sample of the cropped object now corresponds to time = 0. Accordingly, if you wanted to re-crop raw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above) then the subsequent call ...
print(raw_selection.times.min(), raw_selection.times.max())
raw_selection.crop(tmin=1)
print(raw_selection.times.min(), raw_selection.times.max())
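The time-axis bookkeeping described above can be sketched with plain NumPy (an illustration of the behaviour, not MNE's actual implementation; the sampling rate is an assumed value):

```python
import numpy as np

sfreq = 100.0                              # assumed sampling rate, Hz
times = np.arange(5 * int(sfreq)) / sfreq  # 0.00 .. 4.99 s
cropped = times[times >= 1.0] - 1.0        # crop at tmin=1: clock restarts at 0
print(cropped[0], round(cropped[-1], 2))   # 0.0 3.99
```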
Remember that sample times don't always align exactly with requested tmin or tmax values (due to sampling), which is why the max values of the cropped files don't exactly match the requested tmax (see time-as-index for further details). If you need to select discontinuous spans of a :class:`~mne.io.Raw` object — or combi...
raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1)  # 0.1 seconds
raw_selection2 = raw.copy().crop(tmin=40, tmax=41.1)  # 1.1 seconds
raw_selection3 = raw.copy().crop(tmin=50, tmax=51.3)  # 1.3 seconds
raw_selection1.append([raw_selection2, raw_selection3])  # 2.5 seconds total
print(raw_selection1.times.min...
<div class="alert alert-danger"><h4>Warning</h4><p>Be careful when concatenating :class:`~mne.io.Raw` objects from different recordings, especially when saving: :meth:`~mne.io.Raw.append` only preserves the ``info`` attribute of the initial :class:`~mne.io.Raw` object (the one outside the :meth:`~mne.io.Raw...
sampling_freq = raw.info['sfreq']
start_stop_seconds = np.array([11, 13])
start_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int)
channel_index = 0
raw_selection = raw[channel_index, start_sample:stop_sample]
print(raw_selection)
You can see that it contains 2 arrays. This combination of data and times makes it easy to plot selections of raw data (although note that we're transposing the data array so that each channel is a column instead of a row, to match what matplotlib expects when plotting 2-dimensional y against 1-dimensional x):
x = raw_selection[1]
y = raw_selection[0].T
plt.plot(x, y)
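The shape manipulation is easy to check with a stand-in array (hypothetical sizes; no MNE objects involved):

```python
import numpy as np

data = np.zeros((3, 500))       # (n_channels, n_times), as Raw indexing returns
times = np.arange(500) / 100.0  # matching 1-D time vector
y = data.T                      # (n_times, n_channels): one column per channel
print(times.shape, y.shape)     # (500,) (500, 3)
```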
Extracting channels by name The :class:`~mne.io.Raw` object can also be indexed with the names of channels instead of their index numbers. You can pass a single string to get just one channel, or a list of strings to select multiple channels. As with integer indexing, this will return a tuple of (data_array, times_array)...
channel_names = ['MEG_0712', 'MEG_1022']
two_meg_chans = raw[channel_names, start_sample:stop_sample]
y_offset = np.array([5e-11, 0])  # just enough to separate the channel traces
x = two_meg_chans[1]
y = two_meg_chans[0].T + y_offset
lines = plt.plot(x, y)
plt.legend(lines, channel_names)
Extracting channels by type There are several ways to select all channels of a given type from a :class:`~mne.io.Raw` object. The safest method is to use :func:`mne.pick_types` to obtain the integer indices of the channels you want, then use those indices with the square-bracket indexing method shown above. The :func:~mne....
eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True)
eeg_data, times = raw[eeg_channel_indices]
print(eeg_data.shape)
Some of the parameters of :func:`mne.pick_types` accept string arguments as well as booleans. For example, the meg parameter can take the values 'mag', 'grad', 'planar1', or 'planar2' to select only magnetometers, all gradiometers, or a specific type of gradiometer. See the docstring of :func:`mne.pick_types` for full details....
data = raw.get_data()
print(data.shape)
If you want the array of times, :meth:`~mne.io.Raw.get_data` has an optional return_times parameter:
data, times = raw.get_data(return_times=True)
print(data.shape)
print(times.shape)
The :meth:`~mne.io.Raw.get_data` method can also be used to extract specific channel(s) and sample ranges, via its picks, start, and stop parameters. The picks parameter accepts integer channel indices, channel names, or channel types, and returns the channels in the order requested.
first_channel_data = raw.get_data(picks=0)
eeg_and_eog_data = raw.get_data(picks=['eeg', 'eog'])
two_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'],
                                  start=1000, stop=2000)
print(first_channel_data.shape)
print(eeg_and_eog_data.shape)
print(two_meg_chans_data.shape)
Summary of ways to extract data from Raw objects The following table summarizes the various ways of extracting data from a :class:`~mne.io.Raw` object. .. cssclass:: table-bordered .. rst-class:: midvalign

+-------------+--------+
| Python code | Result |
...
data = raw.get_data()
np.save(file='my_data.npy', arr=data)
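A quick round-trip check of the save step, using a random stand-in array and a temporary file rather than the real raw data:

```python
import os
import tempfile
import numpy as np

data = np.random.rand(4, 1000)            # stand-in for raw.get_data()
path = os.path.join(tempfile.mkdtemp(), 'my_data.npy')
np.save(file=path, arr=data)              # write the .npy file
loaded = np.load(path)                    # read it back
print(np.array_equal(data, loaded))       # True
```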
It is also possible to export the data to a :class:`Pandas DataFrame <pandas.DataFrame>` object, and use the saving methods that :mod:`Pandas <pandas>` affords. The :class:`~mne.io.Raw` object's :meth:`~mne.io.Raw.to_data_frame` method is similar to :meth:`~mne.io.Raw.get_data` in that it has a picks parameter for re...
sampling_freq = raw.info['sfreq']
start_end_secs = np.array([10, 13])
start_sample, stop_sample = (start_end_secs * sampling_freq).astype(int)
df = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample)
# then save using df.to_csv(...), df.to_hdf(...), etc.
print(df.head())
Uploading container data with python-fmrest This is a short example of how to upload container data with python-fmrest. Import the library, create the server instance, and log in:
import fmrest

fms = fmrest.Server('https://10.211.55.15',
                    user='admin',
                    password='admin',
                    database='Contacts',
                    layout='Demo',
                    verify_ssl=False)
fms.login()
Source: examples/uploading_container_data.ipynb (repo: davidhamann/python-fmrest, license: mit)
Upload a document for record 1
record_id = 1
We open a file in binary mode from the current directory and pass it to the upload_container() method.
with open('dog-meme.jpg', 'rb') as funny_picture:
    result = fms.upload_container(record_id, 'portrait', funny_picture)
result
Retrieve the uploaded document again
record = fms.get_record(1)
record.portrait
name, type_, length, response = fms.fetch_file(record.portrait)
name, type_, length

from IPython.display import Image
Image(response.content)
Create "chicago_taxi.train" and "chicago_taxi.eval" BQ tables to store results.
%%bq datasets create --name chicago_taxi

%%bq execute
query: texi_query_eval
table: chicago_taxi.eval
mode: overwrite

%%bq execute
query: texi_query_train
table: chicago_taxi.train
mode: overwrite
Source: samples/contrib/mlworkbench/structured_data_regression_taxi/Taxi Fare Model (full data).ipynb (repo: googledatalab/notebooks, license: apache-2.0)
Sanity check on the data.
%%bq query
SELECT count(*) FROM chicago_taxi.train

%%bq query
SELECT count(*) FROM chicago_taxi.eval
Explore Data See the previous notebook (Taxi Fare Model (small data)) for data exploration. Create Model with ML Workbench The ML Workbench magics are a set of Datalab commands that provide an easy, code-free experience for training, deploying, and predicting with ML models. This notebook will take the data in BigQuery tables and bu...
import google.datalab.contrib.mlworkbench.commands  # this loads the %%ml commands

%%ml dataset create
name: taxi_data_full
format: bigquery
train: chicago_taxi.train
eval: chicago_taxi.eval

!gsutil mb gs://datalab-chicago-taxi-demo  # Create a Storage Bucket to store results.
Step 1: Analyze The first step in the ML Workbench workflow is to analyze the data for the requested transformations. Analysis in this case builds a vocabulary for categorical features and computes numeric stats for numeric features.
!gsutil rm -r -f gs://datalab-chicago-taxi-demo/analysis  # Remove previous analysis results if any

%%ml analyze --cloud
output: gs://datalab-chicago-taxi-demo/analysis
data: taxi_data_full
features:
  unique_key:
    transform: key
  fare:
    transform: target
  company:
    transform: embedding
    embeddin...
Step 2: Transform The transform step performs some transformations on the input data and saves the results to a special TensorFlow file called a TFRecord file containing TF.Example protocol buffers. This allows training to start from preprocessed data. If this step is not used, training would have to perform the same p...
!gsutil -m rm -r -f gs://datalab-chicago-taxi-demo/transform # Remove previous transform results if any.
Transform takes about 6 hours in the cloud. The data is fairly big (33 GB), and processing it locally on a single VM would take much longer.
%%ml transform --cloud
output: gs://datalab-chicago-taxi-demo/transform
analysis: gs://datalab-chicago-taxi-demo/analysis
data: taxi_data_full

!gsutil list gs://datalab-chicago-taxi-demo/transform/eval-*

%%ml dataset create
name: taxi_data_transformed
format: transformed
train: gs://datalab-chicago-taxi-demo/transfor...
Step 3: Training ML Workbench helps build standard TensorFlow models without you having to write any TensorFlow code. We already know from the last notebook that the DNN regression model works better.
!gsutil -m rm -r -f gs://datalab-chicago-taxi-demo/train # Remove previous training results.
Training takes about 30 min with the "STANDARD_1" scale_tier. Note that we will perform 1M steps; this would take much longer if run locally on Datalab's VM. With Cloud ML Engine, training runs distributed across multiple VMs, so it finishes much faster.
%%ml train --cloud
output: gs://datalab-chicago-taxi-demo/train
analysis: gs://datalab-chicago-taxi-demo/analysis
data: taxi_data_transformed
model_args:
  model: dnn_regression
  hidden-layer-size1: 400
  hidden-layer-size2: 200
  train-batch-size: 1000
  max-steps: 1000000
cloud_config:
  region: us-east1...
Step 4: Evaluation using batch prediction Below, we use the evaluation model and run batch prediction in the cloud. For demo purposes, we will use the evaluation data again.
# Delete previous results
!gsutil -m rm -r gs://datalab-chicago-taxi-demo/batch_prediction
Currently, the batch prediction service does not work with BigQuery data, so we export the eval data to a CSV file.
%%bq extract
table: chicago_taxi.eval
format: csv
path: gs://datalab-chicago-taxi-demo/eval.csv
Run batch prediction. Note that we use evaluation_model because it accepts input data that includes the target (truth) column.
%%ml batch_predict --cloud
model: gs://datalab-chicago-taxi-demo/train/evaluation_model
output: gs://datalab-chicago-taxi-demo/batch_prediction
format: csv
data:
  csv: gs://datalab-chicago-taxi-demo/eval.csv
cloud_config:
  region: us-east1
Once batch prediction is done, check the result files. The batch prediction service writes its output to JSON files.
!gsutil list -l -h gs://datalab-chicago-taxi-demo/batch_prediction
We can load the results back to BigQuery.
%%bq load
format: json
mode: overwrite
table: chicago_taxi.eval_results
path: gs://datalab-chicago-taxi-demo/batch_prediction/prediction.results*
schema:
  - name: unique_key
    type: STRING
  - name: predicted
    type: FLOAT
  - name: target
    type: FLOAT
With the data in BigQuery, we can do some analysis with queries — for example, computing the RMSE.
%%ml evaluate regression bigquery: chicago_taxi.eval_results
From the above, the results are better than the local run with sampled data: RMSE is reduced by 2.5% and MAE by around 20%; average absolute error is reduced by around 30%. Select the top results sorted by error.
%%bq query
SELECT predicted, target, ABS(predicted-target) as error, s.*
FROM `chicago_taxi.eval_results` as r
JOIN `chicago_taxi.eval` as s
ON r.unique_key = s.unique_key
ORDER BY error DESC
LIMIT 10
There is also a feature slice visualization component designed for viewing evaluation results. It shows correlation between features and prediction results.
%%bq query --name error_by_hour
SELECT COUNT(*) as count, hour as feature,
       AVG(ABS(predicted - target)) as avg_error,
       STDDEV(ABS(predicted - target)) as stddev_error
FROM `chicago_taxi.eval_results` as r
JOIN `chicago_taxi.eval` as s
ON r.unique_key = s.unique_key
GROUP BY hour

# Note: the interactive output...
What we can see from the above charts is that the model performs worst at hours 5 and 6 (why?), and best on Sundays (less traffic?). Model Deployment and Online Prediction Model deployment works the same for locally trained and cloud-trained models; please see the previous notebook (Taxi Fare Model (small data)). Cleanup
!gsutil -m rm -rf gs://datalab-chicago-taxi-demo
2. Using pandas to download closing-price data Downloading the data with a function
def get_historical_closes(ticker, start_date, end_date):
    p = web.DataReader(ticker, "yahoo", start_date, end_date).sort_index('major_axis')
    d = p.to_frame()['Adj Close'].reset_index()
    d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)
    pivoted = d.pivot(index='Date', columns='Ticke...
Source: 02. Parte 2/15. Clase 15/.ipynb_checkpoints/10Class NB-checkpoint.ipynb (repo: jdsanch1/SimRC, license: mit)
Once the packages are loaded, you need to define the tickers of the stocks to be used, the download source (Yahoo in this case, though Google also works), and the dates of interest. With that, the DataReader function from the pandas_datareader package will download the requested prices. Note: usually, the distr...
assets = ['AAPL', 'MSFT', 'AA', 'AMZN', 'KO', 'QAI']
closes = get_historical_closes(assets, '2016-01-01', '2017-09-22')
closes
closes.plot(figsize=(8, 6));
Note: to download data from the Mexican stock exchange (BMV), the ticker must carry the MX suffix. For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX. 3. Formulating the risk of a portfolio
def calc_daily_returns(closes):
    return np.log(closes / closes.shift(1))[1:]

daily_returns = calc_daily_returns(closes)
daily_returns
daily_returns_b = calc_daily_returns(closes)
yb = 0.000001
daily_returns_b['BOND'] = yb * np.ones(daily_returns.index.size)
daily_returns_b
mean_daily_returns = pd.DataFrame(daily_returns.mean...
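The log-returns formula above has a property worth checking on made-up prices: log returns are additive, so their sum recovers the total log change over the period (a sketch, not the notebook's real data):

```python
import numpy as np
import pandas as pd

closes = pd.Series([100.0, 101.0, 99.0, 102.0])  # made-up closing prices
returns = np.log(closes / closes.shift(1))[1:]   # same formula as calc_daily_returns
total = np.log(closes.iloc[-1] / closes.iloc[0])
print(np.isclose(returns.sum(), total))          # True
```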
4. Portfolio optimization
num_portfolios = 200000
num_assets = len(assets)
r = 0.0001
weights = np.array(np.random.random(num_assets * num_portfolios)).reshape(num_portfolios, num_assets)
weights = weights * np.matlib.repmat(1 / weights.sum(axis=1), num_assets, 1).T
rend = 252 * weights.dot(mean_daily_returns.values[:, 0]).T
sd = np.zeros(num_portfolios)
for i...
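The core of the Monte Carlo sweep is drawing random long-only weight vectors and rescaling each row so its weights sum to 1; a small self-contained sketch (sizes reduced for clarity, broadcasting instead of np.matlib.repmat):

```python
import numpy as np

rng = np.random.default_rng(0)
num_portfolios, num_assets = 5, 4
weights = rng.random((num_portfolios, num_assets))
weights /= weights.sum(axis=1, keepdims=True)  # each portfolio sums to 1
print(np.allclose(weights.sum(axis=1), 1.0))  # True
```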
2. Extracting features from text (preprocessing) Note that X_train_tfidf == X_train_tfidf_vect: building counts with CountVectorizer and weighting them with TfidfTransformer gives the same matrix as using TfidfVectorizer in a single step.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer

count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)

# or
tfidf_vect = TfidfVectorizer()
X_...
Source: findmyjob_multiNB_model.ipynb (repo: JKeun/project-01-findmyjob, license: mit)
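The claimed equality can be verified on a toy corpus: with default settings, CountVectorizer followed by TfidfTransformer yields the same matrix as TfidfVectorizer in one step (the corpus below is made up for the check):

```python
import numpy as np
from sklearn.feature_extraction.text import (
    CountVectorizer, TfidfTransformer, TfidfVectorizer)

corpus = ['data science is fun', 'science of data', 'fun with text data']
# two-step route: counts, then tf-idf weighting
two_step = TfidfTransformer().fit_transform(
    CountVectorizer().fit_transform(corpus))
# one-step route
one_step = TfidfVectorizer().fit_transform(corpus)
print(np.allclose(two_step.toarray(), one_step.toarray()))  # True
```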
3. Model selection & parameter search: MultinomialNB, SGD
from sklearn.cross_validation import StratifiedKFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model.stochastic_gradient import SGDClassifier

cv = StratifiedKFold(y_train, n_folds=5, random_state=0)
i_range = []
score_range = []
sigma = []
for a in np.arange(-5e-05, 5e-05, 1e-05):
    mnb = M...
4. Tuning & Improvement
from sklearn.pipeline import Pipeline

text_clf = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])
# text_clf = text_clf.fit(X_train, y_train)

from sklearn.grid_search import GridSearchCV

parameters = {
    'vect__ngram_range': [(1, 1), (1, 2...
5. Final test
test_df = pd.read_excel('./resource/test.xlsx')
X_test = test_df['X'].values
y_test = test_df['Y'].values
test_df.head()

final_clf = Pipeline([
    ('vect', CountVectorizer(ngram_range=(1, 2), stop_words='english')),
    ('clf', MultinomialNB(alpha=1e-05))
])
final_clf = final_clf.fit(X_train, y_train)
pr...
6. Prediction
docs = [raw_input()]
predicted = final_clf.predict(docs)[0]
prob = final_clf.predict_proba(list(docs))[0]
prob_gap = np.max(prob) - np.median(prob)
if prob_gap > 0.4:
    print("\n==== Your job ====")
    print(categories[predicted])
    print("\n=== Probability ===")
    print(prob[predicted])
else:
    print("...
from wordcloud import WordCloud

ds_text = " ".join(df[df.Y == 0].X)
ds_text_adjusted = ds_text.lower().replace("skill", "").replace("experience", "")
dm_text = " ".join(df[df.Y == 1].X)
dm_text_adjusted = dm_text.lower().replace("skill", "").replace("experience", "")
ux_text = " ".join(df[df.Y == 2].X)
ux_text_adjusted...
Digital Marketing
plt.figure(figsize=(18, 10))
plt.imshow(wordcloud_dm.recolor(random_state=31))
plt.xticks([])
plt.yticks([])
plt.grid(False)
UX/UI Design
plt.figure(figsize=(18, 10))
plt.imshow(wordcloud_ux.recolor(random_state=33))
plt.xticks([])
plt.yticks([])
plt.grid(False)
Feature creation - math expressions
df_t = df.copy()  # Since we are going to edit the data we should always make a copy
df_t.head()
df_t["sepallength_sqr"] = df_t["sepallength"]**2  # ** in Python is the exponent operator
df_t.head()
df_t["sepallength_log"] = np.log10(df_t["sepallength"])
df_t.head()
Source: Lecture Notebooks/Redoing Weka stuff.ipynb (repo: napsternxg/DataMiningPython, license: gpl-3.0)
Creating many features at once using patsy
df_t = df_t.rename(columns={"class": "label"})
df_t.head()
y, X = patsy.dmatrices("label ~ petalwidth + petallength:petalwidth + I(sepallength**2)-1",
                       data=df_t, return_type="dataframe")
print y.shape, X.shape
y.head()
X.head()
model = sm.MNLogit(y, X)
res = model.fit()
res.summary()
model_sk = linear_model.Logist...
Plot decision regions We can only do this if our data has 2 features
y, X = patsy.dmatrices("label ~ petalwidth + petallength - 1", data=df_t,
                       return_type="dataframe")  # -1 forces the data to not generate an intercept
X.columns
y = df_t["label"]
clf = tree.DecisionTreeClassifier()
clf.fit(X, y)
_ = get_confusion_matrix(clf, X, y)
clf.feature_importances_
show_decision_tree(clf, X,...
Naive Bayes classifier
clf = naive_bayes.GaussianNB()
clf.fit(X, y)
_ = get_confusion_matrix(clf, X, y)
The decision surface of the Naive Bayes classifier will not show overlapping colors, because of the basic code I am using to draw the decision boundaries; better plotting code could show the mixing of colors properly.
plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength")
Logistic regression
clf = linear_model.LogisticRegression(multi_class="multinomial", solver="lbfgs")
clf.fit(X, y)
_ = get_confusion_matrix(clf, X, y)
plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength")
IBk (Weka's name for the k-nearest neighbors classifier)
clf = neighbors.KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
_ = get_confusion_matrix(clf, X, y)
plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength")
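One caveat worth remembering with n_neighbors=1: every training point is its own nearest neighbor, so accuracy measured on the training set is trivially perfect and says nothing about generalization. A quick sketch on scikit-learn's bundled iris data:

```python
from sklearn import datasets, neighbors

# 1-NN memorizes the training set: training accuracy is perfect.
X, y = datasets.load_iris(return_X_y=True)
clf = neighbors.KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.score(X, y))  # 1.0 -- evaluate on held-out data instead
```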
Image cleaning If we use the denoising + thresholding approach, the result of the thresholding is not completely what we want: small objects are detected, and small holes exist in the objects. Such defects of the segmentation can be amended, using the knowledge that no small holes should exist, and that blobs have a mi...
from skimage import morphology

only_large_blobs = morphology.remove_small_objects(binary_image, min_size=300)
plt.imshow(only_large_blobs, cmap='gray')
only_large = np.logical_not(morphology.remove_small_objects(
    np.logical_not(on...
Source: scikit_image/lectures/adv5_blob_segmentation.ipynb (repo: M-R-Houghton/euroscipy_2015, license: mit)
It would be possible to filter the image to smooth the boundary, and then threshold again. However, it is more satisfying to first filter the image so that it is as binary as possible (which corresponds better to our prior information on the materials), and then to threshold the image. Going further: want to know mor...
NXP IMU Our inertial measurement unit (IMU) contains 2 main chips. FXOS8700 3-Axis Accelerometer/Magnetometer:
- ±2 g/±4 g/±8 g adjustable acceleration range
- ±1200 µT magnetic sensor range
- Output data rates (ODR) from 1.563 Hz to 800 Hz
- 14-bit ADC resolution for acceleration measurements
- 16-bit ADC resolution for magnet...
from math import sqrt  # needed by normalize()

def normalize(x, y, z):
    """Return a unit vector"""
    norm = sqrt(x * x + y * y + z * z)
    if norm > 0.0:
        inorm = 1 / norm
        x *= inorm
        y *= inorm
        z *= inorm
    else:
        raise Exception('division by zero: {} {} {}'.format(x, y, z))
    return (x, y, z)

def plotArray(g, dt=None, ...
Source: website/block_4_mobile_robotics/misc/lsn28.ipynb (repo: MarsUniversity/ece387, license: mit)
Run Raw Compass Performance First, let's tumble the IMU around and grab lots of data in all orientations.
bag = BagReader()
bag.use_compression = True
cal = bag.load('imu-1-2.json')

def split(data):
    ret = []
    rdt = []
    start = data[0][1]
    for d, ts in data:
        ret.append(d)
        rdt.append(ts - start)
    return ret, rdt

accel, adt = split(cal['accel'])
mag, mdt = split(cal['mag'])
gyro, gdt = split(...
Now using this bias, we should get better performance. Check Calibration
# apply the correction from the previous step
cm = apply_calibration(mag, bias)
plotMagnetometer(cm)

# Now let's run through the data and correct it
roll = []
pitch = []
heading = []
for accel, mag in zip(a, cm):
    r, p, h = getOrientation(accel, mag)
    roll.append(r)
    pitch.append(p)
    heading.append(h)
x_scale = ...
Creating a random data frame
N = 10
df = pd.DataFrame({k: np.random.randint(low=1, high=7, size=N) for k in "ABCD"})
df
Source: stacked_bar_graph.ipynb (repo: gVallverdu/cookbook, license: gpl-2.0)
Turning the score into a category The idea is to turn the questionnaire score into a category: "defavorise" (unfavorable), "neutre" (neutral), "favorise" (favorable). The hardest part is writing the function. With df.apply() we apply the function to the whole table. This can be done in two different ways. Vers...
def cat_column(column):
    values = list()
    for val in column:
        if val <= 2:
            cat = "defavorise"
        elif val >= 5:
            cat = "favorise"
        else:
            cat = "neutre"
        values.append(cat)
    return values
In principle, here is how it works with a plain list:
liste = [1, 2, 3, 4, 5, 6]
values = cat_column(liste)
print(values)
Now we apply it to our table:
df_cat = df.apply(cat_column, axis=0)
df_cat
Version 2: over the whole table The operations depend only on a single cell of the table, so we can use a function that only sees the content of one cell and returns the right category. The function is simpler (no loop); it takes the content of a cell as its argument and returns the category:
def cat_cell(val):
    if val <= 2:
        cat = "defavorise"
    elif val >= 5:
        cat = "favorise"
    else:
        cat = "neutre"
    return cat
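An equivalent vectorized route (not in the original notebook) uses pd.cut with bin edges at 2 and 4, which reproduces cat_cell for the integer scores 1-6:

```python
import pandas as pd

# Bins (0, 2], (2, 4], (4, 6] match cat_cell's thresholds for scores 1-6.
s = pd.Series([1, 2, 3, 4, 5, 6])
cats = pd.cut(s, bins=[0, 2, 4, 6],
              labels=["defavorise", "neutre", "favorise"])
print(list(cats))
# ['defavorise', 'defavorise', 'neutre', 'neutre', 'favorise', 'favorise']
```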
For example:
print(cat_cell(2), cat_cell(3))
We apply it to the table. This time we must use the df.applymap() method instead of apply().
df_cat = df.applymap(cat_cell)
df_cat
Percentage of each category per column Now we count how many times each category appears in each column, using pd.value_counts():
df_cat.apply(pd.value_counts)
If we want the percentage, we need to know how many rows there are. In this example it is N, defined at the very beginning. Otherwise, we can get the number of rows of the DataFrame; this information is contained in df.shape:
nrows, ncols = df_cat.shape
print(nrows, ncols)

df_percent = df_cat.apply(pd.value_counts) / nrows * 100
df_percent
stacked_bar_graph.ipynb
gVallverdu/cookbook
gpl-2.0
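As an aside, the same percentages can be obtained without knowing the number of rows, since value_counts accepts a normalize flag. A sketch of this variant (not what the notebook uses; df_cat here is a small made-up stand-in):

```python
import pandas as pd

df_cat = pd.DataFrame({
    "A": ["neutre", "favorise", "neutre", "defavorise"],
    "B": ["favorise", "favorise", "neutre", "neutre"],
})

# normalize=True returns fractions per column; multiply by 100 for percentages
df_percent = df_cat.apply(lambda col: col.value_counts(normalize=True)) * 100
print(df_percent)
```

Categories absent from a column show up as NaN, exactly as with the explicit division by nrows.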
Stacked histogram Before plotting with pandas, we transpose the table so that the plot is drawn as a function of A, B, C and D.
df_percent_t = df_percent.transpose()
df_percent_t
stacked_bar_graph.ipynb
gVallverdu/cookbook
gpl-2.0
To make the plot more coherent, we reorder the columns so that "neutre" sits in the middle:
df_percent_t = df_percent_t[["defavorise", "neutre", "favorise"]]
df_percent_t
stacked_bar_graph.ipynb
gVallverdu/cookbook
gpl-2.0
Here is a first version. The idea is to choose a barh plot with stacked=True, then pick a diverging colormap to give meaning to the colors.
fig = plt.figure()
ax = fig.add_subplot(111)
df_percent_t.plot(kind="barh", stacked=True, ax=ax,
                  colormap="RdYlGn", alpha=.8, xlim=(0, 101))
ax.legend(ncol=3, loc='upper center', bbox_to_anchor=(0.5, 1.15), frameon=False)
stacked_bar_graph.ipynb
gVallverdu/cookbook
gpl-2.0
Now with a bit more polish; the tricky part is adding the annotations.
fig = plt.figure()
ax = fig.add_subplot(111)
df_percent_t.plot(kind="barh", stacked=True, ax=ax, colormap="RdYlGn", alpha=.8)
ax.legend(ncol=3, loc='upper center', bbox_to_anchor=(0.5, 1.15), frameon=False)
ax.set_frame_on(False)
ax.set_xticks([])

# add texts
y = 0
for index, row in df_percent_t.iterrows():  # boucle s...
stacked_bar_graph.ipynb
gVallverdu/cookbook
gpl-2.0
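Since the annotation cell above is cut off, here is a self-contained sketch of the idea: for each bar, walk through the segments, keep a running total of the widths, and place each label at the centre of its segment (the DataFrame values here are made up for the example):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

df_demo = pd.DataFrame(
    {"defavorise": [40, 20], "neutre": [30, 50], "favorise": [30, 30]},
    index=["A", "B"],
)

fig, ax = plt.subplots()
df_demo.plot(kind="barh", stacked=True, ax=ax, colormap="RdYlGn", alpha=.8)

# For each bar (row), accumulate segment widths to find each label position
for y, (index, row) in enumerate(df_demo.iterrows()):
    left = 0
    for value in row:
        ax.text(left + value / 2, y, "%d %%" % value,
                ha="center", va="center")
        left += value
```

The running total `left` marks where each segment starts, so `left + value / 2` is its horizontal centre.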
To see more clearly how the intervals are computed, here is an example:
l = [40, 50, 10]
x = [0]
for i in range(3):
    x.append(x[i] + l[i])
print(x)
stacked_bar_graph.ipynb
gVallverdu/cookbook
gpl-2.0
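The same running totals can be computed without an explicit loop using numpy's cumulative sum; a sketch equivalent to the loop above:

```python
import numpy as np

l = [40, 50, 10]

# boundaries: 0, then the cumulative sums of the segment widths
x = np.concatenate(([0], np.cumsum(l)))
print(x.tolist())  # [0, 40, 90, 100]
```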
Clustering Clustering is the task of gathering samples into groups of similar samples according to some predefined similarity or dissimilarity measure (such as the Euclidean distance). In this section we will explore a basic clustering task on some synthetic and real datasets. Here are some common applications of clust...
import matplotlib.pyplot as plt

from sklearn.datasets import make_blobs

X, y = make_blobs(random_state=42)
print(X.shape)
plt.scatter(X[:, 0], X[:, 1])
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
There are clearly three separate groups of points in the data, and we would like to recover them using clustering. Even if the groups are obvious in the data, it is hard to find them when the data lives in a high-dimensional space. Now we will use one of the simplest clustering algorithms, K-means. This is an iterative...
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=3, random_state=42)
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
We can get the cluster labels either by calling fit and then accessing the labels_ attribute of the K-means estimator, or by calling fit_predict. Either way, the result contains the ID of the cluster each point is assigned to.
labels = kmeans.fit_predict(X)
all(labels == kmeans.labels_)
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Let's visualize the assignments that have been found:
plt.scatter(X[:, 0], X[:, 1], c=labels)
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Here, we are probably satisfied with the clustering. But in general we might want to have a more quantitative evaluation. How about we compare our cluster labels with the ground truth we got when generating the blobs?
from sklearn.metrics import confusion_matrix, accuracy_score

print(accuracy_score(y, labels))
print(confusion_matrix(y, labels))
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Even though we recovered the partitioning of the data into clusters perfectly, the cluster IDs we assigned were arbitrary, and we cannot hope to recover them. Therefore, we must use a different scoring metric, such as adjusted_rand_score, which is invariant to permutations of the labels:
from sklearn.metrics import adjusted_rand_score

adjusted_rand_score(y, labels)
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
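To make that invariance concrete, here is a toy check: relabelling the clusters wrecks the accuracy but leaves the adjusted Rand index untouched (the label lists are made up for the example):

```python
from sklearn.metrics import accuracy_score, adjusted_rand_score

true_labels = [0, 0, 1, 1, 2, 2]
# the same partition, but with cluster IDs permuted (0->1, 1->2, 2->0)
permuted = [1, 1, 2, 2, 0, 0]

print(accuracy_score(true_labels, permuted))       # 0.0
print(adjusted_rand_score(true_labels, permuted))  # 1.0
```

Accuracy compares labels element-wise, so every renamed label counts as an error; ARI only compares which pairs of points land together.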
Clustering comes with assumptions: A clustering algorithm finds clusters by making assumptions about which samples should be grouped together. Each algorithm makes different assumptions, and the quality and interpretability of your results will depend on whether those assumptions are satisfied for your goal. For K-means clusteri...
import numpy as np

from sklearn.datasets import make_blobs

X, y = make_blobs(random_state=170, n_samples=600)

# stretch the blobs with a random linear transformation
rng = np.random.RandomState(74)
transformation = rng.normal(size=(2, 2))
X = np.dot(X, transformation)

y_pred = KMeans(n_clusters=3).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
Some Notable Clustering Routines The following are two well-known clustering algorithms. sklearn.cluster.KMeans: <br/> The simplest, yet effective clustering algorithm. Needs to be provided with the number of clusters in advance, and assumes that the data is normalized as input (but use a PCA model as pre...
from sklearn.datasets import load_digits

digits = load_digits()
# ...
# %load solutions/08B_digits_clustering.py
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
samstav/scipy_2015_sklearn_tutorial
cc0-1.0
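One possible starting point for the exercise (a sketch under my own choices of n_components and random_state, not the canonical solution loaded above): reduce the digits with PCA first, as suggested for K-means, then cluster with one cluster per digit class:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

digits = load_digits()

# PCA as preprocessing, then K-means with 10 clusters (one per digit class)
pipeline = make_pipeline(PCA(n_components=20),
                         KMeans(n_clusters=10, random_state=0))
clusters = pipeline.fit_predict(digits.data)
print(clusters.shape)
```

Because cluster IDs are arbitrary, evaluating against digits.target again calls for adjusted_rand_score rather than accuracy.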
Pandas DataFrame UltraQuick Tutorial This Colab introduces DataFrames, which are the central data structure in the pandas API. This Colab is not a comprehensive DataFrames tutorial. Rather, this Colab provides a very quick introduction to the parts of DataFrames required to do the other Colab exercises in Machine Lear...
import numpy as np
import pandas as pd
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
google/eng-edu
apache-2.0
Creating a DataFrame The following code cell creates a simple DataFrame containing 10 cells organized as follows: 5 rows and 2 columns, one named temperature and the other named activity. The following code cell instantiates the pd.DataFrame class to generate a DataFrame. The class takes two arguments: The first argument p...
# Create and populate a 5x2 NumPy array.
my_data = np.array([[0, 3], [10, 7], [20, 9], [30, 14], [40, 15]])

# Create a Python list that holds the names of the two columns.
my_column_names = ['temperature', 'activity']

# Create a DataFrame.
my_dataframe = pd.DataFrame(data=my_data, columns=my_column_names)

# Print th...
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
google/eng-edu
apache-2.0
Adding a new column to a DataFrame You may add a new column to an existing pandas DataFrame just by assigning values to a new column name. For example, the following code creates a third column named adjusted in my_dataframe:
# Create a new column named adjusted.
my_dataframe["adjusted"] = my_dataframe["activity"] + 2

# Print the entire DataFrame
print(my_dataframe)
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
google/eng-edu
apache-2.0
Specifying a subset of a DataFrame Pandas provides multiple ways to isolate specific rows, columns, slices, or cells in a DataFrame.
print("Rows #0, #1, and #2:")
print(my_dataframe.head(3), '\n')

print("Row #2:")
print(my_dataframe.iloc[[2]], '\n')

print("Rows #1, #2, and #3:")
print(my_dataframe[1:4], '\n')

print("Column 'temperature':")
print(my_dataframe['temperature'])
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
google/eng-edu
apache-2.0
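Beyond head and bracket slicing, pandas also distinguishes label-based from position-based indexing. A small sketch on a throwaway DataFrame (not the tutorial's my_dataframe):

```python
import pandas as pd

df = pd.DataFrame({"temperature": [0, 10, 20], "activity": [3, 7, 9]})

# .loc selects by label, .iloc by integer position
print(df.loc[1, "activity"])   # 7
print(df.iloc[0, 0])           # 0

# note: .loc slices are inclusive of both endpoints
print(df.loc[0:1, "temperature"].tolist())  # [0, 10]
```

With the default integer index the two look similar; the difference matters once the index is non-trivial.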
Task 1: Create a DataFrame Do the following: Create a 3x4 (3 rows x 4 columns) pandas DataFrame in which the columns are named Eleanor, Chidi, Tahani, and Jason. Populate each of the 12 cells in the DataFrame with a random integer between 0 and 100, inclusive. Output the following: the entire DataFrame the valu...
# Write your code here.

#@title Double-click for a solution to Task 1.

# Create a Python list that holds the names of the four columns.
my_column_names = ['Eleanor', 'Chidi', 'Tahani', 'Jason']

# Create a 3x4 numpy array, each cell populated with a random integer.
my_data = np.random.randint(low=0, high=101, size=(3...
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
google/eng-edu
apache-2.0
Copying a DataFrame (optional) Pandas provides two different ways to duplicate a DataFrame: Referencing. If you assign a DataFrame to a new variable, any change to the DataFrame or to the new variable will be reflected in the other. Copying. If you call the pd.DataFrame.copy method, you create a true independent copy...
# Create a reference by assigning my_dataframe to a new variable.
print("Experiment with a reference:")
reference_to_df = df

# Print the starting value of a particular cell.
print(" Starting value of df: %d" % df['Jason'][1])
print(" Starting value of reference_to_df: %d\n" % reference_to_df['Jason'][1])

# Modify a...
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
google/eng-edu
apache-2.0
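Since the cell above is truncated, the contrast can be sketched end-to-end on a tiny DataFrame (made up for the example): mutating through a reference is visible in the original, mutating via a copy is not:

```python
import pandas as pd

df = pd.DataFrame({"Jason": [10, 20]})

reference_to_df = df       # reference: same underlying object
copy_of_df = df.copy()     # true, independent copy

# Modify a cell through the original DataFrame.
df.loc[1, "Jason"] = 99

print(reference_to_df.loc[1, "Jason"])  # 99: the reference sees the change
print(copy_of_df.loc[1, "Jason"])       # 20: the copy does not
```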
1. Load blast hits
import pandas as pd

# Load blast hits
blastp_hits = pd.read_csv("2_blastp_hits.csv")
blastp_hits.head()

# Filter out Metahit 2010 hits, keep only Metahit 2014
blastp_hits = blastp_hits[blastp_hits.db != "metahit_pep"]
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
2. Process blastp results 2.1 Extract ORF stats from fasta file
import re

# Assumes the Fasta file comes with the header format of EMBOSS getorf
fh = open("1_orf/d9539_asm_v1.2_orf.fa")
header_regex = re.compile(r">([^ ]+?) \[([0-9]+) - ([0-9]+)\]")

orf_stats = []
for line in fh:
    header_match = header_regex.match(line)
    if header_match:
        is_reverse = line.rstrip(" \n").endswith(...
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
2.2 Annotate blast hits with orf stats
blastp_hits_annot = blastp_hits.merge(orf_stats_df, left_on="query_id", right_on="q_id")

# Add query coverage calculation
blastp_hits_annot["q_cov_calc"] = (blastp_hits_annot["q_end"] - blastp_hits_annot["q_start"] + 1) * 100 / blastp_hits_annot["q_len"]

blastp_hits_annot.sort_values(by="bitscore", ascending=False).head()...
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
2.3 Extract best hit for each ORF (q_cov > 80% and pct_id > 40% and e-value < 1) Define these resulting 7 ORFs as the core ORFs for the d9539 assembly. The homology to the Metahit gene catalogue is very good, and considering the catalogue was curated on a big set of gut metagenomes, it is reasonable to assume t...
! mkdir -p 4_msa_prots

# Get best hit (highest bitscore) for each ORF
gb = blastp_hits_annot[
    (blastp_hits_annot.q_cov > 80) &
    (blastp_hits_annot.pct_id > 40) &
    (blastp_hits_annot.e_value < 1)
].groupby("query_id")

reliable_orfs = pd.DataFrame(
    hits.loc[hits.bitscore.idxmax()] for q_id, hits in gb
)[["query_id", "db", "sub...
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
2.4 Extract selected orfs for further analysis
reliable_orfs["orf_id"] = ["orf{}".format(x) for x in range(1, reliable_orfs.shape[0] + 1)]
reliable_orfs["cds_len"] = reliable_orfs["q_cds_end"] - reliable_orfs["q_cds_start"] + 1

reliable_orfs.sort_values(by="q_cds_start", ascending=True).to_csv(
    "3_filtered_orfs/filt_orf_stats.csv", index=False, header=True)
reliable_orfs....
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
2.4.2 Extract fasta
! ~/utils/bin/seqtk subseq 1_orf/d9539_asm_v1.2_orf.fa 3_filtered_orfs/filt_orf_list.txt > 3_filtered_orfs/d9539_asm_v1.2_orf_filt.fa
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
2.4.3 Write out filtered blast hits
# Keep only hits whose query is one of the reliable ORFs
filt_blastp_hits = blastp_hits_annot[blastp_hits_annot.query_id.isin(reliable_orfs.query_id)]
filt_blastp_hits.to_csv("3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.csv")
filt_blastp_hits.head()
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
maubarsom/ORFan-proteins
mit
Compute power and phase lock in label of the source space Compute time-frequency maps of power and phase lock in the source space. The inverse method is linear, based on the dSPM inverse operator. The example also shows the difference in the time-frequency maps when they are computed with and without subtracting the evoked ...
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_induced_power

print(__doc__)
dev/_downloads/f31e73ee907864d95a2b617fdc76b71e/source_label_time_frequency.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
label_name = 'Aud-rh'
fname_label = meg_path / 'labels' / f'{label_name}.label'

tmin, tmax, event_id = -0.2, 0.5, 2

# Setup for reading the...
dev/_downloads/f31e73ee907864d95a2b617fdc76b71e/source_label_time_frequency.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause