Dataset columns: `markdown` (string, 0–1.02M chars), `code` (string, 0–832k), `output` (string, 0–1.02M), `license` (string, 3–36), `path` (string, 6–265), `repo_name` (string, 6–127).
Let's take a look at the content of the config file which is located at `./demo_configs.yml`:
with open(path_to_config, 'r') as f: print(f.read())
PATH_TO_MVTS: './temp/petdataset_01/' PATH_TO_EXTRACTED_FEATURES: './temp/extracted_features/' META_DATA_TAGS: ['id', 'lab', 'st', 'et'] MVTS_PARAMETERS: - 'TOTUSJH' - 'TOTBSQ' - 'TOTPOT' - 'TOTUSJZ' - 'ABSNJZH' - 'SAVNCPP' - 'USFLUX' - 'TOTFZ' - 'MEANPOT' - 'EPSZ' - 'MEANSHR' - 'SHRGT45' - 'M...
MIT
demo.ipynb
skad00sh/gsudmlab-mvtsdata_toolkit
Here is the breakdown of the pieces: - `PATH_TO_MVTS`: A relative or absolute path to where the multivariate time series dataset is stored. - `PATH_TO_EXTRACTED_FEATURES`: A relative or absolute path to where the extracted features should be stored, using the Feature Extraction component of the package. - `META_DATA_TAGS...
from mvtsdatatoolkit.data_analysis.mvts_data_analysis import MVTSDataAnalysis mvda = MVTSDataAnalysis(path_to_config) mvda.print_stat_of_directory()
---------------------------------------- Directory: ./temp/petdataset_01/ Total no. of files: 2000 Total size: 76M Total average: 38K ----------------------------------------
- Get summary statistics of the data. Let's now get some statistics from the content of the files. To speed up the demo, I analyze only 3 time series parameters (namely `TOTUSJH`, `TOTBSQ`, and `TOTPOT`), and only the first 50 mvts files.
params = ['TOTUSJH', 'TOTBSQ', 'TOTPOT'] n = 50 mvda.compute_summary(params_name=params, first_k=n) mvda.summary
... which says that the length of the time series, across the 50 mvts files, is 3000, including 0 `NA/NAN` values. In addition, `mean`, `min`, `max`, and three quantiles are calculated for each time series. - You have a LARGE dataset? A parallel version of this function is also provided to help process much larger datasets...
mvda.compute_summary_in_parallel(n_jobs=4, first_k=50, verbose=False, params_name=['TOTUSJH', 'TOTBSQ', 'TOTPOT']) mvda.summary
**Note**: The results of the parallel and sequential versions of `mvts_data_analysis` are not exactly identical. This discrepancy is due to the fact that in the parallel version, the program has to approximate the percentiles. More specifically, it is designed to avoid loading the entire dataset into memory so that it ...
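To illustrate why chunked processing changes percentile estimates, here is a minimal sketch (an illustration only, NOT the toolkit's actual algorithm) that approximates a percentile by combining per-chunk percentiles without ever holding the full dataset in memory:

```python
import numpy as np

def approx_percentile(chunks, q):
    """Approximate the q-th percentile of chunked data by averaging
    per-chunk percentiles weighted by chunk size. This never holds the
    full dataset in memory, at the cost of accuracy on skewed data."""
    weighted = sum(np.percentile(c, q) * len(c) for c in chunks)
    total = sum(len(c) for c in chunks)
    return weighted / total

chunks = [np.array([1.0, 2.0, 3.0]), np.array([100.0])]
exact = np.percentile(np.concatenate(chunks), 50)  # 2.5
approx = approx_percentile(chunks, 50)             # 26.5 -- clearly off
```

On skewed data like this, the chunk-wise estimate can differ substantially from the exact value, which is the kind of discrepancy the note above describes.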
import mvtsdatatoolkit.features.feature_collection as fc help(fc)
Help on module mvtsdatatoolkit.features.feature_collection in mvtsdatatoolkit.features: NAME mvtsdatatoolkit.features.feature_collection FUNCTIONS get_average_absolute_change(uni_ts:Union[pandas.core.series.Series, numpy.ndarray]) -> numpy.float64 :return: The average absolute first difference of a un...
- How to extract these features from the data? Let's extract 3 simple statistical features, namely `min`, `max`, and `median`, from 3 parameters, namely `TOTUSJH`, `TOTBSQ`, and `TOTPOT`. Again, to speed up the process in this demo, we only process the first 50 mvts files.
from mvtsdatatoolkit.features.feature_extractor import FeatureExtractor fe = FeatureExtractor(path_to_config) fe.do_extraction(features_name=['get_min', 'get_max', 'get_median'], params_name=['TOTUSJH', 'TOTBSQ', 'TOTPOT'], first_k=50) fe.df_all_features
... where each row corresponds to one mvts file, and the first 4 columns represent the information extracted from the file names using the tags specified in the configuration file (i.e., `id`, `lab`, `st`, and `et`). The remaining columns contain the extracted features. They are named by appending each statistical-fea...
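Based on the description above, the feature-column names can be reconstructed as parameter name plus feature name (a sketch of the assumed convention, not the toolkit's code):

```python
# Assumed naming convention: '<PARAM>_<feature>' for every
# (parameter, statistical feature) pair.
params = ['TOTUSJH', 'TOTBSQ', 'TOTPOT']
features = ['min', 'max', 'median']
colnames = [f'{p}_{f}' for p in params for f in features]
# e.g. colnames[0] is 'TOTUSJH_min'
```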
fe.plot_boxplot(feature_names=['TOTUSJH_median', 'TOTBSQ_median', 'TOTPOT_median']) fe.plot_violinplot(feature_names=['TOTUSJH_median', 'TOTBSQ_median', 'TOTPOT_median']) fe.plot_splom(feature_names=['TOTUSJH_median', 'TOTBSQ_median', 'TOTPOT_median']) fe.plot_correlation_heatmap(feature_names=['TOTUSJH_median', 'TOTBS...
For all of these plots, it is common practice to normalize the data before generating them. This is done automatically in the steps above. In the rare cases where normalization should not take place, it can be avoided using the `StatVisualizer` class in the `stat_visualizer` module. Simply set the argument `no...
fe.do_extraction_in_parallel(n_jobs=4, features_name=['get_min', 'get_max', 'get_median'], params_name=['TOTUSJH', 'TOTBSQ', 'TOTPOT']) fe.df_all_features[:5]
**Note**: Here I am showing only the first 5 rows of the extracted features. Also, keep in mind that the column `id` should be used if we want to compare the results of this table with the one that was sequentially calculated. 5. Extracted Features Analysis - A quick look over the results? The extracted features can be...
from mvtsdatatoolkit.data_analysis.extracted_features_analysis import ExtractedFeaturesAnalysis efa = ExtractedFeaturesAnalysis(fe.df_all_features, exclude=['id']) efa.compute_summary() efa.summary
... which gives summary statistics over every extracted feature. For instance, row `0`, which corresponds to the extracted feature `TOTUSJH_min`, describes the changes of the minimum values of the parameter `TOTUSJH`, across 2000 mvts files, in terms of `mean`, `std`, `min`, `max`, and percentiles `25th`, `50th...
from mvtsdatatoolkit.normalizing import normalizer df_norm = normalizer.zero_one_normalize(df=fe.df_all_features, excluded_colnames=['id']) df_norm
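A minimal sketch of what zero-one normalization with `excluded_colnames` does (an assumed reimplementation for illustration, not the library's code): numeric columns are min-max scaled to [0, 1], while excluded and non-numeric columns pass through untouched.

```python
import pandas as pd

def zero_one_normalize(df, excluded_colnames=()):
    """Min-max scale numeric columns to [0, 1]; leave excluded and
    non-numeric columns untouched (sketch of the behaviour described here)."""
    out = df.copy()
    for col in out.columns:
        if col in excluded_colnames or not pd.api.types.is_numeric_dtype(out[col]):
            continue
        lo, hi = out[col].min(), out[col].max()
        out[col] = (out[col] - lo) / (hi - lo) if hi > lo else 0.0
    return out

df = pd.DataFrame({'id': [1, 2, 3], 'x': [10.0, 20.0, 30.0], 'lab': ['A', 'B', 'C']})
norm = zero_one_normalize(df, excluded_colnames=['id'])
```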
**Note**: The argument `excluded_colnames` is used to keep the column `id` unchanged in the normalization process. Although this column is numeric, normalizing its values would be meaningless. Moreover, any other columns with non-numeric values are automatically preserved in the output. 7. Data Sampling Very often...
from mvtsdatatoolkit.sampling.sampler import Sampler sampler = Sampler(extracted_features_df=fe.df_all_features, label_col_name='lab') sampler.original_class_populations
- Sampling by size? Suppose I want only 100 instances of the `NF` class, nothing from `M`, all of the `X` and `C` instances, and 20 of the `B` class.
desired_populations = {'X': -1, 'M': 0, 'C': -1, 'B': 20, 'NF': 100} sampler.sample(desired_populations=desired_populations) sampler.sampled_class_populations
This gives me exactly what I asked for. Note that I used *-1* to indicate that I want *all* instances of the `X` and `C` classes. Also, see how I received 20 instances of the `B` class while there were originally only 10 instances of that class in the dataset. This allows seamless *undersampling* and *oversampling*. Let's ...
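The per-class sampling semantics described above (-1 keeps everything; a count larger than the class population triggers sampling with replacement, i.e. oversampling) can be sketched as follows. This is an illustrative reimplementation, not the `Sampler` class's actual code:

```python
import pandas as pd

def sample_by_size(df, label_col, desired):
    """Sketch of per-class sampling: -1 keeps every instance of a class,
    a positive n draws n instances, sampling with replacement when n
    exceeds the class population (oversampling)."""
    parts = []
    for cls, n in desired.items():
        group = df[df[label_col] == cls]
        if n == -1:
            parts.append(group)
        elif n > 0:
            parts.append(group.sample(n=n, replace=n > len(group), random_state=0))
    return pd.concat(parts, ignore_index=True)

df = pd.DataFrame({'lab': ['X'] * 3 + ['B'] * 2})
out = sample_by_size(df, 'lab', {'X': -1, 'B': 4})  # keeps all 3 X, oversamples B to 4
```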
print('Original shape: {}'.format(sampler.original_mvts.shape)) print('Sampled shape: {}'.format(sampler.sampled_mvts.shape))
Original shape: (2000, 13) Sampled shape: (471, 13)
471 rows (= 100 + 335 + 0 + 20 + 16) is indeed what I wanted. - Sampling by ratio? The `Sampler` class allows sampling using desired *ratios* as well. This is particularly handy when a specific balance ratio is desired. Suppose I need 50% of the entire population to come from the `NF` class, nothing from the `M` or `B` clas...
desired_ratios = {'X': -1, 'M': 0.0, 'C': 0.20, 'B': 0.0,'NF': 0.50} sampler.sample(desired_ratios=desired_ratios) sampler.sampled_class_populations
*Examples* Import PyOphidia Import *client* module from *PyOphidia* package
from PyOphidia import client
BSD-3-Clause
GettingStarted.ipynb
SofianeB/jh-prod-image
Instantiate a client Create a new *Client()* using the login parameters *username*, *password*, *host* and *port*. It will also try to resume the last session the user was connected to, as well as the last working directory and the last produced cube.
ophclient = client.Client(username="*oph-user*", password="*password*", server="ecas-server.dkrz.de", port="11732")
Submit a request Execute the request oph_list level=2:
ophclient.submit("oph_list level=2", display=True)
Set a Client for the Cube class Instantiate a new Client common to all Cube instances:
from PyOphidia import cube cube.Cube.setclient(username="oph-user",password="oph-passwd",server="ecas-server.dkrz.de",port="11732")
Create a Cube object with an existing cube identifier Instantiate a new Cube using the PID of an existing cube:
mycube = cube.Cube(pid='http://127.0.0.1/1/2')
HW3 Importing Libraries
import requests from bs4 import BeautifulSoup as bs import os import pickle import numpy as np import time import datetime as dt import csv import pandas as pd import nltk import re from nltk.corpus import stopwords import string import heapq # nltk.download('stopwords') # nltk.download('punkt')
[nltk_data] Downloading package punkt to /Users/hassan/nltk_data... [nltk_data] Unzipping tokenizers/punkt.zip.
MIT
HW3.ipynb
ihasanreza/HW3-ADM
1. Data collection 1.1.
URL = "https://myanimelist.net/topanime.php" urls = [] # list for storing urls of all the anime def get_urls(): """get_urls() returns the list of the urls for each anime""" for lim in range(0, 20000, 50): r = requests.get(URL, params={"limit": lim}) if r.status_code == 404: # in case...
19218
1.2
def crawl_animes(urls_): """crawl_animes function fetches html of every anime found by the get_url() method. It then saves them in an 'htmls' directory. Inside 'htmls' directory, it saves htmls wrt to the page folder it belongs to with the fashion 'htmls/page_rank_i/article_j.html'. In order to avoid r...
Starting from anime no. 19218
1.3
def parse_pages(i_, folder_name="anime_tsvs"): """This routine parses the htmls we downloaded and fetches the information we are required in the homework and saves them in an article_i.tsv file inside anime_tsvs directory.""" print("Working on page {}".format(i_)) page_rank = str(int(np.floor(...
2. Search Engine Pre-processing steps The steps that follow involve merging all the tsv files into a dataframe. We then process this dataframe by working on its description (synopsis) field. We do tokenization, removal of stopwords & punctuation, and stemming. The resulting dataframe is saved in the csv f...
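The tokenize / drop-stopwords-and-punctuation / stem pipeline described above can be sketched as follows. This is a simplified stand-in (a hard-coded tiny stopword list and a naive suffix-stripping "stemmer") rather than the notebook's actual NLTK-based code:

```python
import re

STOPWORDS = {'the', 'of', 'all', 'in', 'a', 'and', 'is', 'we', 'its', 'this', 'by'}  # tiny stand-in list

def preprocess(text):
    """Simplified version of the pipeline above: tokenize, drop stopwords
    and punctuation, then stem (a naive plural strip standing in for
    NLTK's stemmer)."""
    tokens = re.findall(r'[a-z]+', text.lower())   # tokenization drops punctuation too
    tokens = [t for t in tokens if t not in STOPWORDS]
    return [t[:-1] if t.endswith('s') and len(t) > 3 else t for t in tokens]

preprocess('The merging of all the TSVs')  # -> ['merging', 'tsv']
```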
def sort_files(t): """This method sorts all the tsv files in the following fashion anime_0.tsv, anime_1.tsv, anime_2.tsv, anime_3.tsv, .....""" return [a(x) for x in re.split(r'(\d+)', t)] def a(t): return int(t) if t.isdigit() else t def merge_tsvs(path, column_names): """Here we merge the ...
2.1 2.1.1
def get_vocabulary(synopsis, vocabulary_file = "vocabulary.pkl"): """Here we generate a vocab of all words from the description. We tag each word with an integer term_id and then save it in a binary file.""" vocab = set() for desc in synopsis: vocab = vocab.union(set(desc)) vocab_dic...
Please enter your query... query: saiyan race
2.2 2.2.1
def find_tfidf(word, desc, synopsis, idf=None): """Here we calculate tfidf score corresponding the inputted word.""" counter = 0 if idf == None: # calculate idf if not provided for desc in synopsis: if word in desc: counter += 1 idf = np.log...
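The cell above is truncated; a self-contained sketch of the same idea (assuming raw term frequency and a log inverse document frequency, which matches the `np.log` visible in the truncated code, but not necessarily the notebook's exact formula) is:

```python
import numpy as np

def tfidf(word, doc, corpus):
    """tf-idf sketch: raw term frequency in the document times
    log(N / document frequency) over the corpus."""
    tf = doc.count(word) / len(doc)
    df = sum(1 for d in corpus if word in d)
    idf = np.log(len(corpus) / df) if df else 0.0
    return tf * idf

corpus = [['saiyan', 'race'], ['race', 'car'], ['car']]
score = tfidf('saiyan', corpus[0], corpus)  # (1/2) * ln(3)
```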
The objective of this notebook, and more broadly this project, is to see whether we can discern a linear relationship between metrics found on Rotten Tomatoes and box office performance. Box office performance is measured in millions, as is budget. Because we have used scaling, interpretation of the raw coefficients will ...
df = unpickle_object("final_dataframe_for_analysis.pkl") #dataframe we got from webscraping and cleaning! #see other notebooks for more info. df.dtypes # there are all our features. Our target variable is Box_office df.shape
MIT
02-Project-Luther/Model_Creation_and_Analysis.ipynb
igabr/Metis_Projects_Chicago_2017
Upon further thought, it doesn't make sense to have rank_in_genre as a predictor variable for box office budget. When a movie is released, it is not ranked immediately. The ranks assigned often occur many years after the movie is released, and so are not related to the amount of money accrued at the box office. We will ...
df['Month'] = df['Month'].astype(object) df['Year'] = df['Year'].astype(object) del df['Rank_in_genre'] df.reset_index(inplace=True) del df['index'] percentage_missing(df) df.hist(layout=(4,2), figsize=(50,50))
From the above plots, we see that we have heavy skewness in all of our features and our target variable. The features will be scaled using StandardScaler. When splitting the data into training and test sets, I will fit my scaler on the training data only! There is no sign of multi-collinearity $(>= 0.9)$ - good to go!
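The fit-on-training-only rule just mentioned can be sketched in isolation (toy data, not the project's dataframe): the scaler learns its mean and standard deviation from the training split, then applies those same statistics to both splits, so no test-set information leaks into the transformation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Toy 1-feature matrix standing in for the real features.
X = np.arange(20, dtype=float).reshape(-1, 1)
X_train, X_test = train_test_split(X, test_size=0.25, random_state=0)

# Fit on the training split only; transform both splits with the
# training-set mean and std.
sc = StandardScaler().fit(X_train)
X_train_s = sc.transform(X_train)
X_test_s = sc.transform(X_test)
```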
plot_corr_matrix(df) X = unpickle_object("X_features_selection.pkl") #all features from the suffled dataframe. Numpy array y = unpickle_object("y_variable_selection.pkl") #target variable from shuffled dataframe. Numpy array final_df = unpickle_object("analysis_dataframe.pkl") #this is the shuffled dataframe!
Baseline Model and Cross-Validation with Holdout Sets Creation of Holdout Set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state = 0) #train on 75% of data sc_X_train = StandardScaler() sc_y_train = StandardScaler() sc_X_train.fit(X_train[:,:6])#only need to learn fit of first 6 - rest are dummies sc_y_train.fit(y_train) X_train[:,:6] = sc_X_train.transform(...
Baseline Model As we can see - the baseline model of regular linear regression is dreadful! Let's move on to more sophisticated methods!
baseline_model(X_train, X_test, y_train, y_test)
The R2 score of a basline regression model is -1.9775011421280398e+23 Mean squared error: 139458111276315616215040.00 The top 3 features for predictive power according to the baseline model is ['Country_argentina', 'Country_finland', 'Country_mexico']
Ridge, Lasso and Elastic Net regression - Holdouts
holdout_results = holdout_grid(["Ridge", "Lasso", "Elastic Net"], X_train, X_test, y_train, y_test) pickle_object(holdout_results, "holdout_model_results")
Cross-Validation - No Holdout Sets
sc_X = StandardScaler() sc_y = StandardScaler() sc_X.fit(X[:,:6])#only need to learn fit of first 6 - rest are dummies sc_y.fit(y) X[:,:6] = sc_X.transform(X[:,:6]) #only need to transform first 6 columns - rest are dummies y = sc_y.transform(y) no_holdout_results = regular_grid(["Ridge", "Lasso", "Elastic Net"], X, y...
Analysis of Results! Ridge Analysis
extract_model_comparisons(holdout_results, no_holdout_results, "Ridge")
The Model with no holdout set has a higher R2 of 0.4581898049882941. This is higher by 0.020384382171813153 The optimal parameters for this model are {'alpha': 107.5} The mean cross validation score for all of the data is: 0.426813529439751 The most important features accordning to this model is ['Budget_final', 'N...
Lasso Analysis
extract_model_comparisons(holdout_results, no_holdout_results, "Lasso")
The Model with the holdout set has a higher R2 of 0.4290751892395685. This is higher by 0.013850206355912664 The optimal parameters for this model are {'alpha': 0.10000000000000001} The mean cross validation score on the test set is: 0.41774774276015764 The most important features accordning to this model is ['Budg...
Elastic Net Analysis
extract_model_comparisons(holdout_results, no_holdout_results, "Elastic Net")
The Model with no holdout set has a higher R2 of 0.4459659739120837. This is higher by 0.0027491346588406906 The optimal parameters for this model are {'alpha': 0.1, 'l1_ratio': 0.1} The mean cross validation score for all of the data is: 0.42861943354426085 The most important features accordning to this model is [...
Explore a dataset of argumentative and dialog texts> Christian Stab and Iryna Gurevych (2014) Annotating Argument Components and Relations in Persuasive Essays. In: Proceedings of the the 25th International Conference on Computational Linguistics (COLING 2014), p.1501-1510, Ireland, Dublin.
import numpy as np import pandas as pd from tqdm.notebook import tqdm import spacy import os nlp = spacy.load("en_core_web_sm") dataset_folder = 'data/brat-project/' docs = [] for f in tqdm(os.listdir(dataset_folder)): if f.endswith('.txt'): with open(os.path.join(dataset_folder, f), 'r') as data: ...
MIT
implicit/explore.ipynb
ranieri-unimi/notes-ferrara-2022
Surveys
c = 'but' for z, doc in enumerate(docs): i = [j for j, w in enumerate(doc) if w.lemma_ == c] if len(i) > 0: pos = i[0] if len(doc[:pos]) > 0: print(doc[:pos]) print(c) print(doc[pos+1:], '\n') else: print(docs[z-1]) print(c) ...
However, on the other side of the coin are voices in the opposition, saying that universities provide not only skills for careers, but also academic knowledge for the human race, such as bio science, politics, and medicine. Nowadays, the popularity of mobile phones has brought about a lot of convenience but at the me...
Dialogues
dataset = '/Users/alfio/Dati/cornell movie-dialogs corpus/data/movie_lines.txt' with open(dataset, 'r', encoding='latin-1') as gf: lines = gf.readlines() raw = [nlp(l.split(' +++$+++ ')[-1].rstrip()) for l in lines[:200]] for z, doc in enumerate(raw): i = [j for j, w in enumerate(doc) if w.lemma_ == c] if l...
You always been this selfish? But I was? I looked for you back at the party, but you always seemed to be "occupied". She's not a... I'm workin' on it. But she doesn't seem to be goin' for him. I'm workin' on it. But she doesn't seem to be goin' for him. I really, really, really wanna go, but I can't. Not unless ...
Load brain data
mvp_VBM = joblib.load('mvp/mvp_vbm.jl') mvp_TBSS = joblib.load('mvp/mvp_tbss.jl')
MIT
analyses/suppl_simulations/brain_data_vs_brainsize.ipynb
lukassnoek/MVCA
... and brain size
brain_size = pd.read_csv('./mvp/PIOP1_behav_2017_MVCA_with_brainsize.tsv', sep='\t', index_col=0) # Remove subjects without known brain size (or otherwise excluded from dataset) include_subs = np.in1d(brain_size.index.values, mvp_VBM.subject_list) brain_size = brain_size.loc[include_subs] brain_size_VBM = brain_size....
Calculate correlations between voxels & brain sizeCheck-out linear, cubic, and quadratic correlation
if os.path.isfile('./cache/r2s_VBM.tsv'): r2s_VBM = pd.read_csv('./cache/r2s_VBM.tsv', sep='\t', index_col=0).values else: # Calculate R2 by-voxel with polynomial models of degrees 1, 2, 3 (linear, quadratic, cubic) r2s_VBM = np.empty((mvp_VBM.X.shape[1], 3)) # col 1 = Linear, col 2 = Poly2, col 3 = Poly3...
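The by-voxel computation in the (truncated) cell above fits polynomial models of several degrees and records an R² for each. A self-contained sketch of that idea for one "voxel" (synthetic data; `poly_r2` is an illustrative helper, not the notebook's code):

```python
import numpy as np

def poly_r2(x, y, degree):
    """R^2 of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

x = np.linspace(-1, 1, 50)
y = x ** 2                   # a purely quadratic relation
r2_lin = poly_r2(x, y, 1)    # ~0: a line explains none of it
r2_quad = poly_r2(x, y, 2)   # ~1: the quadratic fit is exact
```

A degree-3 call works the same way for the cubic case.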
Function for plotting
def plot_voxel(voxel_idx, data, brain_size, ax=None, add_title=False, scale_bs=False, **kwargs): try: len(voxel_idx) except: voxel_idx = [voxel_idx] # Useful for plotting regression lines later # scale brain size first if scale_bs: brain_size = StandardScaler()....
Inner function for cross-validation. This does the work.
def do_crossval(voxel_idx, data, brain_size, n_iter=100, n_fold=10): # make sure voxel_idx has a len try: len(voxel_idx) except: voxel_idx = [voxel_idx] # Create feature vectors out of brain size X_linear = PolynomialFeatures(degree=1).fit_transform(brain_size.reshape(-...
Wrapper function for multiprocessing
import multiprocessing from functools import partial import tqdm def run_CV_MP(data, voxel_idx, brain_size, n_iter, n_processes=10, n_fold=10, pool=None): if pool is None: private_pool = True pool = multiprocessing.Pool(processes=n_processes) else: private_pool = False resu...
Cross-validation
# How many iterations do we want? Iterations are repeats of KFold CV with random partitioning of the data, # to ensure that the results are not dependent on the (random) partitioning n_iter = 50 # How many voxels should we take? n_voxels = 500 # # What modality are we in? # modality = 'VBM' # # Which voxels are sel...
Cross-validate for all combinations of modality & relation type 2 modalities (TBSS, VBM) x 4 relation types (linear, quadratic, cubic, random) = 8 options in total. Each takes about an hour here.
# set-up pool here so we can terminate if necessary pool = multiprocessing.Pool(processes=10) for modality in ['VBM', 'TBSS']: # select right mvp & brain size mvp = mvp_VBM if modality == 'VBM' else mvp_TBSS brain_size = brain_size_VBM if modality == 'VBM' else brain_size_TBSS for relation_type in [0,...
0%| | 0/500 [00:00<?, ?it/s]
Load all results, make Supplementary Figures S7 (VBM) and S9 (TBSS)
modality = 'VBM' mvp = mvp_VBM if modality == 'VBM' else mvp_TBSS brain_size = brain_size_VBM if modality == 'VBM' else brain_size_TBSS results_linear_vox = pd.read_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %(modality, 0, n_voxels), sep='\t') results_poly2_vox = pd.read_csv('./cache/results_%s_type-%d_nvox-%d.tsv' %...
And descriptives?
to_plot = [{'Random voxels': results_random_vox}, {'Cubic correlating voxels': results_poly2_vox}, {'Quadratic correlating voxels': results_poly3_vox}, {'Linearly correlating voxels': results_linear_vox}] labels = ['Linear-Quadratic', 'Linear-Cubic'] for col, d in enumerate(to_plot): ...
For the Random voxels: -linear models have a mean +0.009 R^2 (SD 0.014, min: -0.024) than quadratic models, -and 0.019 (SD 0.027, max: -0.036) over cubic models. -A proportion of 0.134 of voxels prefers a quadratic model, and a proportion of 0.092 prefers a cubic model For the Cubic correlating voxels: -linear mode...
Supplementary Figure S8
import warnings warnings.filterwarnings("ignore", category=FutureWarning) warnings.filterwarnings("ignore", category=UserWarning) f, ax = plt.subplots(1, 3) to_plot = [{'Random voxels': results_random_vox}, {'Cubic correlating voxels': results_poly2_vox}] labels = ['Linear-Quadratic', 'Linear-Cubic'] for...
-0.0357580830296 -0.0387817539704 0.032486994328
Summarize compositional lasso results on independent simulation scenarios for a binary outcome
dir = '/panfs/panfs1.ucsd.edu/panscratch/lij014/Stability_2020/sim_data' dim.list = list() size = c(50, 100, 500, 1000) idx = 0 for (P in size){ for (N in size){ idx = idx + 1 dim.list[[idx]] = c(P=P, N=N) } } files = NULL for (dim in dim.list){ p = dim[1] n = dim[2] files = cbind(f...
BSD-3-Clause
simulations/notebooks_sim_bin/0.4_sim_ind_compLasso_binary_update.ipynb
shihuang047/stability-analyses
Example usage Here we will demonstrate how to use `pycounts_jkim` to count the words in a text file and plot the top 5 results. Imports
from pycounts_jkim.pycounts_jkim import count_words from pycounts_jkim.plotting import plot_words
MIT
docs/example.ipynb
jamesktkim/pycounts_jkim
Create a text file
quote = """Insanity is doing the same thing over and over and expecting different results.""" with open("einstein.txt", "w") as file: file.write(quote)
Count words
counts = count_words("einstein.txt") print(counts)
Counter({'over': 2, 'and': 2, 'insanity': 1, 'is': 1, 'doing': 1, 'the': 1, 'same': 1, 'thing': 1, 'expecting': 1, 'different': 1, 'results': 1})
Plot words
fig = plot_words(counts, n=5)
Large scale landscape evolution model with Priority flood flow router and Space_v2 The priority flood flow director is designed to calculate flow properties over large scale grids. In the following notebook we illustrate how the priority flood flow accumulator can be used to simulate landscape evolution using the SPAV...
import numpy as np from matplotlib import pyplot as plt from tqdm import tqdm import time from landlab import imshow_grid, RasterModelGrid from landlab.components import ( FlowAccumulator, DepressionFinderAndRouter, Space, SpaceLargeScaleEroder, PriorityFloodFlowRouter, )
CC-BY-4.0
tutorials/large_scale_LEM/large_scale_LEMs.ipynb
BCampforts/hylands_modeling
Create raster grid
# nr = 20 # nc = 20 nr = 75 nc = 75 xy_spacing = 10.0 mg = RasterModelGrid((nr, nc), xy_spacing=xy_spacing) z = mg.add_zeros("topographic__elevation", at="node") mg.at_node["topographic__elevation"][mg.core_nodes] += np.random.rand( mg.number_of_core_nodes ) s = mg.add_zeros("soil__depth", at="node", dtype=float) ...
Perceptron with StandardScaler & Quantile Transformer This code template is for a classification task using a simple Perceptron, a simple classification algorithm suitable for large-scale learning, where data rescaling is done using StandardScaler and feature transformation using QuantileTransformer...
!pip install imblearn import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as se from imblearn.over_sampling import RandomOverSampler from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from sklearn.linear_mod...
Apache-2.0
Classification/Linear Models/Perceptron_StandardScaler_QuantileTransformer.ipynb
shreepad-nade/ds-seed
Initialization Filepath of the CSV file
#filepath file_path= ""
List of features which are required for model training.
#x_values features=[]
Target feature for prediction.
#y_value target= ''
Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
df=pd.read_csv(file_path) df.head()
Feature Selection It is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to...
X = df[features] Y = df[target]
Data Preprocessing Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist, and convert the string class data in the da...
def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_du...
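The `NullClearner` cell above is truncated; a completed sketch of its per-column behaviour (numeric NaNs filled with the mean, categorical NaNs with the mode) looks like this. The `null_cleaner` helper is an illustrative reconstruction, not the notebook's exact code:

```python
import pandas as pd

def null_cleaner(s):
    """Fill numeric NaNs with the column mean and categorical NaNs
    with the column mode (completed sketch of NullClearner above)."""
    if pd.api.types.is_numeric_dtype(s):
        return s.fillna(s.mean())
    return s.fillna(s.mode()[0])

cleaned = null_cleaner(pd.Series([1.0, None, 3.0]))   # NaN -> mean (2.0)
cleaned2 = null_cleaner(pd.Series(['a', None, 'a']))  # NaN -> mode ('a')
```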
Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show()
Distribution Of Target Variable
plt.figure(figsize = (10,6)) se.countplot(Y)
Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. It involves taking a dataset and dividing it into two subsets: the first subset is used to fit/train the model, and the second is used for prediction. The main motive is to estimate the performance of th...
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
Handling Target Imbalance The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn perform poorly on, the minority class, although it is typically performance on the minority class that matters most. One approach to addressing imbalanced datasets is ...
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
_____no_output_____
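A minimal sketch of what random oversampling does (pure Python, hypothetical labels — not the imblearn implementation): minority-class rows are re-drawn with replacement until the classes balance.

```python
import random

def random_oversample(rows, labels, seed=123):
    # Duplicate random minority-class samples until all classes have equal counts
    rng = random.Random(seed)
    by_class = {}
    for row, lab in zip(rows, labels):
        by_class.setdefault(lab, []).append(row)
    target = max(len(v) for v in by_class.values())
    out_rows, out_labels = [], []
    for lab, members in by_class.items():
        extra = [rng.choice(members) for _ in range(target - len(members))]
        for row in members + extra:
            out_rows.append(row)
            out_labels.append(lab)
    return out_rows, out_labels

rows = [[0.1], [0.2], [0.3], [0.9]]
labels = [0, 0, 0, 1]
_, new_labels = random_oversample(rows, labels)
print(sorted(new_labels))   # -> [0, 0, 0, 1, 1, 1]
```

Note that only the training split is oversampled above, so the test set keeps its original class distribution.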
Model
The perceptron is an algorithm for supervised learning of binary classifiers. The algorithm learns the weights for the input signals in order to draw a linear decision boundary. This enables you to distinguish between the two linearly separable classes +1 and -1.

Model Tuning Parameters
> **penalty** -> The penalty...
# Build Model here
model = make_pipeline(StandardScaler(), QuantileTransformer(), Perceptron(random_state=123))
model.fit(x_train, y_train)
_____no_output_____
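The learning rule behind this model can be sketched in a few lines of pure Python (hypothetical toy data; the scikit-learn class adds penalties, shuffling, and stopping criteria on top of this):

```python
def perceptron_train(X, y, epochs=10, lr=1.0):
    # y must be in {-1, +1}; weights and bias start at zero
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:                 # misclassified -> update
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

# Linearly separable toy data: class is the sign of the first feature
X = [[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -1.5]]
y = [1, 1, -1, -1]
w, b = perceptron_train(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1 for xi in X]
print(preds)   # -> [1, 1, -1, -1]
```

Each update simply nudges the decision boundary toward a misclassified point, which is why the algorithm only converges when the classes are linearly separable.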
Model Accuracy
The `score()` method returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires every label set to be correctly predicted for each sample.
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
Accuracy score 68.75 %
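Mean accuracy is just the fraction of correct predictions; a minimal sketch with hypothetical labels:

```python
def accuracy(y_true, y_pred):
    # Fraction of positions where prediction matches the true label
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1]
print(f"Accuracy score {accuracy(y_true, y_pred) * 100:.2f} %")   # -> 75.00 %
```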
Confusion Matrix
A confusion matrix is used to understand the performance of a classification model or algorithm on a given test set where the true results are known.
# plot_confusion_matrix was removed in scikit-learn 1.2;
# ConfusionMatrixDisplay.from_estimator is the current equivalent
from sklearn.metrics import ConfusionMatrixDisplay
ConfusionMatrixDisplay.from_estimator(model, x_test, y_test, cmap=plt.cm.Blues)
_____no_output_____
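The matrix itself is just counts of (true label, predicted label) pairs; a pure-Python sketch with hypothetical labels:

```python
def confusion_matrix_2x2(y_true, y_pred):
    # Rows = true class, columns = predicted class, for labels {0, 1}
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(confusion_matrix_2x2(y_true, y_pred))   # -> [[2, 1], [1, 2]]
```

The diagonal holds the correct predictions; everything off the diagonal is an error of one specific kind.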
Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false.
* **where**:
    - Precision:- Accuracy of positive predictions.
    - Recall:- Fraction of positives that were correctly identified.
    - f1-score...
print(classification_report(y_test,model.predict(x_test)))
              precision    recall  f1-score   support

           0       0.93      0.54      0.68        50
           1       0.55      0.93      0.69        30

    accuracy                           0.69        80
   macro avg       0.74      0.74      0.69        80
weighted avg       0.79      0.69      0.69        ...
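These three quantities can be derived directly from the prediction counts; a pure-Python sketch for the positive class (hypothetical labels):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)          # accuracy of positive predictions
    recall = tp / (tp + fn)             # fraction of positives found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))   # -> 0.667 0.667 0.667
```

The f1-score is the harmonic mean of precision and recall, so it only gets high when both are.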
Goal
As with the previous scalar representation, the goal here is to be able to randomly create neurons and synapses and to find a linear decoder that can correctly decode the non-linearly encoded value.

To do this, in this section, instead of considering a neuron with only one input and one output, I'...
class VectorNeuron(object):
    def __init__(self, a, b, sat, weights):
        self.a = a
        self.b = b
        self.saturation = sat
        self.weights = weights

# redefining the vector creation function to allow the neurons
# different totally random synapse weights
def rand_vector_neuron_creation(n_synaps...
_____no_output_____
MIT
experiments/notebooks/VectorRepresentation_tests_V2.ipynb
leomrocha/minibrain
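Finding a linear decoder, as done with `linear_model.LinearRegression` later in this notebook, amounts to a least-squares fit from encoded activities back to the inputs. A minimal NumPy sketch with hypothetical activities (the rectified projections include both signs of each projection, so an exact linear decoder is guaranteed to exist):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 samples of a 2-D input, encoded by 50 rectified
# random projections (both signs of each projection are included)
ins = rng.uniform(-1, 1, size=(200, 2))
proj = rng.normal(size=(2, 25))
encs = np.concatenate([np.maximum(0, ins @ proj),
                       np.maximum(0, -(ins @ proj))], axis=1)

# Least-squares decoder: solve encs @ D ~= ins
D, *_ = np.linalg.lstsq(encs, ins, rcond=None)
mse = np.mean((encs @ D - ins) ** 2)
print(mse < 1e-10)   # -> True
```

With realistic saturating neurons (as in the `VectorNeuron` above) the decoder is only approximate, which is what the `model.score(...)` values further down measure.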
Important NOTE:
I tried min_weight values of -1.0, -0.9 and 0.0. The value 0 clearly outperforms the negative values, so the minimum synapse weight should be 0.0 (zero).
min_weight = -0.8
max_weight = 0.8
random.uniform(min_weight, max_weight)

# Definition of function evaluation
def limited_vector_neuron_evaluator(x, neuron):
    a = neuron.a
    b = neuron.b
    sat = neuron.saturation
    current = np.minimum(x, np.ones(x.shape)).transpose().dot(neuron.weights)
    return max(0, min(...
_____no_output_____
It seems that a few particular values are taking control of all the elements. I think I should ALSO limit the Post Synaptic Current (PSC) for each connection.
decoders = []
ntrain = 50
lm = linear_model.LinearRegression()
model = lm.fit(encs[:ntrain], ins[:ntrain])
# now evaluate the results with those numbers
print(model.score(encs[ntrain:], ins[ntrain:]))
plt.plot(range(n_vectors), model.predict(encs));

decoders = []
ntrain = 200
lm = linear_model.LinearRegre...
0.999980705409
The limited-current neuron outperforms the non-limited one, but the difference does not seem to be large. The main point of limiting the neuron output, I think, is to avoid divergence; I'd rather have more noise than divergence.

Important NOTE
The number of neurons in the ensemble, plus the number of training elements pla...
%%time
# make ensembles for different dimensions; the ensembles will have 1000 neurons each
# dimensions is the input dimension of the function
ensembles = {}
rand_values = {}
# dimensions = [10, 20, 50, 100, 1000, 5000, 10000]
dimensions = [2, 3, 4, 10, 20, 100]
nsamples = 10000
nneurons = [10, 20, 50, 100, 1000]
for d in dimens...
_____no_output_____
I am not sure how to interpret this just yet; I'll have to double-check the results and try to make some sense of it. It seems that for certain values the regression of RANDOM values is really good, but for the others it is really bad. This seems to be related to the number of dimensions... 2 dimensions is great but...
# Combine the two conditions into one boolean mask to avoid the
# "Boolean Series key will be reindexed" warning from chained indexing
scoresdf[(abs(scoresdf[3]) <= 1) & (abs(scoresdf[3]) >= 0.5)]
_____no_output_____
--- `seaborn`
# The normal imports
import numpy as np
from numpy.random import randn
import pandas as pd

# Import the stats library from scipy
from scipy import stats

# These are the plotting modules and libraries we'll use:
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns

# Declare a pre-defined st...
_____no_output_____
MIT
Lecture34- seaborn.ipynb
dajebbar/AI-Programming-with-python
What is seaborn? [Seaborn](https://seaborn.pydata.org/) is a library built specifically for data visualization in Python. Like the plotting functions of pandas, it is built on top of matplotlib, and it has a lot of nice features for easy visualization and styling. What is the difference between categorical, ordinal and numer...
movie = pd.read_csv('movie_raitings.csv')
movie.head()

# let's check the column names
movie.columns

# We see that there are column names with spaces,
# which can be tedious when calling these names.
# let's change them
movie.columns = ['film', 'genre', 'critic_rating',
                 'audience_rating', 'budget', 'ye...
_____no_output_____
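To illustrate the distinction the question above raises (a sketch with hypothetical values): categorical/nominal values have no order, ordinal values have an order but no meaningful distance, and numerical values support arithmetic. An ordinal variable can therefore be mapped to integers that preserve its order:

```python
# Hypothetical ordinal scale: ratings have an order but no fixed "distance"
order = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}
ratings = ["good", "poor", "excellent", "fair"]

encoded = [order[r] for r in ratings]
print(encoded)   # -> [2, 0, 3, 1]

# A nominal variable like genre has no such order; encoding it as integers
# would impose a fake ranking, which is why one-hot encoding is used instead.
```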
--- `countplot` A count plot can be thought of as a histogram across a categorical, instead of quantitative, variable. The basic API and options are identical to those for `barplot()`, so you can compare counts across nested variables.
ax = sns.countplot(movie.loc[:, 'genre'])
_____no_output_____
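What `countplot` draws is just the per-category frequency; a pure-Python sketch of those counts (hypothetical genres):

```python
from collections import Counter

# Hypothetical genre column; countplot draws one bar per key with these heights
genres = ["Drama", "Comedy", "Drama", "Action", "Comedy", "Drama"]
counts = Counter(genres)
print(counts.most_common())   # -> [('Drama', 3), ('Comedy', 2), ('Action', 1)]
```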
`jointplot` Draw a plot of two variables with bivariate and univariate graphs.
j = sns.jointplot(data=movie, x='critic_rating', y='audience_rating')

# we can change the style of the joint plot
j = sns.jointplot(data=movie, x='critic_rating', y='audience_rating', kind='hex')
_____no_output_____
`Histograms`
# distplot is deprecated in recent seaborn; histplot is the current equivalent
sns.histplot(movie.loc[:, 'audience_rating'], bins=15, label='audience_rating')
sns.histplot(movie.loc[:, 'critic_rating'], bins=15, label='critic_rating')
plt.legend()
plt.show()
_____no_output_____
Stacked histograms
# Let's compare the income of the movies by genre
lst = []
labels = []
for genre in movie.loc[:, 'genre'].unique():
    lst.append(movie.loc[movie.loc[:, 'genre'] == genre, 'budget'])
    labels.append(genre)

plt.figure(figsize=(10, 8))
plt.hist(lst, bins=50, stacked=True, rwidth=1, label=labels)
plt.legend()
plt.show()
_____no_output_____
`kdeplot` A kernel density estimate (KDE) plot is a method for visualizing the distribution of observations in a dataset, analogous to a histogram. KDE represents the data using a continuous probability density curve in one or more dimensions.
# Passing y positionally is deprecated in recent seaborn; use keyword arguments
sns.kdeplot(x=movie.loc[:, 'critic_rating'], y=movie.loc[:, 'audience_rating'])
_____no_output_____
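A KDE is a sum of kernel bumps, one centred on each observation; a 1-D pure-Python sketch with a Gaussian kernel (hypothetical data and bandwidth):

```python
import math

def gaussian_kde(data, h):
    # Returns the estimated density: mean of Gaussian bumps centred on the data
    def density(x):
        k = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
        return sum(k((x - xi) / h) for xi in data) / (len(data) * h)
    return density

data = [1.0, 1.2, 2.8, 3.0, 3.1]
f = gaussian_kde(data, h=0.4)
# Density is higher near the cluster around 3 than in the gap around 2
print(f(3.0) > f(2.0))   # -> True
```

The bandwidth `h` plays the role that bin width plays for a histogram: smaller values give a spikier curve, larger values a smoother one.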
`violinplot`
comedy = movie.loc[movie.loc[:, 'genre'] == 'Comedy', :]
sns.violinplot(data=comedy, x='year', y='critic_rating')
_____no_output_____
Creating a `FacetGrid`
sns.FacetGrid(movie, row='genre', col='year', hue='genre')\
   .map(plt.scatter, 'critic_rating', 'audience_rating')

# can populate with any type of chart. Example: histograms
sns.FacetGrid(movie, row='genre', col='year', hue='genre')\
   .map(plt.hist, 'budget')
_____no_output_____
Exploratory analysis of the "Winequality-red.csv" dataset from Module 02 of the IGIT BOOTCAMP
# importing the libraries
import pandas as pd              # library for working with dataframes
import numpy as np               # library for optimized work with matrices and vectors
import matplotlib.pylab as plt   # library for building plots
import seaborn as sn             # library for "prettier" plots

# crian...
_____no_output_____
MIT
DataAnalysis_Desafio_mod02.ipynb
vinicius-mattoso/Wine_classifier
**How many instances and attributes does the dataset have?**
instancias, atributos = df_vinhos.shape
print("The dataset has {} instances and {} attributes".format(instancias, atributos))
The dataset has 1599 instances and 12 attributes
**Are there any null values?**
df_vinhos.info()

# counting the null values
df_vinhos.isnull().sum()

print("There are {} attributes of type {} and {} attributes of type {}.".format(
    df_vinhos.dtypes.value_counts()[0], df_vinhos.dtypes.value_counts().index[0],
    df_vinhos.dtypes.value_counts()[1], df_vinhos.dtypes.value_counts().index[1]))

# applying the "statistic...
_____no_output_____
**Applying the correlation matrix**
# non-graphical correlation matrix
df_vinhos.corr()
_____no_output_____
Looking at the correlation heatmap below, we can see that wine quality correlates strongly with the amount of alcohol, the sulphates, the volatile acidity and the citric acid.
# plotted correlation matrix
plt.figure(figsize=(12, 9))
matriz_correlacao = df_vinhos.corr()
sn.heatmap(matriz_correlacao, annot=True)
plt.show()
_____no_output_____
Now let's make a series of plots to better visualize the correlation between the variables.
%matplotlib inline
from matplotlib import style
style.use("seaborn-colorblind")

# plt.plot does not accept kind/colormap keywords; the pandas plot API does
df_vinhos.plot(x='pH', y='alcohol', c='quality', kind='scatter', colormap='Reds')
_____no_output_____
We can see that there is an inverse correlation between the density of the wine and its alcohol content.
fig, ax = plt.subplots()
plt.scatter(df_vinhos['alcohol'], df_vinhos['density'], marker='.')
plt.grid(True, linestyle='-.')
plt.tick_params(labelcolor='r', labelsize='medium', width=3)
plt.xlabel("alcohol")
plt.ylabel("density")
plt.show()
_____no_output_____
We can see that there is a correlation between the sulphates and the chlorides, and that the chloride values are heavily concentrated around 0.1.
fig, ax = plt.subplots()
plt.scatter(df_vinhos['sulphates'], df_vinhos['chlorides'], marker='.')
plt.grid(True, linestyle='-.')
plt.tick_params(labelcolor='r', labelsize='medium', width=3)
plt.xlabel("sulphates")
plt.ylabel("chlorides")
plt.show()
_____no_output_____
We can see that there is a negative correlation between the citric acid and the volatile acidity.
fig, ax = plt.subplots()
plt.scatter(df_vinhos['volatile acidity'], df_vinhos['citric acid'], marker='.')
plt.grid(True, linestyle='-.')
plt.tick_params(labelcolor='r', labelsize='medium', width=3)
plt.xlabel("volatile acidity")
plt.ylabel("citric acid")
plt.show()
_____no_output_____
Splitting into Features (Inputs) and Target (Output)
# splitting the dataset into inputs and output
entradas = df_vinhos.iloc[:, :-1]   # selects every column except the last
saida = df_vinhos.iloc[:, -1]       # selects only the wine quality column

entradas.head(5)
saida.head(5)
_____no_output_____
**How many instances exist for a wine quality equal to 5?**
# identifying the existing instances for each value
df_vinhos['quality'].value_counts()
df_vinhos['quality'].unique()
_____no_output_____
Creating a new dataset with a binary classification, where wines with quality above 5 are considered good (1)
# modifying the dataset
new_df = df_vinhos.copy()
new_df['nova_qualidade'] = new_df['quality'].apply(lambda x: 0 if x <= 5 else 1)
new_df.tail()

# Since we created a new target, we can drop the old quality column
new_df.drop('quality', axis=1, inplace=True)
_____no_output_____
After creating the new target, we can see that alcohol, volatile acidity, sulphates and total sulfur dioxide are the features most correlated with it. We can also see that in this new dataset the importance shifted from citric acid to total sulfur dioxide.
# plotted correlation matrix of the new dataset
plt.figure(figsize=(12, 9))
matriz_correlacao = new_df.corr()
sn.heatmap(matriz_correlacao, annot=True)
plt.show()
_____no_output_____
We can see that there is a positive correlation between the total sulfur dioxide and the free sulfur dioxide, i.e. the more sulfur dioxide there is in total, the more free sulfur dioxide there is.
fig, ax = plt.subplots()
plt.scatter(df_vinhos['total sulfur dioxide'], df_vinhos['free sulfur dioxide'], marker='.')
plt.grid(True, linestyle='-.')
plt.tick_params(labelcolor='r', labelsize='medium', width=3)
plt.xlabel("total sulfur dioxide")
plt.ylabel("free sulfur dioxide")
plt.show()
_____no_output_____