# Machine Learning Pipeline - Feature Engineering

In the following notebooks, we will go through the implementation of each one of the steps in the Machine Learning Pipeline. We will discuss:

1. Data Analysis
2. **Feature Engineering**
3. Feature Selection
4. Model Training
5. Obtaining Predictions / Scoring

We will use the ho...
# to handle datasets
import pandas as pd
import numpy as np

# for plotting
import matplotlib.pyplot as plt

# for the yeo-johnson transformation
import scipy.stats as stats

# to divide train and test set
from sklearn.model_selection import train_test_split

# feature scaling
from sklearn.preprocessing import MinMaxSc...
(1460, 81)
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Separate dataset into train and test

It is important to separate our data into training and testing sets. When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set, to avoid over-fitting.

Our feature engineering techniques will learn:...
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['Id', 'SalePrice'], axis=1),  # predictive variables
    data['SalePrice'],  # target
    test_size=0.1,  # portion of dataset to allocate to tes...
## Feature Engineering

In the following cells, we will engineer the variables of the House Price Dataset so that we tackle:

1. Missing values
2. Temporal variables
3. Non-Gaussian distributed variables
4. Categorical variables: remove rare labels
5. Categorical variables: convert strings to numbers
6. Put the variables in a sim...
y_train = np.log(y_train)
y_test = np.log(y_test)
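Because the model is trained on the log of the target, predictions must be mapped back to the original price scale with the exponential. A minimal sketch (with hypothetical sale-price values) showing the round trip:

```python
import numpy as np

y = np.array([100000.0, 250000.0, 400000.0])  # hypothetical sale prices

y_log = np.log(y)       # the scale the model trains on
y_back = np.exp(y_log)  # invert predictions back to prices

print(np.allclose(y, y_back))  # True
```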
## Missing values

### Categorical variables

We will replace missing values with the string "Missing" in those variables with a lot of missing data. Alternatively, we will replace missing data with the most frequent category in those variables that contain fewer observations without values. This is common practice.
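A minimal sketch of the two strategies on a toy frame (the column names and values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "many_na": [None, "Grvl", None, None, "Pave"],                 # mostly missing
    "few_na": ["BrkFace", None, "Stone", "BrkFace", "BrkFace"],   # few missing
})

# lots of missing data -> make 'Missing' an explicit category
df["many_na"] = df["many_na"].fillna("Missing")

# few missing observations -> impute with the most frequent category
df["few_na"] = df["few_na"].fillna(df["few_na"].mode()[0])

print(df["many_na"].tolist())  # ['Missing', 'Grvl', 'Missing', 'Missing', 'Pave']
print(df["few_na"].tolist())   # ['BrkFace', 'BrkFace', 'Stone', 'BrkFace', 'BrkFace']
```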
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']

# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available ...
### Numerical variables

To engineer missing values in numerical variables, we will:

- add a binary missing indicator variable
- and then replace the missing values in the original variable with the mean
# now let's identify the numerical variables
num_vars = [
    var for var in X_train.columns
    if var not in cat_vars and var != 'SalePrice'
]

# number of numerical variables
len(num_vars)

# make a list with the numerical variables that contain missing values
vars_with_na = [
    var for var in num_vars
    if X_train[...
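The two steps (indicator plus mean imputation) can be sketched on a toy column (values hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"LotFrontage": [60.0, np.nan, 80.0, np.nan, 70.0]})

# binary indicator that flags which rows were originally missing
df["LotFrontage_na"] = df["LotFrontage"].isnull().astype(int)

# replace missing values with the mean
# (in practice, the mean is learned from the train set only)
df["LotFrontage"] = df["LotFrontage"].fillna(df["LotFrontage"].mean())

print(df["LotFrontage_na"].tolist())  # [0, 1, 0, 1, 0]
print(df["LotFrontage"].tolist())     # [60.0, 70.0, 80.0, 70.0, 70.0]
```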
## Temporal variables

### Capture elapsed time

We learned in the previous notebook that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
def elapsed_years(df, var):
    # capture difference between the year variable
    # and the year in which the house was sold
    df[var] = df['YrSold'] - df[var]
    return df

for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
    X_train = elapsed_years(X_train, var)
    X_test = elapsed_years(X_test, var)

# no...
## Numerical variable transformation

### Logarithmic transformation

In the previous notebook, we observed that the numerical variables are not normally distributed. We will transform the positive numerical variables with the logarithm in order to get a more Gaussian-like distribution.
for var in ["LotFrontage", "1stFlrSF", "GrLivArea"]:
    X_train[var] = np.log(X_train[var])
    X_test[var] = np.log(X_test[var])

# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]

# same for train s...
### Yeo-Johnson transformation

We will apply the Yeo-Johnson transformation to LotArea.
# the yeo-johnson transformation learns the best exponent to transform the variable
# it needs to learn it from the train set:
X_train['LotArea'], param = stats.yeojohnson(X_train['LotArea'])

# and then apply the transformation to the test set with the same
# parameter: note how this time we pass param as argument to ...
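The pattern (learn the parameter on the train set, re-use it on the test set) can be sketched with synthetic skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
train = rng.lognormal(size=100)  # hypothetical skewed variable
test = rng.lognormal(size=20)

# learn the transformation parameter (lambda) from the train set only
train_t, lmbda = stats.yeojohnson(train)

# apply the same learned parameter to the test set
test_t = stats.yeojohnson(test, lmbda=lmbda)
```

Passing `lmbda` makes `stats.yeojohnson` return only the transformed array, so the test set is transformed consistently with the train set.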
### Binarize skewed variables

A few variables were very skewed; we will transform those into binary variables.
skewed = [
    'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
    '3SsnPorch', 'ScreenPorch', 'MiscVal'
]

for var in skewed:
    # map the variable values into 0 and 1
    X_train[var] = np.where(X_train[var] == 0, 0, 1)
    X_test[var] = np.where(X_test[var] == 0, 0, 1)
## Categorical variables

### Apply mappings

These are variables whose values have an assigned order, related to quality. For more information, check the Kaggle website.
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5,
                 'Missing': 0, 'NA': 0}

qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
             'HeatingQC', 'KitchenQual', 'FireplaceQu',
             'GarageQual', 'GarageCond']

for v...
### Removing Rare Labels

For the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses will be replaced by the string "Rare".

To learn more about how to handle categorical v...
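A minimal sketch of the grouping logic (hypothetical category counts, 1% threshold):

```python
import pandas as pd

# 198 'A', plus two categories seen only once each (0.5% < 1%)
s = pd.Series(["A"] * 198 + ["B"] + ["C"])

# categories present in at least 1% of observations
freq = s.value_counts(normalize=True)
frequent = freq[freq >= 0.01].index

# everything else becomes 'Rare'
s = s.where(s.isin(frequent), "Rare")

print(sorted(s.unique()))  # ['A', 'Rare']
```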
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure', 'GarageFinish', 'Fence']

# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
    var for var in cat_vars if var not in qual_vars
]

len(cat_others)

def find_frequent_labels(df, var, rare_pe...
MSZoning Index(['FV', 'RH', 'RL', 'RM'], dtype='object', name='MSZoning')
Street Index(['Pave'], dtype='object', name='Street')
Alley Index(['Grvl', 'Missing', 'Pave'], dtype='object', name='Alley')
LotShape Index(['IR1', 'IR2', 'Reg'], dtype='object', name='LotShape')
LandContour Index(['Bnk', 'HLS', 'Low', 'Lvl']...
### Encoding of categorical variables

Next, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.

To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learn...
# this function will assign discrete values to the strings of the variables,
# so that the smaller value corresponds to the category that shows the smaller
# mean house sale price
def replace_categories(train, test, y_train, var, target):
    tmp = pd.concat([X_train, y_train], axis=1)

    # order the catego...
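The idea behind the function can be sketched on a toy frame (column names and values are hypothetical): order the categories by their mean target, then map each one to its rank.

```python
import pandas as pd

df = pd.DataFrame({"Nbhd":  ["X", "X", "Y", "Y", "Z", "Z"],
                   "Price": [1.0, 2.0, 5.0, 6.0, 3.0, 4.0]})

# order categories from smallest to largest mean target
ordered = df.groupby("Nbhd")["Price"].mean().sort_values().index

# map each category to its rank: 0 for the cheapest, and so on
mapping = {cat: i for i, cat in enumerate(ordered)}
df["Nbhd"] = df["Nbhd"].map(mapping)

print(mapping)              # {'X': 0, 'Z': 1, 'Y': 2}
print(df["Nbhd"].tolist())  # [0, 0, 2, 2, 1, 1]
```

This is what gives the encoded variable its monotonic relationship with the target.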
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the mean house sale price. (Remember that the target is log-transformed; that is why the differences seem so small.)

## Feature Scaling

For use in linea...
# create scaler
scaler = MinMaxScaler()

# fit the scaler to the train set
scaler.fit(X_train)

# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
    scaler.transform(X_train),
    columns=X_train.columns
)

X_test = pd.DataFra...
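`MinMaxScaler` rescales each column to [0, 1] using the minimum and maximum learned from the train set; a quick sketch (hypothetical single feature) verifying the formula x' = (x - min) / (max - min):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0], [3.0], [5.0]])  # hypothetical single feature

scaler = MinMaxScaler().fit(X)
scaled = scaler.transform(X)

# same result computed by hand
manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(np.allclose(scaled, manual))  # True
```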
Fictional Army - Filtering and Sorting

Introduction: This exercise was inspired by this [page](http://chrisalbon.com/python/). Special thanks to https://github.com/chrisalbon for sharing the dataset and materials.

Step 1. Import the necessary libraries
import pandas as pd
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 2. This is the data given as a dictionary
# Create an example dataframe about a fictional army
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks',
                         'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons',
                         'Scouts', 'Scouts', 'Scouts', 'Scouts'],
            'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd', '1st', '1st', '2nd...
Step 3. Create a dataframe and assign it to a variable called army. Don't forget to include the columns names in the order presented in the dictionary ('regiment', 'company', 'deaths'...) so that the column index order is consistent with the solutions. If omitted, pandas will order the columns alphabetically.
army = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'deaths', 'battles', 'size', 'veterans', 'readiness', 'armored', 'deserters', 'origin'])
Step 4. Set the 'origin' column as the index of the dataframe
army = army.set_index('origin')
army
Step 5. Print only the column veterans
army['veterans']
Step 6. Print the columns 'veterans' and 'deaths'
army[['veterans', 'deaths']]
Step 7. Print the names of all the columns.
army.columns
Step 8. Select the 'deaths', 'size' and 'deserters' columns from Maine and Alaska
# Select all rows with the index labels "Maine" and "Alaska"
army.loc[['Maine', 'Alaska'], ['deaths', 'size', 'deserters']]
Step 9. Select the rows 3 to 7 and the columns 3 to 6
army.iloc[3:7, 3:6]
Step 10. Select every row after the fourth row
army.iloc[3:]
Step 11. Select every row up to the 4th row
army.iloc[:3]
Step 12. Select the 3rd column up to the 7th column
# the first : means all rows
# after the comma you select the column range
army.iloc[:, 4:7]
Step 13. Select rows where df.deaths is greater than 50
army[army['deaths'] > 50]
Step 14. Select rows where df.deaths is greater than 500 or less than 50
army[(army['deaths'] > 500) | (army['deaths'] < 50)]
Step 15. Select all the regiments not named "Dragoons"
army[(army['regiment'] != 'Dragoons')]
Step 16. Select the rows called Texas and Arizona
army.loc[['Arizona', 'Texas']]
Step 17. Select the third cell in the row named Arizona
army.loc[['Arizona'], ['deaths']]

# OR

army.iloc[[0], army.columns.get_loc('deaths')]
Step 18. Select the third cell down in the column named deaths
army.loc['Texas', 'deaths']

# OR

army.iloc[[2], army.columns.get_loc('deaths')]
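The difference between the two selections (label-based `loc` vs position-based `iloc`) in a self-contained sketch with a hypothetical frame:

```python
import pandas as pd

df = pd.DataFrame({"deaths": [523, 52, 25]},
                  index=["Arizona", "California", "Texas"])

# label-based lookup: row label and column label
by_label = df.loc["Texas", "deaths"]

# position-based lookup: third row (position 2), column found by name
by_position = df.iloc[2, df.columns.get_loc("deaths")]

print(by_label, by_position)  # 25 25
```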
Title: Spread Element
Dependencies: Matplotlib
Backends: Matplotlib, Bokeh
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
BSD-3-Clause
examples/reference/elements/matplotlib/Spread.ipynb
stonebig/holoviews
``Spread`` elements have the same data format as the [``ErrorBars``](ErrorBars.ipynb) element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as ``Curve`` is the continuous version of ``Scatter``). These are often paired with a...
np.random.seed(42)
xs = np.linspace(0, np.pi*2, 20)
err = 0.2 + np.random.rand(len(xs))
hv.Spread((xs, np.sin(xs), err))
Asymmetric

Given three value dimensions corresponding to the position on the y-axis, the negative error and the positive error, ``Spread`` can be used to visualize asymmetric errors:
%%opts Spread (facecolor='indianred' alpha=1)
xs = np.linspace(0, np.pi*2, 20)
hv.Spread((xs, np.sin(xs),
           0.1 + np.random.rand(len(xs)),
           0.1 + np.random.rand(len(xs))),
          vdims=['y', 'yerrneg', 'yerrpos'])
# Vertex AI: Track parameters and metrics for custom training jobs

## Overview

This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.

## Dataset

This example use...
import sys

if "google.colab" in sys.modules:
    USER_FLAG = ""
else:
    USER_FLAG = "--user"

!python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Restart the kernel

After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
# Automatically restart kernel after installs
import os

if not os.getenv("IS_TESTING"):
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
## Before you begin

### Select a GPU runtime

**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"**

### Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google...
import os

PROJECT_ID = ""

# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
Otherwise, set your project ID here.
if PROJECT_ID == "" or PROJECT_ID is None:
    PROJECT_ID = "[your-project-id]"  # @param {type:"string"}
Set gcloud config to your project ID.
!gcloud config set project $PROJECT_ID
### Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
### Authenticate your Google Cloud account

**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.

**Otherwise**, follow these steps:

1. In the Cloud C...
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebooks, then don't execute this code ...
### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also s...
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
REGION = "[your-region]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
## Import libraries and define constants

Import required libraries.
import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils
## Initialize Vertex AI and set an _experiment_

Define experiment name.
EXPERIMENT_NAME = "" # @param {type:"string"}
If EXPERIMENT_NAME is not set, set a default one below:
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
    EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
Initialize the *client* for Vertex AI.
aiplatform.init(
    project=PROJECT_ID,
    location=REGION,
    staging_bucket=BUCKET_NAME,
    experiment=EXPERIMENT_NAME,
)
## Tracking parameters and metrics in Vertex AI custom training jobs

This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_NAME}/data/

gcs_csv_path = f"{BUCKET_NAME}/data/abalone_train.csv"
### Create a managed tabular dataset from a CSV

A managed dataset can be used to create an AutoML model or a custom model.
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])

ds.resource_name
### Write the training script

Run the following cell to create the training script that is used in the sample custom training job.
%%writefile training_script.py

import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
                    default=10, type=int,
                    help='Nu...
### Launch a custom training job and track its training parameters on Vertex AI ML Metadata
job = aiplatform.CustomTrainingJob(
    display_name="train-abalone-dist-1-replica",
    script_path="training_script.py",
    container_uri="gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest",
    requirements=["gcsfs==0.7.1"],
    model_serving_container_image_uri="gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:late...
Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
aiplatform.start_run("custom-training-run-1")  # Change this to your desired run name

parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)

model = job.run(
    ds,
    replica_count=1,
    model_display_name="abalone-model",
    args=[f"--epochs={parameters['epochs']}", f"--num_units={paramet...
## Deploy Model and calculate prediction metrics

Deploy model to Google Cloud. This operation will take 10-20 mins.
endpoint = model.deploy(machine_type="n1-standard-4")
Once the model is deployed, perform online prediction using the `abalone_test` dataset and calculate prediction metrics.

Prepare the prediction dataset.
def read_data(uri):
    dataset_path = data_utils.get_file("auto-mpg.data", uri)
    col_names = [
        "Length", "Diameter", "Height", "Whole weight",
        "Shucked weight", "Viscera weight", "Shell weight", "Age",
    ]
    dataset = pd.read_csv(
        dataset_p...
Perform online prediction.
prediction = endpoint.predict(test_dataset.tolist())

prediction
Calculate and track prediction evaluation metrics.
mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)

aiplatform.log_metrics({"mse": mse, "mae": mae})
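On a toy example (hypothetical labels and predictions), the two metrics logged here work out as follows:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [3.0, 5.0, 7.0]  # hypothetical labels
y_pred = [2.5, 5.0, 8.0]  # hypothetical predictions

mae = mean_absolute_error(y_true, y_pred)  # (0.5 + 0.0 + 1.0) / 3 = 0.5
mse = mean_squared_error(y_true, y_pred)   # (0.25 + 0.0 + 1.0) / 3

print(mae, mse)
```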
Extract all parameters and metrics created during this experiment.
aiplatform.get_experiment_df()
## View data in the Cloud Console

Parameters and metrics can also be viewed in the Cloud Console.
print("Vertex AI Experiments:")
print(
    f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
## Cleaning up

To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial: Tra...
delete_training_job = True
delete_model = True
delete_endpoint = True

# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False

# Delete the training job
job.delete()

# Delete the model
model.delete()

# Delete the endpoint
endpoint.delete()

if delete_bucket and "BUCKET_NAME" in gl...
End-To-End Example: Password Program

Password Program:

- 5 attempts for the password
- On correct password, print "Access Granted", then end the program
- On incorrect password, print "Invalid Password Attempt" and give the user another try
- After 5 attempts, print "You are locked out". Then end the program.
secret = "rhubarb"
attempts = 0

while True:
    password = input("Enter Password: ")
    attempts = attempts + 1
    if password == secret:
        print("Access Granted!")
        break
    print("Invalid password attempt #", attempts)
    if attempts == 5:
        print("You are locked out")
        break
Enter Password: sd
Invalid password attempt # 1
Enter Password: fds
Invalid password attempt # 2
Enter Password: sd
Invalid password attempt # 3
Enter Password: d
Invalid password attempt # 4
Enter Password: d
Invalid password attempt # 5
You are locked out
MIT
content/lessons/04/End-To-End-Example/ETEE-Password-Program.ipynb
MahopacHS/spring-2020-Lamk0810
Final Lab

*Felix Rojo Lapalma*

Main task

In this notebook, we will apply transfer learning techniques to finetune the [MobileNet](https://arxiv.org/pdf/1704.04861.pdf) CNN on the [Cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.

Procedures

In general, the main steps that we will follow are:

1. Load data, analyz...
# load libs
import os

import matplotlib.pyplot as plt
from IPython.display import SVG

# https://keras.io/applications/#documentation-for-individual-models
from keras.applications.mobilenet import MobileNet
from keras.datasets import cifar10
from keras.models import Model
from keras.utils.vis_utils import model_to_dot
f...
Using TensorFlow backend.
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
cuda
cuda_flag = False

if cuda_flag:
    # Setup one GPU for tensorflow (don't be greedy).
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    # The GPU id to use, "0", "1", etc.
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"
    # Limit tensorflow gpu usage.
    # Maybe you should comment these lines if you run tensorflow ...
1. Load data, analyze and split in *training*/*validation*/*testing* sets
# Cifar-10 class names
# We will create a dictionary for each type of label
# This is a mapping from the int class name to
# their corresponding string class name
LABELS = {
    0: "airplane",
    1: "automobile",
    2: "bird",
    3: "cat",
    4: "deer",
    5: "dog",
    6: "frog",
    7: "horse",
    8: "ship",
    ...
Train
airplane   : 5000 or 10.00%
automobile : 5000 or 10.00%
bird       : 5000 or 10.00%
cat        : 5000 or 10.00%
deer       : 5000 or 10.00%
dog        : 5000 or 10.00%
frog       : 5000 or 10.00%
horse      : 5000 or 10.00%
ship       : 5000 or 10.00%
truck ...
Everything seems to match the documentation. Let's look at the images:
from genlib import sample_images_data, plot_sample_images

for xy, yt in zip([(x_train_data, y_train_data.flatten()),
                   (x_test_data, y_test_data.flatten())], ['Train', 'Test']):
    print('{:>15s}'.format(yt))
    train_sample_images, train_sample_labels = sample_images_data(*xy, LABELS)
    plot_sample_images(train_sample_image...
Cifar-10 x_train shape: (20000, 32, 32, 3)
Cifar-10 y_train shape: (20000, 1)
Cifar-10 x_val shape: (4000, 32, 32, 3)
Cifar-10 y_val shape: (4000, 1)
Cifar-10 x_test shape: (10000, 32, 32, 3)
Cifar-10 y_test shape: (10000, 1)
Let's check whether Train and Validation remained balanced.
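Checking balance boils down to counting labels per class; a dependency-light numpy sketch (with hypothetical labels) of what a class-distribution helper computes:

```python
import numpy as np

y = np.array([0, 0, 1, 1, 2, 2])  # hypothetical class labels

classes, counts = np.unique(y, return_counts=True)
dist = counts / counts.sum()

for c, p in zip(classes, dist):
    print(f"class {c}: {p:.2%}")
```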
for y, yt in zip([y_train_data.flatten(), y_val_data.flatten()], ['Train', 'Validation']):
    print('{:>15s}'.format(yt))
    get_classes_distribution(y, LABELS)
    plot_label_per_class(y, LABELS)

# In order to use the MobileNet CNN pre-trained on imagenet, we have
# to resize our images to have one of the following static...
2. Load CNN and analyze architecture
# Model
NO_EPOCHS = 25
BATCH_SIZE = 32
NET_IMG_ROWS = 128
NET_IMG_COLS = 128

############
# [COMPLETE]
# Use the MobileNet class from Keras to load your base model, pre-trained on imagenet.
# We want to load the pre-trained weights, but without the classification layer.
# Check the notebook '3_transfer-learning' or ht...
_____no_output_____
3. Adapt this CNN to our problem
############ # [COMPLETE] # Having the CNN loaded, now we have to add some layers to adapt this network to our # classification problem. # We can choose to fine-tune just the newly added layers, some particular layers, or all the layers of the # model. Play with different settings and compare the results. ############ # g...
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 128, 128, 3) 0 ________________________________________________________...
4. Setup data augmentation techniques
############ # [COMPLETE] # Use data augmentation to train your model. # Use the Keras ImageDataGenerator class for this purpose. # Note: Given that we want to load our images from disk, instead of using # ImageDataGenerator.flow method, we have to use ImageDataGenerator.flow_from_directory # method in the followin...
Found 40000 images belonging to 10 classes. Found 10000 images belonging to 10 classes.
5. Add some keras callbacks
############ # [COMPLETE] # Load and set some Keras callbacks here! ############ EXP_ID='experiment_003/' from keras.callbacks import ModelCheckpoint, TensorBoard if not os.path.exists(EXP_ID): os.makedirs(EXP_ID) callbacks = [ ModelCheckpoint(filepath=os.path.join(EXP_ID, 'weights.{epoch:02d}-{val_loss:.2...
_____no_output_____
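The `ModelCheckpoint` filepath above is an ordinary `str.format` template that Keras fills in with the epoch number and any logged metric. A quick pure-Python sketch of the naming only (this mimics the pattern, not Keras internals):

```python
# The ModelCheckpoint filepath is a str.format template filled with the
# epoch number and the logged metrics:
template = "weights.{epoch:02d}-{val_loss:.2f}.hdf5"

# Epoch 1 with val_loss 2.4989 gives the file name seen in the training log:
name = template.format(epoch=1, val_loss=2.4989)
print(name)  # weights.01-2.50.hdf5
```

This is why the log later shows files like `weights.01-2.50.hdf5` being saved each epoch.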
6. Setup optimization algorithm with their hyperparameters
############ # [COMPLETE] # Choose some optimization algorithm and explore different hyperparameters. # Compile your model. ############ from keras.optimizers import SGD from keras.losses import categorical_crossentropy #model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), # loss='categorical_crossentro...
_____no_output_____
7. Train model!
generator_train.n ############ # [COMPLETE] # Use fit_generator to train your model. # e.g.: # model.fit_generator( # generator_train, # epochs=50, # validation_data=generator_val, # steps_per_epoch=generator_train.n // 32, # validation_steps=generator_val.n // 32) ############ if...
Epoch 1/25 625/625 [==============================] - 1911s 3s/step - loss: 0.7648 - acc: 0.7352 - val_loss: 2.4989 - val_acc: 0.2167 Epoch 00001: saving model to experiment_003/weights.01-2.50.hdf5 Epoch 2/25 625/625 [==============================] - 1927s 3s/step - loss: 0.7447 - acc: 0.7426 - val_loss: 2.7681 - va...
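`fit_generator` needs `steps_per_epoch` and `validation_steps`. Assuming the 20,000/4,000 train/validation split printed earlier and batch size 32, the integer division reproduces the 625 steps per epoch seen in the training log:

```python
NO_EPOCHS = 25
BATCH_SIZE = 32
n_train, n_val = 20000, 4000               # sizes from the x_train / x_val shapes

steps_per_epoch = n_train // BATCH_SIZE    # one full pass over the training data
validation_steps = n_val // BATCH_SIZE

print(steps_per_epoch, validation_steps)   # 625 125
```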
8. Choose best model/snapshot
############ # [COMPLETE] # Analyze and compare your results. Choose the best model and snapshot, # justify your choice. ############
_____no_output_____
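One simple way to pick the best snapshot is to parse the validation loss that `ModelCheckpoint` encoded in each filename and keep the minimum. A sketch with hypothetical checkpoint names (the files below are illustrative, not the actual run's output):

```python
import re

# Hypothetical checkpoint files written during training
checkpoints = [
    "weights.01-2.50.hdf5",
    "weights.02-2.77.hdf5",
    "weights.03-1.94.hdf5",
]

def best_checkpoint(paths):
    """Pick the file with the lowest validation loss encoded in its name."""
    def val_loss(p):
        m = re.match(r"weights\.(\d+)-([\d.]+)\.hdf5", p)
        return float(m.group(2))
    return min(paths, key=val_loss)

print(best_checkpoint(checkpoints))  # weights.03-1.94.hdf5
```

The chosen file can then be reloaded with `keras.models.load_model` for the final evaluation.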
9. Evaluate final model on the *testing* set
############ # [COMPLETE] # Evaluate your model on the testing set. ############
_____no_output_____
Import libraries
import pandas as pd import numpy as np from sklearn.svm import LinearSVR, LinearSVC from sklearn.svm import * from sklearn.linear_model import Lasso, LogisticRegression, LinearRegression from sklearn.tree import DecisionTreeRegressor,DecisionTreeClassifier from sklearn.ensemble import RandomForestRegressor, Rando...
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Read the dataset
filePath = './data/138rows_after.xlsx' dataFrame = pd.read_excel(filePath) dataArray = np.array(dataFrame) dataFrame
_____no_output_____
Get the label columns
name = [column for column in dataFrame] name = name[5:] pd.DataFrame(name)
_____no_output_____
Check the data dimensions
X_withLabel = dataArray[:92,5:] X_all = dataArray[:,5:] y_data = dataArray[:92,3] y_label= dataArray[:92,4].astype(int) print("有标签数据的规模:",X_withLabel.shape) print("所有数据的规模:",X_all.shape) print("回归标签的规模:",y_data.shape) print("分类标签的规模:",y_label.shape)
有标签数据的规模: (92, 76) 所有数据的规模: (138, 76) 回归标签的规模: (92,) 分类标签的规模: (92,)
Regression: feature selection with Lasso
lasso = Lasso(alpha = 0.5,max_iter=5000).fit(X_withLabel, y_data) modelLasso = SelectFromModel(lasso, prefit=True) X_Lasso = modelLasso.transform(X_withLabel) LassoIndexMask = modelLasso.get_support() # get the selection mask value = X_withLabel[:,LassoIndexMask].tolist() # values of the selected columns LassoIndexMask = LassoIndexMask.tolist() ...
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 α 节律, µV 3 : FP2-A2 δ 节律,µV 4 : FP2-A2 θ 节律, µV 5 : FP2-A2 α 节律, µV 6 : FP2-A2 β(LF)节律, µV 7 : F3-A1 α 节律, µV 8 : F4-A2 α 节律, µV 9 : FZ-A2 δ 节律,µV 10 : C3-A1 α 节律, µV 11 : C4-A2 θ 节律, µV 12 : C4-A2 α 节律, µV 13 : C4-A2 β(LF)节律, µV 14 : CZ-A1 α 节律, µV 15 : P3-A1 δ 节律,µV 16 : P4-A...
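The selection pattern above (fit an L1 model, wrap it in `SelectFromModel`, read off the support mask) can be illustrated on a synthetic dataset where only one column carries signal. This is a sketch, not the EEG data used in the notebook:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 2] + 0.01 * rng.normal(size=100)   # only column 2 carries signal

lasso = Lasso(alpha=0.5).fit(X, y)
selector = SelectFromModel(lasso, prefit=True)
mask = selector.get_support()        # boolean mask over the 5 columns
X_sel = selector.transform(X)        # keeps only the selected columns

print(mask)          # expect column 2 among the survivors
print(X_sel.shape)   # (100, n_selected)
```

With `alpha=0.5` the Lasso typically zeroes the noise coefficients, so only the informative column passes the mask.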
Feature selection with SVR
lsvr = LinearSVR(C=10,max_iter=10000,loss='squared_epsilon_insensitive',dual=False).fit(X_withLabel, y_data) modelLSVR = SelectFromModel(lsvr, prefit=True) X_LSVR = modelLSVR.transform(X_withLabel) SVRIndexMask = modelLSVR.get_support() # get the selection mask value = X_withLabel[:,SVRIndexMask].tolist() # values of the selected columns SVRInde...
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 β(LF)节律, µV 3 : FP2-A2 δ 节律,µV 4 : FP2-A2 θ 节律, µV 5 : FP2-A2 β(LF)节律, µV 6 : F3-A1 θ 节律, µV 7 : F4-A2 β(LF)节律, µV 8 : C3-A1 β(LF)节律, µV 9 : CZ-A1 θ 节律, µV 10 : CZ-A1 β(LF)节律, µV 11 : P3-A1 δ 节律,µV 12 : P3-A1 θ 节律, µV 13 : P3-A1 α 节律, µV 14 : P4-A2 δ 节律,µV 15 : P4-A2 θ 节律, µV 1...
Feature selection with a decision tree
decisionTree = DecisionTreeRegressor(min_samples_leaf=1,random_state=1).fit(X_withLabel, y_data) modelDecisionTree = SelectFromModel(decisionTree, prefit=True) X_DecisionTree = modelDecisionTree.transform(X_withLabel) decisionTreeIndexMask = modelDecisionTree.get_support() # get the selection mask value = X_withLabel[:,decisionTreeI...
被筛选后剩下的特征: 1 : F4-A2 θ 节律, µV 2 : F4-A2 α 节律, µV 3 : FZ-A2 θ 节律, µV 4 : FZ-A2 β(LF)节律, µV 5 : C3-A1 θ 节律, µV 6 : C3-A1 β(LF)节律, µV 7 : CZ-A1 β(LF)节律, µV 8 : P3-A1 δ 节律,µV 9 : P3-A1 β(LF)节律, µV 10 : PZ-A2 α 节律, µV 11 : O2-A2 δ 节律,µV 12 : O2-A2 α 节律, µV 13 : F8-A2 δ 节律,µV 14 : T3-A1 θ 节律, µV 15 : T5-A1 β(LF)节律, µV 16 : T...
Feature selection with a random forest
randomForest = RandomForestRegressor().fit(X_withLabel, y_data) modelrandomForest = SelectFromModel(randomForest, prefit=True) X_randomForest = modelrandomForest.transform(X_withLabel) randomForestIndexMask = modelrandomForest.get_support() # get the selection mask value = X_withLabel[:,randomForestIndexMask].tolist() # values of the selec...
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 α 节律, µV 3 : FP2-A2 θ 节律, µV 4 : FP2-A2 β(LF)节律, µV 5 : F3-A1 θ 节律, µV 6 : F4-A2 θ 节律, µV 7 : C3-A1 θ 节律, µV 8 : C4-A2 δ 节律,µV 9 : C4-A2 θ 节律, µV 10 : P3-A1 δ 节律,µV 11 : P4-A2 θ 节律, µV 12 : PZ-A2 β(LF)节律, µV 13 : O1-A1 θ 节律, µV 14 : O2-A2 δ 节律,µV 15 : O2-A2 θ 节律, µV 16 : O2-A2 ...
Feature selection with GBDT
GBDTRegressor = GradientBoostingRegressor().fit(X_withLabel, y_data) modelGBDTRegressor = SelectFromModel(GBDTRegressor, prefit=True) X_GBDTRegressor = modelGBDTRegressor.transform(X_withLabel) GBDTRegressorIndexMask = modelGBDTRegressor.get_support() # get the selection mask value = X_withLabel[:,GBDTRegressorIndexMask].toli...
被筛选后剩下的特征: 1 : FP2-A2 θ 节律, µV 2 : FP2-A2 β(LF)节律, µV 3 : F3-A1 θ 节律, µV 4 : C3-A1 δ 节律,µV 5 : C3-A1 θ 节律, µV 6 : C4-A2 δ 节律,µV 7 : C4-A2 θ 节律, µV 8 : CZ-A1 θ 节律, µV 9 : P3-A1 δ 节律,µV 10 : P3-A1 α 节律, µV 11 : P4-A2 θ 节律, µV 12 : P4-A2 α 节律, µV 13 : PZ-A2 α 节律, µV 14 : PZ-A2 β(LF)节律, µV 15 : O1-A1 θ 节律, µV 16 : O2-A2 δ ...
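Since every selector returns a boolean mask over the same 76 columns, the regression methods above can be compared with set operations on the selected column indices. A sketch with hypothetical 6-feature masks (the values are made up for illustration):

```python
# Hypothetical boolean masks from three selectors over 6 features
lasso_mask = [True, True, False, False, True, False]
svr_mask   = [True, False, True, False, True, False]
tree_mask  = [False, True, True, False, True, False]

def selected(mask):
    """Indices of the columns a selector kept."""
    return {i for i, keep in enumerate(mask) if keep}

common = selected(lasso_mask) & selected(svr_mask) & selected(tree_mask)
union = selected(lasso_mask) | selected(svr_mask) | selected(tree_mask)

print(sorted(common))  # [4] - features every method agrees on
print(sorted(union))   # [0, 1, 2, 4]
```

Features selected by every method are good candidates for a final, robust feature set.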
Classification: feature selection with Lasso
lasso = Lasso(alpha = 0.3,max_iter=5000).fit(X_withLabel, y_label) modelLasso = SelectFromModel(lasso, prefit=True) X_Lasso = modelLasso.transform(X_withLabel) LassoIndexMask = modelLasso.get_support() # get the selection mask value = X_withLabel[:,LassoIndexMask].tolist() # values of the selected columns LassoIndexMask = LassoIndexMask.tolist()...
被筛选后剩下的特征: 1 : FP1-A1 α 节律, µV 2 : FZ-A2 δ 节律,µV 3 : C4-A2 δ 节律,µV 4 : CZ-A1 α 节律, µV 5 : P3-A1 δ 节律,µV 6 : P4-A2 α 节律, µV 7 : PZ-A2 δ 节律,µV 8 : O2-A2 δ 节律,µV 9 : F7-A1 δ 节律,µV 10 : F7-A1 α 节律, µV 11 : T3-A1 α 节律, µV 12 : T4-A2 δ 节律,µV 13 : T4-A2 α 节律, µV 14 : T5-A1 δ 节律,µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 θ 节...
Feature selection with SVC
lsvc = LinearSVC(C=10,max_iter=10000,dual=False).fit(X_withLabel, y_label.ravel()) modelLSVC = SelectFromModel(lsvc, prefit=True) X_LSVC = modelLSVC.transform(X_withLabel) SVCIndexMask = modelLSVC.get_support() # get the selection mask value = X_withLabel[:,SVCIndexMask].tolist() # values of the selected columns SVCIndexMask = SVCIndexMask.tolis...
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP2-A2 θ 节律, µV 3 : FP2-A2 α 节律, µV 4 : FP2-A2 β(LF)节律, µV 5 : FZ-A2 β(LF)节律, µV 6 : C3-A1 θ 节律, µV 7 : C3-A1 β(LF)节律, µV 8 : C4-A2 δ 节律,µV 9 : C4-A2 θ 节律, µV 10 : C4-A2 α 节律, µV 11 : CZ-A1 δ 节律,µV 12 : CZ-A1 θ 节律, µV 13 : CZ-A1 α 节律, µV 14 : P3-A1 β(LF)节律, µV 15 : P4-A2 θ 节律, µV 16 :...
Feature selection with a decision tree
decisionTree = DecisionTreeClassifier(random_state=1).fit(X_withLabel, y_label) modelDecisionTree = SelectFromModel(decisionTree, prefit=True) X_DecisionTree = modelDecisionTree.transform(X_withLabel) decisionTreeIndexMask = modelDecisionTree.get_support() # get the selection mask value = X_withLabel[:,decisionTreeIndexMask].tolist(...
被筛选后剩下的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 α 节律, µV 3 : F3-A1 θ 节律, µV 4 : C3-A1 θ 节律, µV 5 : CZ-A1 δ 节律,µV 6 : CZ-A1 β(LF)节律, µV 7 : P3-A1 α 节律, µV 8 : PZ-A2 β(LF)节律, µV 9 : O2-A2 δ 节律,µV 10 : O2-A2 β(LF)节律, µV 11 : F7-A1 θ 节律, µV 12 : T4-A2 δ 节律,µV 13 : T5-A1 α 节律, µV 14 : T6-A2 α 节律, µV 被筛选后去掉的特征: 1 : FP1-A1 θ 节律, µV...
Feature selection with a random forest
randomForest = RandomForestClassifier().fit(X_withLabel, y_label) modelrandomForest = SelectFromModel(randomForest, prefit=True) X_randomForest = modelrandomForest.transform(X_withLabel) randomForestIndexMask = modelrandomForest.get_support() # get the selection mask value = X_withLabel[:,randomForestIndexMask].tolist() # values of the selec...
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP2-A2 β(LF)节律, µV 3 : F4-A2 α 节律, µV 4 : F4-A2 β(LF)节律, µV 5 : FZ-A2 β(LF)节律, µV 6 : C3-A1 β(LF)节律, µV 7 : C4-A2 δ 节律,µV 8 : C4-A2 θ 节律, µV 9 : C4-A2 α 节律, µV 10 : CZ-A1 α 节律, µV 11 : P3-A1 δ 节律,µV 12 : P3-A1 α 节律, µV 13 : P3-A1 β(LF)节律, µV 14 : P4-A2 δ 节律,µV 15 : P4-A2 θ 节律, µV 16 :...
Feature selection with GBDT
GBDTClassifier = GradientBoostingClassifier().fit(X_withLabel, y_label) modelGBDTClassifier = SelectFromModel(GBDTClassifier, prefit=True) X_GBDTClassifier = modelGBDTClassifier.transform(X_withLabel) GBDTClassifierIndexMask = modelGBDTClassifier.get_support() # get the selection mask value = X_withLabel[:,GBDTClassifierIndex...
被筛选后剩下的特征: 1 : FP1-A1 α 节律, µV 2 : FP2-A2 θ 节律, µV 3 : FP2-A2 β(LF)节律, µV 4 : C4-A2 θ 节律, µV 5 : P3-A1 α 节律, µV 6 : P4-A2 α 节律, µV 7 : P4-A2 β(LF)节律, µV 8 : PZ-A2 β(LF)节律, µV 9 : O2-A2 δ 节律,µV 10 : F7-A1 δ 节律,µV 11 : F8-A2 δ 节律,µV 12 : F8-A2 β(LF)节律, µV 13 : T3-A1 θ 节律, µV 14 : T4-A2 δ 节律,µV 15 : T4-A2 θ 节律, µV 16 : T5...
Testing the selected features: read in the PCA- and LDA-reduced data, then obtain the data after feature selection
RegressionFeatureSelection = [dataFrameOfLassoRegressionFeature,dataFrameOfLSVRegressionFeature,dataFrameOfDecisionTreeRegressionFeature, dataFrameOfRandomForestRegressionFeature,dataFrameOfGBDTRegressionFeature] ClassificationFeatureSelection = [dataFrameOfLassoClassificationFeature,dataFrameOfLSVC...
_____no_output_____
Evaluating the selected regression features
allMSEResult=[] allr2Result=[] print("LR test results") for i in range(len(RegressionFeatureSelection)): tempArray = np.array(RegressionFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3] train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf...
_____no_output_____
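The loop reports mean squared error and R²; both reduce to short NumPy expressions, which makes a handy sanity check against the sklearn values:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean of the squared residuals."""
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """1 minus (residual sum of squares / total sum of squares)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.0, 3.0, 5.0])  # one prediction off by 1

print(mse(y_true, y_pred))  # 0.25
print(r2(y_true, y_pred))   # 0.8
```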
Regression performance with the original features
print("LR test results") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=LinearRegression() clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Mean squared error: %.2f' ...
_____no_output_____
Evaluating the selected classification features
allAccuracyResult=[] allF1Result=[] print("LR test results") for i in range(len(ClassificationFeatureSelection)): tempArray = np.array(ClassificationFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,4].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2...
_____no_output_____
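Likewise, accuracy and binary F1 can be recomputed by hand from the confusion-matrix counts, as a cross-check on `accuracy_score` and `f1_score`:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class (1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]

print(accuracy(y_true, y_pred))   # 4/6, about 0.667
print(f1_binary(y_true, y_pred))  # 2/3, about 0.667
```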
Classification performance with the original features
print("LR test results") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,4].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=LogisticRegression(max_iter=10000) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Accuracy: %.2...
_____no_output_____
CSX46: Class session 2 *Introduction to the igraph package and the Pathway Commons network in SIF format* Objective: load a network of human molecular interactions and create three igraph `Graph` objects from it (one for protein-protein interactions, one for metabolism interactions, and one for directed protein-protei...
pcdf <- read.table("shared/pathway_commons.sif", sep="\t", quote="", comment.char="", stringsAsFactors=FALSE, header=FALSE, col.names=c("species1","interaction_type","species2"))
_____no_output_____
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
Let's take a peek at `pcdf` using the `head` function:
head(pcdf) library(igraph) interaction_types_ppi <- c("interacts-with", "in-complex-with", "neighbor-of") interaction_types_metab <- c("controls-production-of", "consumption-controlled-by", "controls-produc...
Attaching package: ‘igraph’ The following objects are masked from ‘package:stats’: decompose, spectrum The following object is masked from ‘package:base’: union
Subset data frame `pcdf` to obtain only the rows whose interactions are in `interaction_types_ppi`, and select only columns 1 and 3:
pcdf_ppi <- pcdf[pcdf$interaction_type %in% interaction_types_ppi,c(1,3)]
_____no_output_____
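For readers more at home in Python, the same row filter (keep PPI rows, keep only columns 1 and 3) is a one-line list comprehension; the rows below are a toy stand-in for `pcdf`, not the real Pathway Commons data:

```python
# Toy rows mimicking pcdf's (species1, interaction_type, species2) columns
rows = [
    ("A1BG", "interacts-with", "ABCC6"),
    ("A4GALT", "controls-production-of", "CHEBI:17659"),
    ("A1BG", "in-complex-with", "GRB2"),
]
ppi_types = {"interacts-with", "in-complex-with", "neighbor-of"}

# Keep only PPI rows and only columns 1 and 3 (species1, species2)
ppi_edges = [(s1, s2) for s1, kind, s2 in rows if kind in ppi_types]
print(ppi_edges)  # [('A1BG', 'ABCC6'), ('A1BG', 'GRB2')]
```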
Use the `igraph` function `graph_from_data_frame` to build a network from the edge-list data in `pcdf_ppi`; use `print` to see a summary of the graph:
graph_ppi <- graph_from_data_frame(pcdf_ppi, directed=FALSE) print(graph_ppi)
IGRAPH ba9e496 UN-- 17020 523498 -- + attr: name (v/c) + edges from ba9e496 (vertex names): [1] A1BG--ABCC6 A1BG--ANXA7 A1BG--CDKN1A A1BG--CRISP3 A1BG--GDPD1 [6] A1BG--GRB2 A1BG--GRB7 A1BG--HNF4A A1BG--ONECUT1 A1BG--PIK3CA [11] A1BG--PIK3R1 A1BG--PRDX4 A1BG--PTPN11 A1BG--SETD7 A1BG--SMN1 [1...
Do the same for the metabolic network:
pcdf_metab <- pcdf[pcdf$interaction_type %in% interaction_types_metab, c(1,3)] graph_metab <- graph_from_data_frame(pcdf_metab, directed=TRUE) print(graph_metab)
IGRAPH 77472bf DN-- 7620 38145 -- + attr: name (v/c) + edges from 77472bf (vertex names): [1] A4GALT->CHEBI:17659 A4GALT->CHEBI:17950 A4GALT->CHEBI:18307 [4] A4GALT->CHEBI:18313 A4GALT->CHEBI:58223 A4GALT->CHEBI:67119 [7] A4GNT ->CHEBI:17659 A4GNT ->CHEBI:58223 AAAS ->CHEBI:1604 [10] AAAS ->CHEBI:2274 AACS ->C...
Do the same for the directed protein-protein interactions:
pcdf_ppd <- pcdf[pcdf$interaction_type %in% interaction_types_ppd, c(1,3)] graph_ppd <- graph_from_data_frame(pcdf_ppd, directed=TRUE) print(graph_ppd)
IGRAPH DN-- 16063 359713 -- + attr: name (v/c), interaction_type (e/c) IGRAPH DN-- 16063 359713 -- + attr: name (v/c), interaction_type (e/c) + edges (vertex names): [1] A1BG ->A2M A1BG ->AKT1 A1BG ->AKT1 A2M ->APOA1 [5] A2M ->CDC42 A2M ->RAC1 A2M ->RAC2 A2M ->RAC3 [9] A...
Question: of the three networks that you just created, which has the most edges? Next, we need to create a small graph. Let's make a three-vertex undirected graph from an edge list, connecting every vertex to every other vertex: 1-2, 2-3, 3-1. We'll once again use `graph_from_data_frame` to do this:
testgraph <- graph_from_data_frame(data.frame(c(1,2,3), c(2,3,1)), directed=FALSE)
_____no_output_____
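To confirm what `print(testgraph)` should report, the same triangle can be built as a plain adjacency structure: 3 vertices, 3 edges, and every vertex of degree 2. This is a pure-Python sketch of the structure, not igraph itself:

```python
from collections import defaultdict

edges = [(1, 2), (2, 3), (3, 1)]  # the triangle 1-2, 2-3, 3-1

adj = defaultdict(set)
for a, b in edges:                # undirected: record both directions
    adj[a].add(b)
    adj[b].add(a)

degrees = {v: len(nbrs) for v, nbrs in adj.items()}
print(len(adj), len(edges))  # 3 vertices, 3 edges
print(degrees)               # every vertex has degree 2
```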