eyadsibai/rep
howto/03-howto-gridsearch.ipynb
apache-2.0
%pylab inline """ Explanation: About This notebook demonstrates several additional tools to optimize classification models provided by the Reproducible experiment platform (REP) package: grid search for the best classifier hyperparameters different optimization algorithms different scoring models (optimization of an arbitrary figure of merit) End of explanation """ !cd toy_datasets; wget -O magic04.data -nc https://archive.ics.uci.edu/ml/machine-learning-databases/magic/magic04.data import numpy, pandas from rep.utils import train_test_split from sklearn.metrics import roc_auc_score columns = ['fLength', 'fWidth', 'fSize', 'fConc', 'fConc1', 'fAsym', 'fM3Long', 'fM3Trans', 'fAlpha', 'fDist', 'g'] data = pandas.read_csv('toy_datasets/magic04.data', names=columns) labels = numpy.array(data['g'] == 'g', dtype=int) data = data.drop('g', axis=1) train_data, test_data, train_labels, test_labels = train_test_split(data, labels) list(data.columns) """ Explanation: Loading data End of explanation """ features = list(set(columns) - {'g'}) """ Explanation: Variables used in training End of explanation """ from rep.report import metrics def AMS(s, b, s_norm=sum(test_labels == 1), b_norm=6*sum(test_labels == 0)): return s * s_norm / numpy.sqrt(b * b_norm + 10.) optimal_AMS = metrics.OptimalMetric(AMS) sum(test_labels == 1), sum(test_labels == 0) """ Explanation: Metric definition In the Higgs challenge the aim is to maximize the AMS metric. <br /> To measure the quality one should choose not only a classifier, but also an optimal threshold, where the maximal value of AMS is achieved. Such metrics (which require a threshold) are called threshold-based. rep.utils contains the class OptimalMetric, which computes the maximal value of a threshold-based metric (and may itself be used as a metric). Use this class to generate the metric and use it in grid search. 
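The idea behind a threshold-based optimal metric can be sketched in plain numpy (a hedged illustration, not REP's actual `OptimalMetric` implementation): scan candidate thresholds on the predicted signal probability and keep the best metric value. The `ams` normalizations here are made-up constants for the sketch.

```python
import numpy

def ams(s, b, s_norm=100., b_norm=600.):
    # Simplified AMS figure of merit: signal efficiency scaled by a norm,
    # penalized by background; the +10 regularizes small backgrounds.
    return s * s_norm / numpy.sqrt(b * b_norm + 10.)

def optimal_metric_value(labels, proba_signal, n_cuts=100):
    # Scan thresholds on the signal probability and return the best AMS value.
    best = -numpy.inf
    for cut in numpy.linspace(0., 1., n_cuts):
        passed = proba_signal >= cut
        s = numpy.mean(passed[labels == 1])   # signal efficiency at this cut
        b = numpy.mean(passed[labels == 0])   # background efficiency at this cut
        best = max(best, ams(s, b))
    return best

rng = numpy.random.RandomState(0)
proba = rng.random_sample(1000)
labels = (proba + 0.3 * rng.random_sample(1000) > 0.65).astype(int)
print(optimal_metric_value(labels, proba))
```

By construction the scanned maximum is at least the metric value at any single threshold, which is exactly why such a metric can be plugged into grid search without first fixing a cut.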
Prepare the quality metric: first we define the AMS metric, and utils.OptimalMetric generates its threshold-optimized version. End of explanation """ probs_rand = numpy.ndarray((1000, 2)) probs_rand[:, 1] = numpy.random.random(1000) probs_rand[:, 0] = 1 - probs_rand[:, 1] labels_rand = numpy.random.randint(0, high=2, size=1000) optimal_AMS.plot_vs_cut(labels_rand, probs_rand) """ Explanation: Compute threshold vs metric quality; random predictions for signal and background were used here End of explanation """ optimal_AMS(labels_rand, probs_rand) """ Explanation: The best quality End of explanation """ from rep.metaml import GridOptimalSearchCV from rep.metaml.gridsearch import RandomParameterOptimizer, FoldingScorer from rep.estimators import SklearnClassifier from sklearn.ensemble import AdaBoostClassifier from collections import OrderedDict """ Explanation: Hyperparameter optimization algorithms AbstractParameterGenerator is an abstract class that generates new points at which the scorer function will be computed. It is used in grid search to get a new set of parameters to train the classifier. Properties: best_params_ - returns the best grid point best_score_ - returns the best quality print_results(self, reorder=True) - prints all points with the corresponding quality The following algorithms inherit from AbstractParameterGenerator: RandomParameterOptimizer - generates a random point in parameter space RegressionParameterOptimizer - generates the next point using a regression algorithm trained on previous results SubgridParameterOptimizer - uses subgrids if the grid is huge, plus an annealing-like technique (see REP for details) Grid search GridOptimalSearchCV implements optimal search over specified parameter values for an estimator. 
Its parameters are: estimator - an object of a type that implements the "fit" and "predict" methods params_generator - the grid search parameter generator (AbstractParameterGenerator) scorer - an object that implements a call with kwargs: "base_estimator", "params", "X", "y", "sample_weight" Important members are "fit" and "fit_best_estimator" End of explanation """ # define grid parameters grid_param = OrderedDict() grid_param['n_estimators'] = [30, 50] grid_param['learning_rate'] = [0.2, 0.1, 0.05] # use random hyperparameter optimization algorithm generator = RandomParameterOptimizer(grid_param) # define folding scorer scorer = FoldingScorer(optimal_AMS, folds=6, fold_checks=4) grid_sk = GridOptimalSearchCV(SklearnClassifier(AdaBoostClassifier(), features=features), generator, scorer) grid_sk.fit(data, labels) """ Explanation: Grid search with folding scorer FoldingScorer provides folding cross-validation on the train dataset: folds - k, the number of folds (train on k-1 folds, test on 1 fold) fold_checks - the number of times the model will be tested score_function - a function to calculate quality with the interface "function(y_true, proba, sample_weight=None)" NOTE: if fold_checks > 1, the quality is averaged over tests. 
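The folding-scorer behavior described above can be approximated with scikit-learn primitives (a sketch under the stated assumption: train on k-1 folds, test on one, average over the first `fold_checks` tests; this is an illustration, not REP's `FoldingScorer` code):

```python
import numpy
from sklearn.base import clone
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

def folding_score(estimator, X, y, folds=6, fold_checks=4,
                  score_function=roc_auc_score):
    # Train a fresh clone on folds-1 parts, evaluate on the held-out part,
    # and average the quality over the first `fold_checks` folds.
    scores = []
    kf = KFold(n_splits=folds, shuffle=True, random_state=0)
    for i, (train_idx, test_idx) in enumerate(kf.split(X)):
        if i >= fold_checks:
            break
        est = clone(estimator)
        est.fit(X[train_idx], y[train_idx])
        proba = est.predict_proba(X[test_idx])[:, 1]
        scores.append(score_function(y[test_idx], proba))
    return numpy.mean(scores)
```

Averaging over several fold checks reduces the variance of the quality estimate, which is why the NOTE above matters when comparing grid points.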
End of explanation """ grid_sk.generator.best_params_ """ Explanation: Print best parameters End of explanation """ grid_sk.generator.print_results() """ Explanation: Print all qualities for used parameters End of explanation """ from sklearn import clone def generate_scorer(test, test_labels, test_weight=None): """ Generate a scorer which calculates the metric on a fixed test dataset """ def custom(base_estimator, params, X, y, sample_weight=None): cl = clone(base_estimator) cl.set_params(**params) cl.fit(X, y) res = optimal_AMS(test_labels, cl.predict_proba(test), sample_weight) return res return custom # define grid parameters grid_param = OrderedDict() grid_param['n_estimators'] = [30, 50] grid_param['learning_rate'] = [0.2, 0.1, 0.05] grid_param['features'] = [features[:5], features[:8]] # define random hyperparameter optimization algorithm generator = RandomParameterOptimizer(grid_param) # define specific scorer scorer = generate_scorer(test_data, test_labels) grid = GridOptimalSearchCV(SklearnClassifier(clf=AdaBoostClassifier(), features=features), generator, scorer) grid.fit(train_data, train_labels) len(train_data), len(test_data) """ Explanation: Grid search with user-defined scorer You can define your own scorer with specific logic in a simple way. 
A scorer must have just the following signature: scorer(base_estimator, params, X, y, sample_weight) Define a scorer which will train the model on the whole train dataset and test it on a pre-defined test dataset End of explanation """ grid.generator.print_results() """ Explanation: Print all tried combinations of parameters and quality End of explanation """ from rep.report import ClassificationReport from rep.data.storage import LabeledDataStorage lds = LabeledDataStorage(test_data, test_labels) classifiers = {'grid_fold': grid_sk.fit_best_estimator(train_data[features], train_labels), 'grid_test_dataset': grid.fit_best_estimator(train_data[features], train_labels) } report = ClassificationReport(classifiers, lds) """ Explanation: Results comparison End of explanation """ report.roc().plot() """ Explanation: ROCs End of explanation """ report.metrics_vs_cut(AMS, metric_label='AMS').plot() """ Explanation: Metric End of explanation """
sdpython/teachpyx
_doc/notebooks/numpy/numpy_tricks.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() """ Explanation: Implementation notes with numpy A few efficient and inefficient idioms with numpy. End of explanation """ import numpy mat = numpy.zeros((5, 5)) for i in range(mat.shape[0]): for j in range(mat.shape[1]): mat[i, j] = i * 10 + j mat mat[2, 3], mat[2][3] %timeit mat[2, 3] %timeit mat[2][3] """ Explanation: accessing one particular element End of explanation """ mat[2] """ Explanation: The two notations look identical since they return the same result. Nevertheless, mat[2][3] creates a temporary array and then extracts one element. The elements are not copied, but an intermediate object is created. End of explanation """
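The temporary-object point above can be made concrete with `numpy.shares_memory`: `mat[2]` creates an intermediate ndarray object, but it is a view on the original data, not a copy (a short sketch on a made-up matrix).

```python
import numpy

mat = numpy.arange(25, dtype=float).reshape(5, 5)

row = mat[2]                           # the intermediate object created by mat[2][3]
print(numpy.shares_memory(mat, row))   # the view shares the original buffer

row[3] = -1.0                          # mutating the view mutates the original
print(mat[2, 3])
```

This is why `mat[2][3]` is slower than `mat[2, 3]` without being wasteful in memory: the cost is the extra Python-level object creation per access, not a data copy.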
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/feature_engineering/labs/mobile_gaming_feature_store.ipynb
apache-2.0
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" # Install additional packages ! pip3 install {USER_FLAG} --upgrade pip ! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform==1.11.0 -q --no-warn-conflicts ! pip3 install {USER_FLAG} git+https://github.com/googleapis/python-aiplatform.git@main # For features monitoring ! pip3 install {USER_FLAG} --upgrade google-cloud-bigquery==2.24.0 -q --no-warn-conflicts ! pip3 install {USER_FLAG} --upgrade xgboost==1.1.1 -q --no-warn-conflicts """ Explanation: Exploring Mobile Gaming Using Feature Store Learning objectives In this notebook, you learn how to: Provide a centralized feature repository with easy APIs to search & discover features and fetch them for training/serving. Simplify deployments of models for Online Prediction, via low latency scalable feature serving. Mitigate training serving skew and data leakage by performing point in time lookups to fetch historical data for training. Overview Imagine you are a member of the Data Science team working on the same Mobile Gaming application reported in the Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML blog post. Business wants to use that information in real-time to take immediate intervention actions in-game to prevent churn. In particular, for each player, they want to provide gaming incentives like new items or bonus packs depending on the customer demographic, behavioral information and the resulting propensity of return. Last year, Google Cloud announced Vertex AI, a managed machine learning (ML) platform that allows data science teams to accelerate the deployment and maintenance of ML models. 
One of the platform's building blocks is Vertex AI Feature Store, which provides a managed service for low-latency, scalable feature serving. It is also a centralized feature repository with easy APIs to search & discover features, and feature monitoring capabilities to track drift and other quality issues. In this notebook, you learn the role Vertex AI Feature Store plays in a production-ready scenario in which the user's activities within the first 24 hours since their last engagement are consumed by the gaming platform in order to improve UX. Below you can find the high level picture of the system <img src="./assets/mobile_gaming_architecture_1.png"> Dataset The dataset is the public sample export data from an actual mobile game app called "Flood It!" (Android, iOS) Notice that we assume that you already know how to set up a Vertex AI Feature store. If you are not familiar with it, please check out this detailed notebook. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook Install additional packages Install additional package dependencies not installed in your notebook environment, such as XGBoost. Use the latest major GA version of each package. End of explanation """ # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. 
End of explanation """ import os PROJECT_ID = "qwiklabs-gcp-01-17ee7907a406" # Replace your project id here # Get your Google Cloud project ID from gcloud if not os.getenv("IS_TESTING"): shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) """ Explanation: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Enable the Vertex AI API and Compute Engine API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation """ if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "qwiklabs-gcp-01-17ee7907a406" # Replace your project id here !gcloud config set project $PROJECT_ID #change it """ Explanation: Otherwise, set your project ID here. End of explanation """ # Import necessary library and define Timestamp from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. 
End of explanation """ BUCKET_URI = "gs://qwiklabs-gcp-01-17ee7907a406" # Replace your bucket name here REGION = "us-central1" # @param {type:"string"} if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-01-17ee7907a406": # Replace your bucket name here BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP if REGION == "[your-region]": REGION = "us-central1" """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI. End of explanation """ ! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! gsutil uniformbucketlevelaccess set on $BUCKET_URI """ Explanation: Run the following cell to grant access to your Cloud Storage resources from Vertex AI Feature store End of explanation """ ! gsutil ls -al $BUCKET_URI """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ BQ_DATASET = "Mobile_Gaming" # @param {type:"string"} LOCATION = "US" !bq mk --location=$LOCATION --dataset $PROJECT_ID:$BQ_DATASET """ Explanation: Create a Bigquery dataset You create the BigQuery dataset to store the data along the demo. 
End of explanation """ # General import os import random import sys import time # Data Science import pandas as pd # Vertex AI and its Feature Store from google.cloud import aiplatform as vertex_ai from google.cloud import bigquery from google.cloud.aiplatform import Feature, Featurestore """ Explanation: Import libraries End of explanation """ # Data Engineering and Feature Engineering TODAY = "2018-10-03" TOMORROW = "2018-10-04" LABEL_TABLE = f"label_table_{TODAY}".replace("-", "") FEATURES_TABLE = "wide_features_table" # @param {type:"string"} FEATURES_TABLE_TODAY = f"wide_features_table_{TODAY}".replace("-", "") FEATURES_TABLE_TOMORROW = f"wide_features_table_{TOMORROW}".replace("-", "") FEATURESTORE_ID = "mobile_gaming" # @param {type:"string"} ENTITY_TYPE_ID = "user" # Vertex AI Feature store ONLINE_STORE_NODES_COUNT = 5 ENTITY_ID = "user" API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com" FEATURE_TIME = "timestamp" ENTITY_ID_FIELD = "user_pseudo_id" BQ_SOURCE_URI = f"bq://{PROJECT_ID}.{BQ_DATASET}.{FEATURES_TABLE}" GCS_DESTINATION_PATH = f"data/features/train_features_{TODAY}".replace("-", "") GCS_DESTINATION_OUTPUT_URI = f"{BUCKET_URI}/{GCS_DESTINATION_PATH}" SERVING_FEATURE_IDS = {"user": ["*"]} READ_INSTANCES_TABLE = f"ground_truth_{TODAY}".replace("-", "") READ_INSTANCES_URI = f"bq://{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}" # Vertex AI Training BASE_CPU_IMAGE = "us-docker.pkg.dev/vertex-ai/training/scikit-learn-cpu.0-23:latest" DATASET_NAME = f"churn_mobile_gaming_{TODAY}".replace("-", "") TRAIN_JOB_NAME = f"xgb_classifier_training_{TODAY}".replace("-", "") MODEL_NAME = f"churn_xgb_classifier_{TODAY}".replace("-", "") MODEL_PACKAGE_PATH = "train_package" TRAINING_MACHINE_TYPE = "n1-standard-4" TRAINING_REPLICA_COUNT = 1 DATA_PATH = f"{GCS_DESTINATION_OUTPUT_URI}/000000000000.csv".replace("gs://", "/gcs/") MODEL_PATH = f"model/{TODAY}".replace("-", "") MODEL_DIR = f"{BUCKET_URI}/{MODEL_PATH}".replace("gs://", "/gcs/") # Vertex AI Prediction 
DESTINATION_URI = f"{BUCKET_URI}/{MODEL_PATH}" VERSION = "v1" SERVING_CONTAINER_IMAGE_URI = ( "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-23:latest" ) ENDPOINT_NAME = "mobile_gaming_churn" DEPLOYED_MODEL_NAME = f"churn_xgb_classifier_{VERSION}" MODEL_DEPLOYED_NAME = "churn_xgb_classifier_v1" SERVING_MACHINE_TYPE = "n1-highcpu-4" MIN_NODES = 1 MAX_NODES = 1 # Sampling distributions for categorical features implemented in # https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/model_monitoring/model_monitoring.ipynb LANGUAGE = [ "en-us", "en-gb", "ja-jp", "en-au", "en-ca", "de-de", "en-in", "en", "fr-fr", "pt-br", "es-us", "zh-tw", "zh-hans-cn", "es-mx", "nl-nl", "fr-ca", "en-za", "vi-vn", "en-nz", "es-es", ] OS = ["IOS", "ANDROID", "null"] COUNTRY = [ "United States", "India", "Japan", "Canada", "Australia", "United Kingdom", "Germany", "Mexico", "France", "Brazil", "Taiwan", "China", "Saudi Arabia", "Pakistan", "Egypt", "Netherlands", "Vietnam", "Philippines", "South Africa", ] USER_IDS = [ "C8685B0DFA2C4B4E6E6EA72894C30F6F", "A976A39B8E08829A5BC5CD3827C942A2", "DD2269BCB7F8532CD51CB6854667AF51", "A8F327F313C9448DFD5DE108DAE66100", "8BE7BF90C971453A34C1FF6FF2A0ACAE", "8375B114AFAD8A31DE54283525108F75", "4AD259771898207D5869B39490B9DD8C", "51E859FD9D682533C094B37DC85EAF87", "8C33815E0A269B776AAB4B60A4F7BC63", "D7EA8E3645EFFBD6443946179ED704A6", "58F3D672BBC613680624015D5BC3ADDB", "FF955E4CA27C75CE0BEE9FC89AD275A3", "22DC6A6AE86C0AA33EBB8C3164A26925", "BC10D76D02351BD4C6F6F5437EE5D274", "19DEEA6B15B314DB0ED2A4936959D8F9", "C2D17D9066EE1EB9FAE1C8A521BFD4E5", "EFBDEC168A2BF8C727B060B2E231724E", "E43D3AB2F9B9055C29373523FAF9DB9B", "BBDCBE2491658165B7F20540DE652E3A", "6895EEFC23B59DB13A9B9A7EED6A766F", ] """ Explanation: Define constants End of explanation """ def run_bq_query(query: str): """ An helper function to run a BigQuery job Args: query: a formatted SQL query Returns: None """ try: job = bq_client.query(query) _ = 
job.result() except RuntimeError as error: print(error) def upload_model( display_name: str, serving_container_image_uri: str, artifact_uri: str, sync: bool = True, ) -> vertex_ai.Model: """ Args: display_name: The name of Vertex AI Model artefact serving_container_image_uri: The uri of the serving image artifact_uri: The uri of artefact to import sync: Returns: Vertex AI Model """ model = vertex_ai.Model.upload( display_name=display_name, artifact_uri=artifact_uri, serving_container_image_uri=serving_container_image_uri, sync=sync, ) model.wait() print(model.display_name) print(model.resource_name) return model def create_endpoint(display_name: str) -> vertex_ai.Endpoint: """ An utility to create a Vertex AI Endpoint Args: display_name: The name of Endpoint Returns: Vertex AI Endpoint """ endpoint = vertex_ai.Endpoint.create(display_name=display_name) print(endpoint.display_name) print(endpoint.resource_name) return endpoint def deploy_model( model: vertex_ai.Model, machine_type: str, endpoint: vertex_ai.Endpoint = None, deployed_model_display_name: str = None, min_replica_count: int = 1, max_replica_count: int = 1, sync: bool = True, ) -> vertex_ai.Model: """ An helper function to deploy a Vertex AI Endpoint Args: model: A Vertex AI Model machine_type: The type of machine to serve the model endpoint: An Vertex AI Endpoint deployed_model_display_name: The name of the model min_replica_count: Minimum number of serving replicas max_replica_count: Max number of serving replicas sync: Whether to execute method synchronously Returns: vertex_ai.Model """ model_deployed = model.deploy( endpoint=endpoint, deployed_model_display_name=deployed_model_display_name, machine_type=machine_type, min_replica_count=min_replica_count, max_replica_count=max_replica_count, sync=sync, ) model_deployed.wait() print(model_deployed.display_name) print(model_deployed.resource_name) return model_deployed def endpoint_predict_sample( instances: list, endpoint: vertex_ai.Endpoint ) -> 
vertex_ai.models.Prediction: """ An helper function to get prediction from Vertex AI Endpoint Args: instances: The list of instances to score endpoint: An Vertex AI Endpoint Returns: vertex_ai.models.Prediction """ prediction = endpoint.predict(instances=instances) print(prediction) return prediction def generate_online_sample() -> dict: """ An helper function to generate a sample of online features Returns: online_sample: dict of online features """ online_sample = {} online_sample["entity_id"] = random.choices(USER_IDS) online_sample["country"] = random.choices(COUNTRY) online_sample["operating_system"] = random.choices(OS) online_sample["language"] = random.choices(LANGUAGE) return online_sample def simulate_prediction(endpoint: vertex_ai.Endpoint, n_requests: int, latency: int): """ An helper function to simulate online prediction with customer entity type - format entities for prediction - retrieve static features with a singleton lookup operations from Vertex AI Feature store - run the prediction request and get back the result Args: endpoint: Vertex AI Endpoint object n_requests: number of requests to run latency: latency in seconds Returns: vertex_ai.models.Prediction """ for i in range(n_requests): online_sample = generate_online_sample() online_features = pd.DataFrame.from_dict(online_sample) entity_ids = online_features["entity_id"].tolist() customer_aggregated_features = user_entity_type.read( entity_ids=entity_ids, feature_ids=[ "cnt_user_engagement", "cnt_level_start_quickplay", "cnt_level_end_quickplay", "cnt_level_complete_quickplay", "cnt_level_reset_quickplay", "cnt_post_score", "cnt_spend_virtual_currency", "cnt_ad_reward", "cnt_challenge_a_friend", "cnt_completed_5_levels", "cnt_use_extra_steps", ], ) prediction_sample_df = pd.merge( customer_aggregated_features.set_index("entity_id"), online_features.set_index("entity_id"), left_index=True, right_index=True, ).reset_index(drop=True) # prediction_sample = prediction_sample_df.to_dict("records") 
prediction_instance = prediction_sample_df.values.tolist() prediction = endpoint.predict(prediction_instance) print( f"Prediction request: user_id - {entity_ids} - values - {prediction_instance} - prediction - {prediction[0]}" ) time.sleep(latency) """ Explanation: Helpers End of explanation """ # Initiate the clients bq_client = # TODO 1: Your code goes here(project=PROJECT_ID, location=LOCATION) vertex_ai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI) """ Explanation: Setting the real-time scenario In order to make real-time churn predictions, you need to: Collect the historical data about users' events and behaviors Design your data model, build your features and ingest them into the Feature store to serve both offline for training and online for serving Define churn and get the data to train a churn model Train the model at scale Deploy the model to an endpoint and return the prediction score in real time You will cover those steps in detail below. Initiate clients End of explanation """ features_sql_query = f""" CREATE OR REPLACE TABLE `{PROJECT_ID}.{BQ_DATASET}.{FEATURES_TABLE}` AS WITH # query to extract demographic data for each user --------------------------------------------------------- get_demographic_data AS ( SELECT * EXCEPT (row_num) FROM ( SELECT user_pseudo_id, geo.country as country, device.operating_system as operating_system, device.language as language, ROW_NUMBER() OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp DESC) AS row_num FROM `firebase-public-project.analytics_153293282.events_*`) WHERE row_num = 1), # query to extract behavioral data for each user ---------------------------------------------------------- get_behavioral_data AS ( SELECT event_timestamp, user_pseudo_id, SUM(IF(event_name = 'user_engagement', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_user_engagement, SUM(IF(event_name = 
'level_start_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_level_start_quickplay, SUM(IF(event_name = 'level_end_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_level_end_quickplay, SUM(IF(event_name = 'level_complete_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_level_complete_quickplay, SUM(IF(event_name = 'level_reset_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_level_reset_quickplay, SUM(IF(event_name = 'post_score', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_post_score, SUM(IF(event_name = 'spend_virtual_currency', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_spend_virtual_currency, SUM(IF(event_name = 'ad_reward', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_ad_reward, SUM(IF(event_name = 'challenge_a_friend', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_challenge_a_friend, SUM(IF(event_name = 'completed_5_levels', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_completed_5_levels, SUM(IF(event_name = 'use_extra_steps', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW ) AS cnt_use_extra_steps, FROM ( SELECT e.* FROM `firebase-public-project.analytics_153293282.events_*` AS e ) ) SELECT 
-- PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', CONCAT('{TODAY}', ' ', STRING(TIME_TRUNC(CURRENT_TIME(), SECOND))), 'UTC') as timestamp, PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(beh.event_timestamp))) AS timestamp, dem.*, CAST(IFNULL(beh.cnt_user_engagement, 0) AS FLOAT64) AS cnt_user_engagement, CAST(IFNULL(beh.cnt_level_start_quickplay, 0) AS FLOAT64) AS cnt_level_start_quickplay, CAST(IFNULL(beh.cnt_level_end_quickplay, 0) AS FLOAT64) AS cnt_level_end_quickplay, CAST(IFNULL(beh.cnt_level_complete_quickplay, 0) AS FLOAT64) AS cnt_level_complete_quickplay, CAST(IFNULL(beh.cnt_level_reset_quickplay, 0) AS FLOAT64) AS cnt_level_reset_quickplay, CAST(IFNULL(beh.cnt_post_score, 0) AS FLOAT64) AS cnt_post_score, CAST(IFNULL(beh.cnt_spend_virtual_currency, 0) AS FLOAT64) AS cnt_spend_virtual_currency, CAST(IFNULL(beh.cnt_ad_reward, 0) AS FLOAT64) AS cnt_ad_reward, CAST(IFNULL(beh.cnt_challenge_a_friend, 0) AS FLOAT64) AS cnt_challenge_a_friend, CAST(IFNULL(beh.cnt_completed_5_levels, 0) AS FLOAT64) AS cnt_completed_5_levels, CAST(IFNULL(beh.cnt_use_extra_steps, 0) AS FLOAT64) AS cnt_use_extra_steps, FROM get_demographic_data dem LEFT OUTER JOIN get_behavioral_data beh ON dem.user_pseudo_id = beh.user_pseudo_id """ run_bq_query(features_sql_query) """ Explanation: Identify users and build your features In this section we build the static features we want to fetch from Vertex AI Feature Store. In particular, we will cover the following steps: Identify users, process demographic features and process behavioral features within the last 24 hours using BigQuery Set up the feature store Register features using Vertex AI Feature Store and the SDK. Below you have a picture that shows the process. <img src="./assets/feature_store_ingestion_2.png"> The original dataset contains raw event data that we cannot ingest into the feature store as-is. We need to pre-process the raw data in order to get user features. 
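The SQL window `RANGE BETWEEN 86400000000 PRECEDING AND CURRENT ROW` counts events in the trailing 86,400,000,000 microseconds, i.e. 24 hours. The same trailing count can be approximated locally with pandas on a tiny made-up event log, which is handy for sanity-checking the query (a sketch: pandas' time-based rolling window excludes the exact left boundary, while BigQuery's RANGE frame is inclusive, so edge timestamps may differ):

```python
import pandas as pd

# Hypothetical events for a single user, indexed by timestamp.
events = pd.DataFrame(
    {
        "event_timestamp": pd.to_datetime(
            ["2018-10-01 10:00", "2018-10-01 12:00",
             "2018-10-02 09:00", "2018-10-03 11:00"]
        ),
        "event_name": ["user_engagement", "post_score",
                       "user_engagement", "user_engagement"],
    }
).set_index("event_timestamp")

# Count 'user_engagement' events in the trailing 24 hours, like
# cnt_user_engagement in the SQL above.
is_engagement = (events["event_name"] == "user_engagement").astype(int)
cnt_user_engagement = is_engagement.rolling("24h").sum()
print(cnt_user_engagement.tolist())
```

Each row's value is the number of engagement events in the 24 hours ending at that row's timestamp, mirroring the per-row window the query computes for every user.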
Notice that we simulate those transformations at different points in time (today and tomorrow). Label, Demographic and Behavioral Transformations This section is based on the Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML blog article by Minhaz Kazi and Polong Lin. You will adapt it in order to turn a batch churn prediction (using features within the first 24 hours of a user's first engagement) into a real-time churn prediction (using features within the first 24 hours of a user's last engagement). End of explanation """ try: mobile_gaming_feature_store = Featurestore.create( featurestore_id=FEATURESTORE_ID, online_store_fixed_node_count=ONLINE_STORE_NODES_COUNT, labels={"team": "dataoffice", "app": "mobile_gaming"}, sync=True, ) except RuntimeError as error: print(error) else: FEATURESTORE_RESOURCE_NAME = mobile_gaming_feature_store.resource_name print(f"Feature store created: {FEATURESTORE_RESOURCE_NAME}") """ Explanation: Create a Vertex AI Feature store and ingest your features Now you have the wide table of features. It is time to ingest them into the feature store. Before moving on, you may have a question: why do I need a feature store at this point in this scenario? One reason is to make those features accessible across teams by calculating them once and reusing them many times. To make that possible, you also need to be able to monitor those features over time to guarantee freshness and, if needed, trigger a new feature engineering run to refresh them. If this is not your case, I will give even more reasons why you should consider a feature store in the following sections. Just keep following me for now. One of the most important things is its data model. As you can see in the picture below, Vertex AI Feature Store organizes resources hierarchically in the following order: Featurestore -&gt; EntityType -&gt; Feature. You must create these resources before you can ingest data into Vertex AI Feature Store. 
<img src="./assets/feature_store_data_model_3.png"> In our case we are going to create the mobile_gaming featurestore resource containing the user entity type and all its associated features, such as country or the number of times a user challenged a friend (cnt_challenge_a_friend). Create featurestore, mobile_gaming You need to create a featurestore resource to contain entity types, features, and feature values. In your case, you would call it mobile_gaming. End of explanation """ try: user_entity_type = mobile_gaming_feature_store.create_entity_type( entity_type_id=ENTITY_ID, description="User Entity", sync=True ) except RuntimeError as error: print(error) else: USER_ENTITY_RESOURCE_NAME = user_entity_type.resource_name print("Entity type name is", USER_ENTITY_RESOURCE_NAME) """ Explanation: Create the User entity type and its features You define your own entity types, which represent the level at which you decide to organize your features. In your case, you would have a user entity. End of explanation """ # Import required libraries from google.cloud.aiplatform_v1beta1 import \ FeaturestoreServiceClient as v1beta1_FeaturestoreServiceClient from google.cloud.aiplatform_v1beta1.types import \ entity_type as v1beta1_entity_type_pb2 from google.cloud.aiplatform_v1beta1.types import \ featurestore_monitoring as v1beta1_featurestore_monitoring_pb2 from google.cloud.aiplatform_v1beta1.types import \ featurestore_service as v1beta1_featurestore_service_pb2 from google.protobuf.duration_pb2 import Duration v1beta1_admin_client = v1beta1_FeaturestoreServiceClient( client_options={"api_endpoint": API_ENDPOINT} ) v1beta1_admin_client.update_entity_type( v1beta1_featurestore_service_pb2.UpdateEntityTypeRequest( entity_type=v1beta1_entity_type_pb2.EntityType( name=v1beta1_admin_client.entity_type_path( PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_ID ), monitoring_config=v1beta1_featurestore_monitoring_pb2.FeaturestoreMonitoringConfig( 
snapshot_analysis=v1beta1_featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
                    monitoring_interval=Duration(seconds=86400),  # 1 day
                ),
            ),
        ),
    )
)
"""
Explanation: Set Feature Monitoring Notice that Vertex AI Feature store has a feature monitoring capability. It is in preview, so you need to use the v1beta1 Python client, which is a lower-level API than the one we've used so far in this notebook. The easiest way to set this up for now is using the console UI. For completeness, below is an example of how to do this using the v1beta1 SDK.
End of explanation """
feature_configs = {
    "country": {
        "value_type": "STRING",
        "description": "The country of customer",
        "labels": {"status": "passed"},
    },
    "operating_system": {
        "value_type": "STRING",
        "description": "The operating system of device",
        "labels": {"status": "passed"},
    },
    "language": {
        "value_type": "STRING",
        "description": "The language of device",
        "labels": {"status": "passed"},
    },
    "cnt_user_engagement": {
        "value_type": "DOUBLE",
        "description": "A variable of user engagement level",
        "labels": {"status": "passed"},
    },
    "cnt_level_start_quickplay": {
        "value_type": "DOUBLE",
        "description": "A variable of user engagement with start level",
        "labels": {"status": "passed"},
    },
    "cnt_level_end_quickplay": {
        "value_type": "DOUBLE",
        "description": "A variable of user engagement with end level",
        "labels": {"status": "passed"},
    },
    "cnt_level_complete_quickplay": {
        "value_type": "DOUBLE",
        "description": "A variable of user engagement with complete status",
        "labels": {"status": "passed"},
    },
    "cnt_level_reset_quickplay": {
        "value_type": "DOUBLE",
        "description": "A variable of user engagement with reset status",
        "labels": {"status": "passed"},
    },
    "cnt_post_score": {
        "value_type": "DOUBLE",
        "description": "A variable of user score",
        "labels": {"status": "passed"},
    },
    "cnt_spend_virtual_currency": {
        "value_type": "DOUBLE",
        "description": "A variable of user virtual amount",
        "labels": {"status": "passed"},
    },
    "cnt_ad_reward": {
        "value_type": "DOUBLE",
"description": "A variable of user reward", "labels": {"status": "passed"}, }, "cnt_challenge_a_friend": { "value_type": "DOUBLE", "description": "A variable of user challenges with friends", "labels": {"status": "passed"}, }, "cnt_completed_5_levels": { "value_type": "DOUBLE", "description": "A variable of user level 5 completed", "labels": {"status": "passed"}, }, "cnt_use_extra_steps": { "value_type": "DOUBLE", "description": "A variable of user extra steps", "labels": {"status": "passed"}, }, } """ Explanation: Create features In order to ingest features, you need to provide feature configuration and create them as featurestore resources. Create Feature configuration For simplicity, I created the configuration in a declarative way. Of course, we can create an helper function to built it from Bigquery schema. Also notice that we want to pass some feature on-fly. In this case, it country, operating system and language looks perfect for that. End of explanation """ try: user_entity_type.batch_create_features(feature_configs=feature_configs, sync=True) except RuntimeError as error: print(error) else: for feature in user_entity_type.list_features(): print("") print(f"The resource name of {feature.name} feature is", feature.resource_name) """ Explanation: Create features using batch_create_features method Once you have the feature configuration, you can create feature resources using batch_create_features method. End of explanation """ feature_query = "feature_id:cnt_user_engagement" searched_features = Feature.search(query=feature_query) searched_features """ Explanation: Search features Vertex AI Feature store supports serching capabilities. Below you have a simple example that show how to filter a feature based on its name. 
End of explanation """ FEATURES_IDS = [feature.name for feature in user_entity_type.list_features()] try: user_entity_type.ingest_from_bq( feature_ids=FEATURES_IDS, feature_time=FEATURE_TIME, bq_source_uri=BQ_SOURCE_URI, entity_id_field=ENTITY_ID_FIELD, disable_online_serving=False, worker_count=10, sync=True, ) except RuntimeError as error: print(error) """ Explanation: Ingest features At that point, you create all resources associated to the feature store. You just need to import feature values before you can use them for online/offline serving. End of explanation """ read_instances_query = f""" CREATE OR REPLACE TABLE `{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}` AS WITH # get training threshold ---------------------------------------------------------------------------------- get_training_threshold AS ( SELECT (MAX(event_timestamp) - 86400000000) AS training_thrs FROM `firebase-public-project.analytics_153293282.events_*` WHERE event_name="user_engagement" AND PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(event_timestamp))) < '{TODAY}'), # query to create label ----------------------------------------------------------------------------------- get_label AS ( SELECT user_pseudo_id, user_last_engagement, #label = 1 if last_touch within last hour hr else 0 IF (user_last_engagement < ( SELECT training_thrs FROM get_training_threshold), 1, 0 ) AS churned FROM ( SELECT user_pseudo_id, MAX(event_timestamp) AS user_last_engagement FROM `firebase-public-project.analytics_153293282.events_*` WHERE event_name="user_engagement" AND PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(event_timestamp))) < '{TODAY}' GROUP BY user_pseudo_id ) GROUP BY 1, 2), # query to create class weights -------------------------------------------------------------------------------- get_class_weights AS ( SELECT CAST(COUNT(*) / (2*(COUNT(*) - SUM(churned))) AS STRING) AS class_weight_zero, 
CAST(COUNT(*) / (2*SUM(churned)) AS STRING) AS class_weight_one,
    FROM get_label )
SELECT
    user_pseudo_id as user,
    PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', CONCAT('{TODAY}', ' ', STRING(TIME_TRUNC(CURRENT_TIME(), SECOND))), 'UTC') as timestamp,
    churned AS churned,
    CASE WHEN churned = 0 THEN (
        SELECT class_weight_zero
        FROM get_class_weights)
    ELSE (
        SELECT class_weight_one
        FROM get_class_weights)
    END AS class_weights
FROM get_label
"""
"""
Explanation: Train and deploy a real-time churn ML model using Vertex AI Training and Endpoints Now that you have your features, you are almost ready to train your churn model. Below is a high-level picture. <img src="./assets/train_model_4.png"> Let's dive into each step of this process. Fetch training data with point-in-time query using BigQuery and Vertex AI Feature store As we mentioned above, in real-time churn prediction it is very important to define the label you want to predict with your model. Let's assume that you decide to predict the churn probability over the last 24 hr. So now you have your label. The next step is to define your training sample. But let's think about that for a second. In this real-time churn system, you have a high volume of transactions you could use to calculate features, and these transactions keep flowing in and are collected constantly over time. This implies that you always have fresh data with which to reconstruct features. And depending on when you decide to calculate one feature or another, you can end up with a set of features that are not aligned in time. When you have labels available, it would be incredibly difficult to say which set of features contains the most up-to-date historical information associated with the label you want to predict. And when you are not able to guarantee that, the performance of your model will be badly affected, because when it goes live you serve features that are not representative of the data and labels seen in the field.
So you need a way to get the most up-to-date features you calculated over time before the label becomes available, in order to avoid this informational skew. With Vertex AI Feature store, you can fetch feature values corresponding to a particular timestamp thanks to its point-in-time lookup capability. In our case, it would be the timestamp associated with the label you want to predict with your model. In this way, you avoid data leakage and you get the most up-to-date features to train your model. Let's see how to do that. Define query for reading instances at a specific point in time First, you need to define the set of reading instances at the specific point in time you want to consider in order to generate your training sample.
End of explanation """
run_bq_query(read_instances_query)
"""
Explanation: Create the BigQuery instances table You store those instances in a BigQuery table.
End of explanation """
# Serve features for batch training
# TODO 2: Your code goes here(
    gcs_destination_output_uri_prefix=GCS_DESTINATION_OUTPUT_URI,
    gcs_destination_type="csv",
    serving_feature_ids=SERVING_FEATURE_IDS,
    read_instances_uri=READ_INSTANCES_URI,
    pass_through_fields=["churned", "class_weights"],
)
"""
Explanation: Serve features for batch training Then you use the batch_serve_to_gcs method in order to generate your training sample and store it as a CSV file in a target Cloud Storage bucket.
End of explanation """
!rm -Rf train_package #if train_package already exists
!mkdir -m 777 -p trainer data/ingest data/raw model config
!gsutil -m cp -r $GCS_DESTINATION_OUTPUT_URI/*.csv data/ingest
!head -n 1000 data/ingest/*.csv > data/raw/sample.csv
"""
Explanation: Train a custom model on Vertex AI with Training Pipelines Now that we have produced the training sample, we use the Vertex AI SDK to train a new version of the model using Vertex AI Training.
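The point-in-time lookup described above can be illustrated locally with pandas `merge_asof` — a sketch we added for illustration, with made-up data and column names: for each label timestamp it keeps only the latest feature row at or before that timestamp, which is exactly what prevents post-label information from leaking into training.

```python
import pandas as pd

# Feature values computed at different times for one user (hypothetical data).
features = pd.DataFrame({
    "timestamp": pd.to_datetime(["2018-10-01", "2018-10-02", "2018-10-03"]),
    "user": ["u1", "u1", "u1"],
    "cnt_user_engagement": [3.0, 7.0, 9.0],
})

# Read instance: the label timestamp we want features "as of".
instances = pd.DataFrame({
    "timestamp": pd.to_datetime(["2018-10-02 12:00"]),
    "user": ["u1"],
    "churned": [0],
})

# merge_asof picks the latest feature row at or before each label timestamp.
training = pd.merge_asof(
    instances.sort_values("timestamp"),
    features.sort_values("timestamp"),
    on="timestamp", by="user",
)
print(training["cnt_user_engagement"].iloc[0])  # 7.0 (not the later 9.0)
```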
Create training package and training sample End of explanation """ !touch trainer/__init__.py %%writefile trainer/task.py import os from pathlib import Path import argparse import yaml import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder from sklearn.pipeline import Pipeline import xgboost as xgb import joblib import warnings warnings.filterwarnings("ignore") def get_args(): """ Get arguments from command line. Returns: args: parsed arguments """ parser = argparse.ArgumentParser() parser.add_argument( '--data_path', required=False, default=os.getenv('AIP_TRAINING_DATA_URI'), type=str, help='path to read data') parser.add_argument( '--learning_rate', required=False, default=0.01, type=int, help='number of epochs') parser.add_argument( '--model_dir', required=False, default=os.getenv('AIP_MODEL_DIR'), type=str, help='dir to store saved model') parser.add_argument( '--config_path', required=False, default='../config.yaml', type=str, help='path to read config file') args = parser.parse_args() return args def ingest_data(data_path, data_model_params): """ Ingest data Args: data_path: path to read data data_model_params: data model parameters Returns: df: dataframe """ # read training data df = pd.read_csv(data_path, sep=',', dtype={col: 'string' for col in data_model_params['categorical_features']}) return df def preprocess_data(df, data_model_params): """ Preprocess data Args: df: dataframe data_model_params: data model parameters Returns: df: dataframe """ # convert nan values because pd.NA ia not supported by SimpleImputer # bug in sklearn 0.23.1 version: https://github.com/scikit-learn/scikit-learn/pull/17526 # decided to skip NAN values for now df.replace({pd.NA: np.nan}, inplace=True) df.dropna(inplace=True) # get features and labels x = df[data_model_params['numerical_features'] + data_model_params['categorical_features'] + [ 
data_model_params['weight_feature']]] y = df[data_model_params['target']] # train-test split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=data_model_params['train_test_split']['test_size'], random_state=data_model_params['train_test_split'][ 'random_state']) return x_train, x_test, y_train, y_test def build_pipeline(learning_rate, model_params): """ Build pipeline Args: learning_rate: learning rate model_params: model parameters Returns: pipeline: pipeline """ # build pipeline pipeline = Pipeline([ # ('imputer', SimpleImputer(strategy='most_frequent')), ('encoder', OneHotEncoder(handle_unknown='ignore')), ('model', xgb.XGBClassifier(learning_rate=learning_rate, use_label_encoder=False, #deprecated and breaks Vertex AI predictions **model_params)) ]) return pipeline def main(): print('Starting training...') args = get_args() data_path = args.data_path learning_rate = args.learning_rate model_dir = args.model_dir config_path = args.config_path # read config file with open(config_path, 'r') as f: config = yaml.load(f, Loader=yaml.FullLoader) f.close() data_model_params = config['data_model_params'] model_params = config['model_params'] # ingest data print('Reading data...') data_df = ingest_data(data_path, data_model_params) # preprocess data print('Preprocessing data...') x_train, x_test, y_train, y_test = preprocess_data(data_df, data_model_params) sample_weight = x_train.pop(data_model_params['weight_feature']) sample_weight_eval_set = x_test.pop(data_model_params['weight_feature']) # train lgb model print('Training model...') xgb_pipeline = build_pipeline(learning_rate, model_params) # need to use fit_transform to get the encoded eval data x_train_transformed = xgb_pipeline[:-1].fit_transform(x_train) x_test_transformed = xgb_pipeline[:-1].transform(x_test) xgb_pipeline[-1].fit(x_train_transformed, y_train, sample_weight=sample_weight, eval_set=[(x_test_transformed, y_test)], sample_weight_eval_set=[sample_weight_eval_set], 
eval_metric='error', early_stopping_rounds=50, verbose=True) # save model print('Saving model...') model_path = Path(model_dir) model_path.mkdir(parents=True, exist_ok=True) joblib.dump(xgb_pipeline, f'{model_dir}/model.joblib') if __name__ == "__main__": main() """ Explanation: Create training script You create the training script to train a XGboost model. End of explanation """ %%writefile requirements.txt pip==22.0.4 PyYAML==5.3.1 joblib==0.15.1 numpy==1.18.5 pandas==1.0.4 scipy==1.4.1 scikit-learn==0.23.1 xgboost==1.1.1 """ Explanation: Create requirements.txt You write the requirement file to build the training container. End of explanation """ %%writefile config/config.yaml data_model_params: target: churned categorical_features: - country - operating_system - language numerical_features: - cnt_user_engagement - cnt_level_start_quickplay - cnt_level_end_quickplay - cnt_level_complete_quickplay - cnt_level_reset_quickplay - cnt_post_score - cnt_spend_virtual_currency - cnt_ad_reward - cnt_challenge_a_friend - cnt_completed_5_levels - cnt_use_extra_steps weight_feature: class_weights train_test_split: test_size: 0.2 random_state: 8 model_params: booster: gbtree objective: binary:logistic max_depth: 80 n_estimators: 100 random_state: 8 """ Explanation: Create training configuration You create a training configuration with data and model params. End of explanation """ test_job_script = f""" gcloud ai custom-jobs local-run \ --executor-image-uri={BASE_CPU_IMAGE} \ --python-module=trainer.task \ --extra-dirs=config,data,model \ -- \ --data_path data/raw/sample.csv \ --model_dir model \ --config_path config/config.yaml """ with open("local_train_job_run.sh", "w+") as s: s.write(test_job_script) s.close() # Launch the job locally !chmod +x ./local_train_job_run.sh && ./local_train_job_run.sh """ Explanation: Test the model locally with local-run You leverage the Vertex AI SDK local-run to test the script locally. 
End of explanation """ !mkdir -m 777 -p {MODEL_PACKAGE_PATH} && mv -t {MODEL_PACKAGE_PATH} trainer requirements.txt config train_job_script = f""" gcloud ai custom-jobs create \ --region={REGION} \ --display-name={TRAIN_JOB_NAME} \ --worker-pool-spec=machine-type={TRAINING_MACHINE_TYPE},replica-count={TRAINING_REPLICA_COUNT},executor-image-uri={BASE_CPU_IMAGE},local-package-path={MODEL_PACKAGE_PATH},python-module=trainer.task,extra-dirs=config \ --args=--data_path={DATA_PATH},--model_dir={MODEL_DIR},--config_path=config/config.yaml \ --verbosity='info' """ with open("train_job_run.sh", "w+") as s: s.write(train_job_script) s.close() # Launch the Custom training Job using chmod command # TODO 3: Your code goes here """ Explanation: Create and Launch the Custom training pipeline to train the model with autopackaging. You use autopackaging from Vertex AI SDK in order to Build a custom Docker training image. Push the image to Container Registry. Start a Vertex AI CustomJob. End of explanation """ TRAIN_JOB_RESOURCE_NAME = "projects/292484118381/locations/us-central1/customJobs/7374149059830874112" # Replace this with your job path # Check the status of training job !gcloud ai custom-jobs describe $TRAIN_JOB_RESOURCE_NAME !gsutil ls $DESTINATION_URI """ Explanation: Check the status of training job and the result. You can use the following commands to monitor the status of your job and check for the artefact in the bucket once the training successfully run. End of explanation """ # Upload the model xgb_model = upload_model( display_name=MODEL_NAME, serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI, artifact_uri=DESTINATION_URI, ) """ Explanation: Upload and Deploy Model on Vertex AI Endpoint You use a custom function to upload your model to a Vertex AI Model Registry. 
End of explanation """
# Create endpoint
endpoint = create_endpoint(display_name=ENDPOINT_NAME)

# Deploy the model
deployed_model = # TODO 4: Your code goes here(
    model=xgb_model,
    machine_type=SERVING_MACHINE_TYPE,
    endpoint=endpoint,
    deployed_model_display_name=DEPLOYED_MODEL_NAME,
    min_replica_count=1,
    max_replica_count=1,
    sync=False,
)
"""
Explanation: Deploy Model to the same Endpoint with Traffic Splitting Now that the model is registered in the model registry, you can deploy it to an endpoint. So you first create the endpoint and then deploy your model.
End of explanation """
# Simulate online predictions
# TODO 5: Your code goes here(endpoint=endpoint, n_requests=1000, latency=1)
"""
Explanation: Serve ML features at scale with low latency At this point, you are ready to deploy your simple model, which requires fetching preprocessed attributes as input features in real time. Below you can see how it works <img src="./assets/online_serving_5.png" width="600"> But think about those features for a second. The behavioral features used to train your model cannot be computed on the fly when you serve the model online. How could you compute the number of times a user challenged a friend within the last 24 hours on the fly? You simply can't. This feature needs to be computed on the server side and served with low latency. And because BigQuery is not optimized for such read operations, we need a different service that allows singleton lookups, where the result is a single row with many columns. Also, even if that were not the case, when you deploy a model that requires preprocessing your data, you need to be sure to reproduce the same preprocessing steps you had when you trained it. If you are not able to do that, a skew between training and serving data will arise and badly affect your model performance (and, in the worst scenario, break your serving system).
You need a way to mitigate this so that you don't need to implement those preprocessing steps online, but can instead serve the same aggregated features you already have for training to generate online predictions. These are other valuable reasons to introduce Vertex AI Feature Store. With it, you have a service which helps you serve features at scale with low latency, exactly as they were available at training time, mitigating possible training-serving skew. Now that you know why you need a feature store, let's close this journey by deploying your model and using the feature store to retrieve features online, pass them to the endpoint and generate predictions. Time to simulate online predictions Once the model is ready to receive prediction requests, you can use the simulate_prediction function to generate them. In particular, that function formats entities for prediction, retrieves static features with a singleton lookup operation from Vertex AI Feature store, and runs the prediction request and gets back the result, for a number of requests and a latency you define. It will take about 17 minutes to run this cell.
End of explanation """
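As a mental model of that online flow (not the Vertex AI SDK — the store, the dummy model and the function below are all stand-ins we made up), a dict plays the role of the online feature store and a plain function plays the role of the deployed endpoint:

```python
import time

# Local stand-in for the online serving flow (all names are ours).
ONLINE_STORE = {
    "user_1234": {"country": "US", "operating_system": "ANDROID",
                  "language": "en-us", "cnt_user_engagement": 42.0},
}

def endpoint_predict(instance):
    # Dummy "model": higher engagement -> lower churn probability.
    return 1.0 / (1.0 + instance["cnt_user_engagement"])

def simulate_prediction(user_id, latency=0.0):
    features = ONLINE_STORE[user_id]   # singleton lookup: one row, many columns
    time.sleep(latency)                # simulated request pacing
    return endpoint_predict(features)

print(round(simulate_prediction("user_1234"), 4))  # 0.0233
```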
JCardenasRdz/Machine-Learning-4-MRI
Infection_vs_Inflammation/Code/01-Process_Data.ipynb
mit
# Import Python Modules
import numpy as np
#import seaborn as sn
import matplotlib.pyplot as plt
%matplotlib inline
from pylab import *
import pandas as pd

# Import LOCAL functions written by me
from mylocal_functions import *
"""
Explanation: Goal: Differentiate Infections, sterile inflammation, and healthy tissue using MRI The following methods were used in this study: 1. T2 relaxation of the tissue without a contrast agent 2. Dynamic contrast-enhanced (DCE) MRI using Maltose as a T2-ex contrast agent 3. Chemical Exchange Saturation Transfer (CEST) MRI without a contrast agent Author: Julio Cárdenas-Rodríguez, Ph.D. email: cardenaj@email.arizona.edu Description of the data A total of XX mice were used in this study. Each mouse was infected as follows: - Right thigh: with approximately 100 uL of a solution of XX CFU/mL of E. Coli. - Left thigh: same dose but using a solution that contains heat-inactivated E. Coli. Both thighs can be seen in each image, and a total of five imaging slices were collected around the center of infection. The average signal for the following regions of interest (ROIs) was collected for all slices: Infected Site Apparently Healthy Tissue on the right thigh Sterile inflammation on the left thigh Apparently Healthy Tissue on the left thigh
End of explanation """
# Make list of all T2.txt files
T2_list = get_ipython().getoutput('ls ../Study_03_CBA/*T2.txt')
# Allocate variables needed for analysis
T2DF=pd.DataFrame()
TR=np.linspace(.012,.012*12,12)
# Fit T2 for all ROIs, slices and mice.
construct dataframe for names in T2_list: #Convert txt file to array YDataMatrix=txt_2_array(names) #Estimate T2 T2time=fitT2(TR,YDataMatrix) #convert to data frame df_T2=pd.DataFrame(T2time.T,columns=["Infected","Healthy_Right","Sterile_Inflammation","Healthy_Left"]) #df_T2=pd.DataFrame(T2time.T,columns=["ROI-1","ROI-2","ROI-3","ROI-4"]) df_info=name_2_df(names) df_final=pd.concat([df_T2,df_info], axis=1) T2DF=T2DF.append(df_final,ignore_index=True) # Plot distribution of estimated T2 for each slice #T2DF[T2DF.Slice==1].iloc[:,:4].plot.density(); title("Slice 01"); xlim((0.025,.15)) #T2DF[T2DF.Slice==2].iloc[:,:4].plot.density(); title("Slice 02"); xlim((0.025,.15)) #T2DF[T2DF.Slice==3].iloc[:,:4].plot.density(); title("Slice 03"); xlim((0.025,.15)) #T2DF[T2DF.Slice==4].iloc[:,:4].plot.density(); title("Slice 04"); xlim((0.025,.15)) T2DF[T2DF.Slice==5].iloc[:,:4].plot.density(); title("Slice 05"); xlim((0.025,.15)) """ Explanation: T2 relaxation End of explanation """ # list of files CEST_list=get_ipython().getoutput('ls ../Study_03_CBA/*CEST.txt') CEST_DF=pd.DataFrame() Z=np.zeros((4,110)) def normalize_data(DataMatrix): rows,cols = DataMatrix.shape newData = np.zeros_like(DataMatrix) for row in range(rows): newData[row,:]=DataMatrix[row,:]/DataMatrix[row,8] return newData for names in CEST_list: #Convert txt file to array D=txt_2_array(names); Zn=normalize_data(D.T) Z=np.concatenate((Z,Zn)) Z=Z[4::,9::] # define offsets in ppm a1=np.linspace(-55,-50,9) ppm=np.linspace(-8,8,101) full_ppm = np.concatenate((a1, ppm)) # Fit data from scipy.optimize import curve_fit import seaborn as sn from mylocal_functions import * def Lorentzian(sat_offset,Amp,Width,Center): Width = Width**2; Width=Width/4 xdata = (sat_offset-Center)**2 return (Amp*Width) / (Width +xdata ) def Lorentzian2(sat_offset,a1,w1,c1,a2,w2,c2): return Lorentzian(sat_offset,a1,w1,c1) + Lorentzian(sat_offset,a2,w2,c2) # Signal=1-Z[12,:] # fix xdata xdata=ppm-ppm[Signal.argmax()] # allocate fitting based on 
this A10, W10, C10 = 0.90, 1, 0 A20, W20, C20 = .1, 1, -4 A1L, W1L, C1L = 0.5, .1, -.1 A2L, W2L, C2L = 0, .1, -6 A1U, W1U, C1U = 1.0, 5, +.1 A2U, W2U, C2U = 1.0, 5, -1.0 scale0, scaleL, scaleU = 0, -1, +1 initial_guess = [A10, W10, C10, A20, W20, C20, scale0] lb = [A1L, W1L, C1L, A2L, W2L, C2L, scaleL] ub = [A1U, W1U, C1U, A2U, W2U, C2U, scaleU] p, cov = curve_fit(Lscale, xdata, Signal,p0=initial_guess,bounds=(lb, ub)) print(pars_hat) Yhat=Lscale(xdata,p[0],p[1],p[2],p[3],p[4],p[5],p[6]); plt.figure(figsize=(10,5)) plt.plot(xdata,Signal,'o',label='Signal'); plt.plot(xdata,Yhat,'-',label='Signal'); from mylocal_functions import * mylocal_functions.fit_L2_scale? plt.plot(ppm,Lscale(ppm,A10, W10, C10, A20, W20, C20, scale0)); initial_guess = [A10, W10, C10, A20, W20, C20, scale0]; lb = [A1L, W1L, C1L, A2L, W2L, C2L, scaleL]; ub = [A1U, W1U, C1U, A2U, W2U, C2U, scaleU]; A=[[initial_guess],[initial_guess]] array(A).shape ppm[Signal.argmax()] L= Lorentzian(ppm,1,1,1); plt.plot(L) plt.plot(ppm,Z.T,'.'); plt.xlim(-10,10) len(CEST_list) Z=np.zeros? Z=np.zeros plt.plot(ppm,Z,'--'); plt.xlim(-10,10) #Estimate T2 T2time=fitT2(TR,YDataMatrix) #convert to data frame df_T2=pd.DataFrame(T2time.T,columns=["Infected","Healthy_Right","Sterile_Inflammation","Healthy_Left"]) #df_T2=pd.DataFrame(T2time.T,columns=["ROI-1","ROI-2","ROI-3","ROI-4"]) df_info=name_2_df(names) df_final=pd.concat([df_T2,df_info], axis=1) T2DF=T2DF.append(df_final,ignore_index=True) df_info=name_2_df(names) df_info # Make list of all T2.txt files CEST_list=get_ipython().getoutput('ls ../Study_03_CBA/*T2.txt') for names in CEST_list: Ydata=txt_2_array(names) print(Ydata) df_info=name_2_df(names) def scale(y,index): return y/y[index for names in CEST_list: print(names) Ydata=txt_2_array(names) rows, cols = Ydata.shape for i in range(cols): ydata=Ydata[:,i]; ydata=ydata/ydata[9]; ydata=ydata[9:] integral=np.sum(yd) # Fit T2 for all ROIs, slices and mice. 
construct dataframe
for names in T2_list:
    #Convert txt file to array
    YDataMatrix=txt_2_array(names)
    #Estimate T2
    T2time=fitT2(TR,YDataMatrix)
    #convert to data frame
    df_T2=pd.DataFrame(T2time.T,columns=["Infected","Healthy_Right","Sterile_Inflammation","Healthy_Left"])
    #df_T2=pd.DataFrame(T2time.T,columns=["ROI-1","ROI-2","ROI-3","ROI-4"])
    df_info=name_2_df(names)
    df_final=pd.concat([df_T2,df_info], axis=1)
    T2DF=T2DF.append(df_final,ignore_index=True)
"""
Explanation: Fit CEST for each slice and mouse
End of explanation """
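The Lorentzian line shape used above for the CEST Z-spectra can be sanity-checked on synthetic data. This is a self-contained sketch we added — the amplitude, width and center values are made up, but the function has the same form as the notebook's `Lorentzian`:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, width, center):
    # Same form as the notebook's Lorentzian:
    # amp * (w^2/4) / (w^2/4 + (x - c)^2)
    w2 = width**2 / 4.0
    return amp * w2 / (w2 + (x - center) ** 2)

ppm = np.linspace(-8, 8, 101)
true_params = (0.9, 1.5, 0.0)              # made-up ground truth
signal = lorentzian(ppm, *true_params)

# Noiseless fit should recover the ground-truth parameters.
p, _ = curve_fit(lorentzian, ppm, signal, p0=[0.5, 1.0, 0.1])
print(np.round(p, 3))
```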
hyzhak/k-nn
k_nn.ipynb
mit
import pandas as pd # define column names names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'class'] # loading training data df = pd.read_csv('dataset/iris.data', header=None, names=names) df.head() """ Explanation: Load Data Set Tutorials: - https://kevinzakka.github.io/2016/07/13/k-nearest-neighbor/ - http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html - http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/ - https://www.dataquest.io/blog/k-nearest-neighbors-in-python/ - http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_ml/py_knn/py_knn_understanding/py_knn_understanding.html Related: - http://scikit-learn.org/stable/modules/neighbors.html - End of explanation """ import seaborn as sns import matplotlib import matplotlib.pyplot as plt # can choose different styles # print(plt.style.available) plt.style.use('fivethirtyeight') # list available fonts: [f.name for f in matplotlib.font_manager.fontManager.ttflist] matplotlib.rc('font', family='DejaVu Sans') sns.lmplot('sepal_length', 'sepal_width', data=df, hue='class', fit_reg=False) plt.show() sns.lmplot('petal_length', 'petal_width', data=df, hue='class', fit_reg=False) plt.show() """ Explanation: Visualize Data Set End of explanation """ import numpy as np from sklearn.model_selection import train_test_split # create design matrix X and target vector Y X = np.array(df.ix[:, 0:4]) # end index is exclusive y = np.array(df['class']) # another way of indexing a pandas df print('{}, {}'.format(len(X), len(y))) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) print('X_train {}, X_test {}, y_train {}, y_test {}'.format(len(X_train), len(X_test), len(y_train), len(y_test))) """ Explanation: Train Split test and train data End of explanation """ from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score # instantiate lerning model (k 
= 11)
knn = KNeighborsClassifier(n_neighbors=11)
# fitting the model
knn.fit(X_train, y_train)
# predict the response
pred = knn.predict(X_test)
print(accuracy_score(y_test, pred))
"""
Explanation: Define classifier
End of explanation """
from sklearn.model_selection import cross_val_score
# creating odd list of K for KNN
myList = list(range(1,64))
# subsetting just the odd ones
neighbors = list(filter(lambda x: x % 2 != 0, myList))
# empty list that will hold cv scores
cv_scores = []
cv_scores_std = []
# perform 10-fold cross validation
for k in neighbors:
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X_train, y_train, cv=10, scoring='accuracy')
    cv_scores.append(scores.mean())
    cv_scores_std.append(scores.std())
"""
Explanation: k-fold cross validation [!] Using the test set for hyperparameter tuning can lead to overfitting.
End of explanation """
# changing to misclassification error
MSE = [1 - x for x in cv_scores]
# determining best k
optimal_k = neighbors[MSE.index(min(MSE))]
print('the optimal number of neighbors is {}'.format(optimal_k))
"""
Explanation: plot the misclassification error versus K
End of explanation """
cv_scores_for_test = []
# evaluate each k on the held-out test set
for k in neighbors:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    cv_scores_for_test.append(accuracy_score(y_test, knn.predict(X_test)))
MSE_test = [1 - x for x in cv_scores_for_test]

# for [:2] features
# evaluate each k on the held-out test set
cv_scores_for_test_0_2 = []
for k in neighbors:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train[:,:2], y_train)
    cv_scores_for_test_0_2.append(accuracy_score(y_test, knn.predict(X_test[:,:2])))
MSE_test_0_2 = [1 - x for x in cv_scores_for_test_0_2]

# for [2:4] features
# evaluate each k on the held-out test set
cv_scores_for_test_2_4 = []
for k in neighbors:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train[:,2:4], y_train)
    cv_scores_for_test_2_4.append(accuracy_score(y_test,
knn.predict(X_test[:,2:4]))) MSE_test_2_4 = [1 - x for x in cv_scores_for_test_2_4] # plot miscllassification error vs k plt.clf() plt.plot(neighbors, MSE, label='k-fold') cv_low = [x - x_std for x, x_std in zip(MSE, cv_scores_std)] cv_hi = [x + x_std for x, x_std in zip(MSE, cv_scores_std)] plt.fill_between(neighbors, cv_low, cv_hi, label='k-fold(deviation)', alpha=0.3) plt.plot(neighbors, MSE_test, label='whole dataset and features') plt.plot(neighbors, MSE_test_0_2, label='features [:2]') plt.plot(neighbors, MSE_test_2_4, label='features [2:4]') plt.xlabel('number of neighbors K') plt.ylabel('misclassification error') plt.legend() plt.show() """ Explanation: scores the missclassification error for pure KNeighborsClassifier End of explanation """ import collections def train(X_train, y_train): # do nothing return def predict(X_train, y_train, x_test, k): # first we compute the euclidean distance distances = [ [np.sqrt(np.sum(np.square(x_test - x_train))), i] for i, x_train in enumerate(X_train) ] # sort the list distances = sorted(distances) # make a list of the k neighbors' targets targets = [y_train[distance[1]] for distance in distances[:k]] # return most common target return collections.Counter(targets).most_common(1)[0][0] def k_nearest_neighbour(X_train, y_train, X_test, k): # train on the input data train(X_train, y_train) # loop over all observations return [predict(X_train, y_train, x_test, k) for x_test in X_test] # making our predictions pred = k_nearest_neighbour(X_train, y_train, X_test, 1) # transform the list into an array pred = np.asarray(pred) # evaluating accuracy accuracy = accuracy_score(y_test, pred) print('\nThe accuracy of our classifier is {}'.format(accuracy)) """ Explanation: K-nn from scratch End of explanation """ # making our predictions pred = k_nearest_neighbour(X_train[:,:2], y_train, X_test[:,:2], 1) # transform the list into an array pred = np.asarray(pred) # evaluating accuracy accuracy = accuracy_score(y_test, pred) 
print('\nThe accuracy of our classifier is {}'.format(accuracy)) """ Explanation: get prediction for [:2] features End of explanation """ # making our predictions pred = k_nearest_neighbour(X_train[:,2:4], y_train, X_test[:,2:4], 1) # transform the list into an array pred = np.asarray(pred) # evaluating accuracy accuracy = accuracy_score(y_test, pred) print('\nThe accuracy of our classifier is {}'.format(accuracy)) """ Explanation: get prediction for [2:4] features End of explanation """ def label_to_int(labels): return [list(set(labels)).index(y_value) for y_value in labels] # use color map, otherwise it will be grayscale from matplotlib import cm # choose 2 features to classify features_indexes = [0,1] # Plotting decision regions x_min, x_max = X[:, features_indexes[0]].min() - 1, X[:, features_indexes[0]].max() + 1 y_min, y_max = X[:, features_indexes[1]].min() - 1, X[:, features_indexes[1]].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1)) knn = KNeighborsClassifier(n_neighbors=13) knn.fit(X_train[:, features_indexes], y_train) Z = knn.predict(np.c_[xx.ravel(), yy.ravel()]) Z = np.array(label_to_int(Z)) Z = Z.reshape(xx.shape) # TODO: try to use seaborn instead plt.contourf(xx, yy, Z, alpha=0.4, cmap=cm.jet) plt.scatter(X[:, features_indexes[0]], X[:, features_indexes[1]], c=[list(set(y)).index(y_value) for y_value in y], alpha=0.9, cmap=cm.jet) plt.show() """ Explanation: Decision Regions End of explanation """
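The predict helper above computes distances in a plain Python loop, which is clear but slow for larger datasets. The same k-nearest-neighbour search can be sketched with NumPy broadcasting; note that knn_predict and the toy arrays below are illustrative additions, not code from the notebook:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_test, k):
    # Pairwise squared Euclidean distances via broadcasting:
    # (n_test, 1, d) minus (1, n_train, d) gives (n_test, n_train, d)
    d2 = np.sum((X_test[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    # Indices of the k closest training points for every test point
    nearest = np.argsort(d2, axis=1)[:, :k]
    # Majority vote among the k neighbours' labels
    return np.array([Counter(y_train[idx]).most_common(1)[0][0]
                     for idx in nearest])

# Tiny synthetic check: two well-separated clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.05, 0.05], [5.05, 5.05]])
print(knn_predict(X_train, y_train, X_test, k=3))  # -> [0 1]
```

Because squared distance is monotonic in distance, the sqrt used in the loop version can be skipped without changing which neighbours are selected.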
dennys-bd/Udacity-Deep-Learning
3 - Convolutional Neural Net/Project/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. 
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 3
sample_id = 1016
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    return (x / np.max(x))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    return np.eye(10)[x]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")

def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")

def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name="keep_prob")

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages.
You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], int(x_tensor.get_shape()[3]), conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.1))
    pool_strides = [1, pool_strides[0], pool_strides[1], 1]
    pool_ksize = [1, pool_ksize[0], pool_ksize[1], 1]
    conv_strides = [1, conv_strides[0], conv_strides[1], 1]
    conv_layer = tf.nn.conv2d(x_tensor, weights, strides=conv_strides, padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    conv_layer = tf.nn.relu(conv_layer)
    conv_layer = tf.nn.max_pool(
        conv_layer,
        ksize=pool_ksize,
        strides=pool_strides,
        padding='SAME')
    return conv_layer

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    return tf.reshape(x_tensor, shape=[-1, int(x_tensor.shape[1])*int(x_tensor.shape[2])*int(x_tensor.shape[3])])

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size).
Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weights = tf.Variable(tf.truncated_normal([int(x_tensor.shape[1]), num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
    return tf.nn.relu(tf.matmul(x_tensor, weights) + bias)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).
Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer.
For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    weights = tf.Variable(tf.truncated_normal([int(x_tensor.shape[1]), num_outputs], stddev=0.1))
    bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
    return tf.add(tf.matmul(x_tensor, weights), bias)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).
Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: #conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) layer = conv2d_maxpool(x, 32, (4, 4), (2, 2), (4, 4), (2, 2)) #layer = tf.nn.dropout(layer, keep_prob) layer = conv2d_maxpool(layer, 64, (3, 3), (2, 2), (3, 3), (2, 2)) layer = tf.nn.dropout(layer, keep_prob) #layer = conv2d_maxpool(layer, 128, (4, 4), (2, 2), (2, 2), (2, 2)) #layer = tf.nn.dropout(layer, keep_prob) # TODO: Apply a Flatten Layer # Function Definition from Above: layer = flatten(layer) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: layer = fully_conn(layer, 40) layer = fully_conn(layer, 40) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: #output(layer, num_outputs) # TODO: return output return output(layer, 10) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization.
The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print("Loss: {}".format(loss))
    print("Validation Accuracy: {}".format(accuracy))

"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 100
batch_size = 256
keep_probability = 0.8
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. 
Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
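One detail worth internalizing from the conv2d_maxpool function in this notebook: with 'SAME' padding, an output's spatial size depends only on the stride, out = ceil(in / stride). A small sketch of that shape arithmetic (the same_out helper is defined here purely for illustration, it is not part of the project code):

```python
import math

def same_out(size, stride):
    # 'SAME' padding: output size = ceil(input size / stride)
    return math.ceil(size / stride)

# Mirror the first block of conv_net: conv stride (2, 2), then pool stride (2, 2)
after_conv = same_out(32, 2)          # 32x32 input -> 16x16 after the convolution
after_pool = same_out(after_conv, 2)  # -> 8x8 after max pooling
print(after_conv, after_pool)  # -> 16 8
```

The same rule explains why each additional stride-2 block keeps halving the feature map until the Flatten layer sees only a small grid.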
Kaggle/learntools
notebooks/computer_vision/raw/ex6.ipynb
apache-2.0
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex6 import *

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory

# Reproducibility
def set_seed(seed=31415):
    np.random.seed(seed)
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()

# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
       titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells

# Load training and validation sets
ds_train_ = image_dataset_from_directory(
    '../input/car-or-truck/train',
    labels='inferred',
    label_mode='binary',
    image_size=[128, 128],
    interpolation='nearest',
    batch_size=64,
    shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
    '../input/car-or-truck/valid',
    labels='inferred',
    label_mode='binary',
    image_size=[128, 128],
    interpolation='nearest',
    batch_size=64,
    shuffle=False,
)

# Data Pipeline
def convert_to_float(image, label):
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    return image, label

AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
    ds_train_
    .map(convert_to_float)
    .cache()
    .prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
    ds_valid_
    .map(convert_to_float)
    .cache()
    .prefetch(buffer_size=AUTOTUNE)
)
"""
Explanation: Introduction
In these exercises, you'll explore what effect various random transformations have on an image, consider what kind of augmentation might be appropriate on a given dataset, and then use data augmentation with the Car or Truck dataset to train a custom network.
Run the cell below to set everything up!
End of explanation
"""
# all of the "factor" parameters indicate a percent-change
augment = keras.Sequential([
    # preprocessing.RandomContrast(factor=0.5),
    preprocessing.RandomFlip(mode='horizontal'), # meaning, left-to-right
    # preprocessing.RandomFlip(mode='vertical'), # meaning, top-to-bottom
    # preprocessing.RandomWidth(factor=0.15), # horizontal stretch
    # preprocessing.RandomRotation(factor=0.20),
    # preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
])

ex = next(iter(ds_train.unbatch().map(lambda x, y: x).batch(1)))

plt.figure(figsize=(10,10))
for i in range(16):
    image = augment(ex, training=True)
    plt.subplot(4, 4, i+1)
    plt.imshow(tf.squeeze(image))
    plt.axis('off')
plt.show()
"""
Explanation: (Optional) Explore Augmentation
Uncomment a transformation and run the cell to see what it does. You can experiment with the parameter values too, if you like. (The factor parameters should be greater than 0 and, generally, less than 1.) Run the cell again if you'd like to get a new random image.
End of explanation
"""
# View the solution (Run this code cell to receive credit!)
q_1.check()

# Lines below will give you a hint
#_COMMENT_IF(PROD)_
q_1.solution()
"""
Explanation: Do the transformations you chose seem reasonable for the Car or Truck dataset?
In this exercise, we'll look at a few datasets and think about what kind of augmentation might be appropriate. Your reasoning might be different from what we discuss in the solution. That's okay. The point of these problems is just to think about how a transformation might interact with a classification problem -- for better or worse.
The EuroSAT dataset consists of satellite images of the Earth classified by geographic feature. Below are a number of images from this dataset.
<figure> <img src="https://i.imgur.com/LxARYZe.png" width=600, alt="Sixteen satellite images labeled: SeaLake, PermanentCrop, Industrial, Pasture, Residential, and Forest."> </figure> 1) EuroSAT What kinds of transformations might be appropriate for this dataset? End of explanation """ # View the solution (Run this code cell to receive credit!) q_2.check() # Lines below will give you a hint #_COMMENT_IF(PROD)_ q_2.solution() """ Explanation: The TensorFlow Flowers dataset consists of photographs of flowers of several species. Below is a sample. <figure> <img src="https://i.imgur.com/Mt7PR2x.png" width=600, alt="Sixteen images of flowers labeled: roses, tulips, dandelion, and sunflowers"> </figure> 2) TensorFlow Flowers What kinds of transformations might be appropriate for the TensorFlow Flowers dataset? End of explanation """ from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.InputLayer(input_shape=[128, 128, 3]), # Data Augmentation # ____, # Block One layers.BatchNormalization(renorm=True), layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Two layers.BatchNormalization(renorm=True), layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three layers.BatchNormalization(renorm=True), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Head layers.BatchNormalization(renorm=True), layers.Flatten(), layers.Dense(8, activation='relu'), layers.Dense(1, activation='sigmoid'), ]) # Check your answer q_3.check() #%%RM_IF(PROD)%% # Check number of layers from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.InputLayer(input_shape=[128, 128, 3]), # Data Augmentation preprocessing.RandomFlip(mode='horizontal'), preprocessing.RandomRotation(factor=0.10), # 
Block One layers.BatchNormalization(renorm=True), layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Two layers.BatchNormalization(renorm=True), layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three layers.BatchNormalization(renorm=True), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Head layers.BatchNormalization(renorm=True), layers.Flatten(), layers.Dense(8, activation='relu'), layers.Dense(1, activation='sigmoid'), ]) q_3.assert_check_failed() #%%RM_IF(PROD)%% # Check layer types from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.InputLayer(input_shape=[128, 128, 3]), # Data Augmentation preprocessing.RandomRotation(factor=0.10), preprocessing.RandomContrast(factor=0.10), preprocessing.RandomFlip(mode='horizontal'), # Block One layers.BatchNormalization(renorm=True), layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Two layers.BatchNormalization(renorm=True), layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three layers.BatchNormalization(renorm=True), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Head layers.BatchNormalization(renorm=True), layers.Flatten(), layers.Dense(8, activation='relu'), layers.Dense(1, activation='sigmoid'), ]) q_3.assert_check_failed() #%%RM_IF(PROD)%% from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.InputLayer(input_shape=[128, 128, 3]), # Data Augmentation preprocessing.RandomContrast(factor=0.15), preprocessing.RandomFlip(mode='vertical'), 
preprocessing.RandomRotation(factor=0.15), # Block One layers.BatchNormalization(renorm=True), layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Two layers.BatchNormalization(renorm=True), layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three layers.BatchNormalization(renorm=True), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Head layers.BatchNormalization(renorm=True), layers.Flatten(), layers.Dense(8, activation='relu'), layers.Dense(1, activation='sigmoid'), ]) q_3.assert_check_failed() #%%RM_IF(PROD)%% from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.InputLayer(input_shape=[128, 128, 3]), # Data Augmentation preprocessing.RandomContrast(factor=0.10), preprocessing.RandomFlip(mode='horizontal'), preprocessing.RandomRotation(factor=0.10), # Block One layers.BatchNormalization(renorm=True), layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Two layers.BatchNormalization(renorm=True), layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Block Three layers.BatchNormalization(renorm=True), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'), layers.MaxPool2D(), # Head layers.BatchNormalization(renorm=True), layers.Flatten(), layers.Dense(8, activation='relu'), layers.Dense(1, activation='sigmoid'), ]) q_3.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_3.hint() #_COMMENT_IF(PROD)_ q_3.solution() """ Explanation: Now you'll use data augmentation with a custom convnet similar to the one you built in Exercise 5. 
Since data augmentation effectively increases the size of the dataset, we can increase the capacity of the model in turn without as much risk of overfitting.
3) Add Preprocessing Layers
Add these preprocessing layers to the given model.
preprocessing.RandomContrast(factor=0.10),
preprocessing.RandomFlip(mode='horizontal'),
preprocessing.RandomRotation(factor=0.10),
End of explanation
"""

optimizer = tf.keras.optimizers.Adam(epsilon=0.01)
model.compile(
    optimizer=optimizer,
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=50,
)

# Plot learning curves
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();

"""
Explanation: Now we'll train the model. Run the next cell to compile it with a loss and accuracy metric and fit it to the training set.
End of explanation
"""

# View the solution (Run this code cell to receive credit!)
q_4.solution()

"""
Explanation: 4) Train Model
Examine the training curves. Was there any sign of overfitting? How does the performance of this model compare to other models you've trained in this course?
End of explanation
"""
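The overfitting question above can be made quantitative. As a rough sketch (plain Python, with an invented `history` dict standing in for the real `history.history` plotted earlier), the per-epoch gap between training and validation accuracy can be tracked directly:

```python
# A minimal sketch of quantifying overfitting from a Keras-style history dict.
# The accuracy values below are made-up stand-ins for a real training run.
history = {
    "binary_accuracy":     [0.70, 0.80, 0.88, 0.93, 0.96],
    "val_binary_accuracy": [0.68, 0.77, 0.82, 0.83, 0.83],
}

def accuracy_gap(history):
    """Return per-epoch (train - validation) accuracy gaps."""
    return [t - v for t, v in zip(history["binary_accuracy"],
                                  history["val_binary_accuracy"])]

gaps = accuracy_gap(history)
# A gap that keeps growing while validation accuracy plateaus suggests overfitting.
growing = all(b >= a for a, b in zip(gaps, gaps[1:]))
print(gaps, growing)
```

A widening gap is only a heuristic; the learning-curve plots above remain the primary diagnostic.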
# Notebook: dahak-metagenomics/dahak, workflows/functional_inference/antibiotic_resistance/functional_inference_antibiotic_res_example.ipynb (license: bsd-3-clause)
from antibiotic_res import *

"""
Explanation: Summary: This notebook is for visualizing antibiotic resistance gene tables generated by ABRicate and SRST2.
Example Use Case: In this example, the complete Shakya et al. 2013 metagenome is being compared to small, medium, and large subsamples of itself after conservative or aggressive read filtering and assembly with SPAdes or MEGAHIT. The datasets used in this example are named according to their metagenome content, relative degree of read filtering, and assembler used where appropriate. SRST2 is appropriate for analysis of antibiotic resistance genes (ARG) in reads, while ABRicate is useful for analysis of ARG in contigs.
SRR606249 = Accession number for the complete Shakya et al. 2013 metagenome
subset50 = 50% of the complete Shakya et al. 2013 metagenome
subset25 = 25% of the complete Shakya et al. 2013 metagenome
subset10 = 10% of the complete Shakya et al. 2013 metagenome
pe.trim2 = Conservative read filtering
pe.trim30 = Aggressive read filtering
megahit = MEGAHIT assembly
spades = SPAdes assembly
Objectives:
Create a table with all of the genes found
Count the total number of genes found for each dataset
Count the number of unique genes found per dataset
Compare unique genes found using a presence/absence table
Compare results from reads and assemblies
End of explanation
"""

concat_abricate_files('*tab').to_csv('concatenated_abricate_results.txt')

calc_total_genes_abricate()

calculate_unique_genes_abricate()

create_abricate_presence_absence_gene_table()

np.version.version

interactive_map_abricate()

interactive_table_abricate()

df = pd.read_csv('concatenated_abricate_results.csv')
qgrid.show_grid(df, show_toolbar=True)

"""
Explanation: Analysis of antibiotic resistance genes in contigs using ABRicate
End of explanation
"""

concat_srst2_txt("srst2/*results.txt")

calc_total_genes_srst2()#.to_csv('')

calculate_unique_genes_srst2()#.to_csv('')

create_srst2_presence_absence_gene_table()

interactive_map_srst2()

interactive_table_srst2()

"""
Explanation: Analysis of SRST2 results
End of explanation
"""
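The presence/absence comparison named in the objectives can be sketched independently of the project's helpers. Everything below (the gene names, dataset labels, and the `presence_absence` function) is invented for illustration; the notebook's own `create_abricate_presence_absence_gene_table()` and `create_srst2_presence_absence_gene_table()` encapsulate the real logic:

```python
# A minimal sketch of building a presence/absence gene table from per-dataset
# gene sets, using plain Python. Gene and dataset names are invented examples.
hits = {
    "SRR606249":          {"tetA", "blaTEM", "sul1"},
    "subset50.pe.trim2":  {"tetA", "sul1"},
    "subset10.pe.trim30": {"tetA"},
}

def presence_absence(hits):
    """Return a {dataset: {gene: 0/1}} table covering every gene seen anywhere."""
    all_genes = sorted(set.union(*hits.values()))
    return {ds: {g: int(g in genes) for g in all_genes}
            for ds, genes in hits.items()}

table = presence_absence(hits)
```

The same dict-of-dicts structure loads directly into a pandas DataFrame for display, which is essentially what the interactive table helpers show.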
# Notebook: qaisermazhar/qaisermazhar.github.io, markdown_generator/talks.ipynb (license: mit)
import pandas as pd
import os

"""
Explanation: Talks markdown generator for academicpages
Takes a TSV of talks with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in talks.py. Run either from the markdown_generator folder after replacing talks.tsv with one containing your data.
TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.
End of explanation
"""

!cat talks.tsv

"""
Explanation: Data format
The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top.
Many of these fields can be blank, but the columns must be in the TSV.
Fields that cannot be blank: title, url_slug, date. All else can be blank.
type defaults to "Talk"
date must be formatted as YYYY-MM-DD.
url_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]
The combination of url_slug and date must be unique, as it will be the basis for your filenames
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
End of explanation
"""

talks = pd.read_csv("talks.tsv", sep="\t", header=0)
talks

"""
Explanation: Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \t.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
End of explanation
"""

html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
    }

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c,c) for c in text)
    else:
        return "False"

"""
Explanation: Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
End of explanation
"""

loc_dict = {}

for row, item in talks.iterrows():

    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: talks" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /talks/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    if len(str(item.date)) > 3:
        md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += "---\n"

    if len(str(item.talk_url)) > 3:
        md += "\n[More information here](" + item.talk_url + ")\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"

    md_filename = os.path.basename(md_filename)

    #print(md)

    with open("../_talks/" + md_filename, 'w') as f:
        f.write(md)

"""
Explanation: Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
End of explanation
"""

!ls ../_talks

!cat ../_talks/2013-03-01-tutorial-1.md

"""
Explanation: These files are in the talks directory, one directory below where we're working from.
End of explanation
"""
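The front-matter step of the loop above can be sketched in isolation. This uses a plain dict in place of a pandas row, and every field value below is invented for illustration:

```python
# A minimal, self-contained sketch of building the YAML front matter for one
# talk entry, mirroring the string-concatenation approach used in the loop.
def talk_front_matter(item):
    """Build YAML front matter for one talk entry (dict stand-in for a row)."""
    html_filename = "{}-{}".format(item["date"], item["url_slug"])
    md = '---\ntitle: "{}"\n'.format(item["title"])
    md += "collection: talks\n"
    md += 'type: "{}"\n'.format(item.get("type") or "Talk")   # type defaults to "Talk"
    md += "permalink: /talks/{}\n".format(html_filename)
    if item.get("date"):
        md += "date: {}\n".format(item["date"])
    if item.get("location"):
        md += 'location: "{}"\n'.format(item["location"])
    md += "---\n"
    return md

front = talk_front_matter({
    "title": "Example Talk", "url_slug": "example-talk",
    "date": "2024-03-01", "location": "Berlin, Germany",
})
print(front)
```

Keeping this as a function makes the date/url_slug uniqueness rule easy to enforce in one place before any files are written.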
# Notebook: danui/project-euler, solutions/jupyter/problem-34.ipynb (license: mit)
from scipy.special import factorial factorial(9) def fac(n): return int(factorial(n)) fac(3) import numpy as np N = 100000 for i in range(10, N+1): digits = list(''+str(i)) factorials = list(map(lambda x: fac(int(x)), digits)) summation = np.sum(factorials) #print('{} -> {} -> sum {}'.format(i, factorials, summation)) if i == summation: print(i) """ Explanation: Problem 34 https://projecteuler.net/problem=34 145 is a curious number, as 1! + 4! + 5! = 1 + 24 + 120 = 145. Find the sum of all numbers which are equal to the sum of the factorial of their digits. Note: as 1! = 1 and 2! = 2 are not sums they are not included. Brute force We can brute force this by iterating numbers 10 and above, breaking them down into their component digits, and summing up their factorials. The challenge here is when to stop. My hunch is that we can stop once the sum of the factorials is greater than the original number. Hmm... Okay, my hunch is wrong. 9! is 362880. Therefore 1! + 9! > 19. If we had stopped here we would never find 145. End of explanation """ from scipy.special import factorial def create_fac(): factorials = [int(factorial(i)) for i in range(10)] def f(x): return factorials[x] return f fac = create_fac() time fac(9) time factorial(9) import numpy as np def dfs(chain, max_length): concat_value = 0 for i in chain: concat_value *= 10 concat_value += i fac_sum = sum(map(lambda x: fac(x), chain)) #print(concat_value, fac_sum) if fac_sum == concat_value and len(chain) >= 2: yield concat_value if len(chain) < max_length: if len(chain) == 0: lo = 1 else: lo = 0 for i in range(lo, 10): for x in dfs(chain + [i], max_length): yield x print(sum(map(lambda x: fac(x), [1,4,5]))) list(dfs([],5)) """ Explanation: Brute force in the manner above is too slow. We can do better. For example we can save the result of the summations. I think there's a better way... Take 2 Firstly we only need factorials for digits 0 to 9. 
It's much faster to lookup an array than to execute scipy's factorial function every time. End of explanation """ # Define `fac(n)` function that returns the factorial of `n`. # For values of n in 0..9. We precalculate the factorials # since those are the only factorials we will need to solve # this problem. from scipy.special import factorial def create_fac(): factorials = [int(factorial(i)) for i in range(10)] def f(x): return factorials[x] return f fac = create_fac() # Define F(v) where v is a multiset in vector form. def F(v): assert len(v) == 10 s = 0 for i in range(10): s += fac(i) * v[i] return s # The following assertion should pass. assert F([0,1,0,0,1,1,0,0,0,0]) == 145 assert F([1,0,0,0,1,2,0,0,1,0]) == 40585 # Vector equality works in python assert [0,1,0,0,1,1,0,0,0,0] == [0,1,0,0,1,1,0,0,0,0] assert [1,1,0,0,0,0,0,0,0,0] != [0,0,0,0,0,0,0,0,1,1] # Define M(k) that converts a number k into a multiset vector. def M(k): v = [0] * 10 while k > 0: r = k % 10 k = k // 10 v[r] += 1 return v # The following assertions about M(k) should pass. assert M(145) == [0,1,0,0,1,1,0,0,0,0] assert M(541) == [0,1,0,0,1,1,0,0,0,0] assert M(5141) == [0,2,0,0,1,1,0,0,0,0] assert M(40585) == [1,0,0,0,1,2,0,0,1,0] # Assertions to test relationships mentioned in the design. assert F(M(145)) == 145 assert M(F([0,1,0,0,1,1,0,0,0,0])) == [0,1,0,0,1,1,0,0,0,0] """ Explanation: I have no idea where take 2 is going. Thinking in terms of multi sets Observe that the sum of factorials for 145 is the same as the sum of factorials for any permutation of those digits. 541, 154, etc. Let the notation {1145} denote a multi set of digits 1, 1, 4, and 5. For clarity, we will always list the smaller digits first. Also, since it is a multi set, repeated digits are allowed. For example, the repeated 1s in {1145}. Let the function F(x) map a multi set x into a sum of factorials, as per the definition in problem 34. Let the function M(y) map a number y into a multi set corresponding to its digits. 
For example M(123) == M(321). Using this notation we can say the following about 145. F(M(145)) == 145 Generalizing, we can rephrase problem 34 as finding all values y such that F(M(y)) == y. Observe that M(123) is the same as M(321). That is, a single multi set maps from multiple numbers. A many to one mapping. Another way to put this is to say that a multi set maps to one or more numbers. Given this, and that we are searching for factorial sums over multi sets, it makes more sense to search the space of multi sets than to search the space of numbers. Observe that given a multi set, there is exactly one factorial sum. Then we can evaluate if that sum is a curious number by computing its multi set. Therefore we can rephrase the problem in terms of multi sets as follows. This form is better suited for searching the space of multi sets. M(F(x)) == x The next piece of the puzzle is how to navigate the space of multi sets in an efficient manner. Also, how would we know when to stop? Vector representation for multi sets Since the only set members in our multi sets are digits, we can represent our multi sets as vectors of digit counts. For example the multi set {1145} would be a vector [0,2,0,0,1,1,0,0,0,0]. Note: The count at the 0-th position of the vector is also important because 0! == 1. Multi set equality using this notation is when two vectors have the same count at each digit position. In Python you can directly use the == operator. Code End of explanation """ for i in range(10): print('{}! = {}'.format(i, fac(i))) """ Explanation: A list of factorials End of explanation """ v = M(170) print('F(v) = {}, M(F(v)) = {}'.format(F(v), M(F(v)))) v = M(F(v)) """ Explanation: Chaining factorials Seed with a number in the first cell below. Then keep refreshing the second cell below. Seems like there is a loop. ..., 169, 363601, 1454, 169, ... 
End of explanation
"""

for number in range(160,169):
    first_number = number
    known = set()
    chain = list()
    while number not in known:
        vector = M(number)
        factorial_sum = F(vector)
        #print('Number: {} -> {} -> {}'.format(number, vector, factorial_sum))
        known.add(number)
        chain.append(number)
        number = factorial_sum
    repeated_number = number
    #print('Number: {}'.format(number))
    print('Chain {} -> {}: {}'.format(first_number, repeated_number, chain))

"""
Explanation: Chaining factorials 2
Rather than refreshing, let's just print the chains.
End of explanation
"""

known_numbers = list()
lo = 10
hi = 100000

for number in known_numbers:
    print('Curious number: {}'.format(number))
for number in range(lo, hi):
    if F(M(number)) == number:
        print('Curious number: {}'.format(number))
        known_numbers.append(number)
print('hi={}'.format(hi))
print('sum of known numbers: {}'.format(sum(known_numbers)))
lo = hi
hi += 100000

"""
Explanation: Let's brute force again
The first cell initializes the search space. known_numbers records the numbers we have found so far. lo and hi are updated by the second cell after searching each region.
End of explanation
"""
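The "when to stop" question raised at the start can be settled with a digit-count bound, and a scan then confirms the result. A sketch using the standard library's `math.factorial` instead of scipy:

```python
from math import factorial

# Why the search can stop: a d-digit number is at least 10**(d-1), while its
# digit-factorial sum is at most d * 9!. For d >= 8, d * 9! < 10**(d-1), so no
# curious number has more than 7 digits, and 7 * 9! = 2540160 bounds the search.
assert 7 * factorial(9) == 2540160
assert all(d * factorial(9) < 10 ** (d - 1) for d in range(8, 20))

def digit_factorial_sum(n):
    s = 0
    while n:
        n, r = divmod(n, 10)
        s += factorial(r)
    return s

# Scanning the notebook's range reproduces both curious numbers; extending the
# scan up to the 2540160 bound adds no further hits.
curious = [n for n in range(10, 100001) if digit_factorial_sum(n) == n]
print(curious, sum(curious))
```

So the answer to Problem 34 is the sum of 145 and 40585.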
# Notebook: tensorflow/docs-l10n, site/ja/lite/tutorials/model_maker_image_classification.ipynb (license: apache-2.0)
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""

!pip install -q tflite-model-maker

"""
Explanation: Image classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_image_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/tutorials/model_maker_image_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/tutorials/model_maker_image_classification.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lite/tutorials/model_maker_image_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
<td><a href="https://tfhub.dev/"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub models</a></td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting a TensorFlow neural-network model and converting it for particular input data when deploying the model in an on-device ML application.
This notebook shows an end-to-end example using this Model Maker library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device.
Prerequisites
To run this example, first install several required packages, including the Model Maker package from its GitHub repository.
End of explanation
"""

import os

import numpy as np

import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

import matplotlib.pyplot as plt

"""
Explanation: Import the required packages.
End of explanation
"""

image_path = tf.keras.utils.get_file(
      'flower_photos.tgz',
      'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
      extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')

"""
Explanation: Simple end-to-end example
Get the data path
Let's get some images to play with for this simple end-to-end example. More data could achieve better accuracy, but a few hundred images are enough to get started with Model Maker.
End of explanation
"""

data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)

"""
Explanation: Replace image_path above with your own image folder. If you are uploading data to Colab, you can use the upload button circled in red in the image below. Try uploading a zip file and unzipping it. The root file path is the current path.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_image_classification.png" alt="Upload File">
If you prefer not to upload your images to the cloud, you can run the library locally by following the guide on GitHub.
Run the example
As shown below, the example consists of 4 lines of code, each of which represents one step of the overall process.
Step 1. Load input data specific to an on-device ML app, and split it into training data and testing data.
End of explanation
"""

model = image_classifier.create(train_data)

"""
Explanation: Step 2. Customize the TensorFlow model.
End of explanation
"""

loss, accuracy = model.evaluate(test_data)

"""
Explanation: Step 3. Evaluate the model.
End of explanation
"""

model.export(export_dir='.')

"""
Explanation: Step 4. Export the TensorFlow Lite model.
Here we export a TensorFlow Lite model with metadata, which provides a standard for model descriptions. The label file is embedded in the metadata. The default post-training quantization technique is full-integer quantization for the image classification task.
You can download it from the left sidebar, the same way as the upload part.
End of explanation
"""

image_path = tf.keras.utils.get_file(
      'flower_photos.tgz',
      'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
      extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')

"""
Explanation: After these simple steps, you can use the TensorFlow Lite model file and label file in on-device applications such as the image classification reference app.
Detailed process
Currently, several models such as the EfficientNet-Lite* models, MobileNetV2 and ResNet50 are supported as pre-trained models for image classification. But this library is very flexible, and a newly trained model can be added to it with just a few lines of code.
The following walkthrough goes through this end-to-end example step by step in more detail.
Step 1: Load input data specific to an on-device ML app
The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.
The dataset has the following directory structure:
<pre>&lt;b&gt;flower_photos&lt;/b&gt;
|__ &lt;b&gt;daisy&lt;/b&gt;
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ &lt;b&gt;dandelion&lt;/b&gt;
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ &lt;b&gt;roses&lt;/b&gt;
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ &lt;b&gt;sunflowers&lt;/b&gt;
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ &lt;b&gt;tulips&lt;/b&gt;
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...
</pre>
End of explanation
"""

data = DataLoader.from_folder(image_path)

"""
Explanation: Load the data with the DataLoader class.
The from_folder() method can load data from a folder. It assumes that image data of the same class are in the same subdirectory, and that the subfolder name is the class name. Currently, JPEG-encoded and PNG-encoded images are supported.
End of explanation
"""

train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

"""
Explanation: Split it into training data (80%), validation data (10%, optional) and testing data (10%).
End of explanation
"""

plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.gen_dataset().unbatch().take(25)):
  plt.subplot(5,5,i+1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image.numpy(), cmap=plt.cm.gray)
  plt.xlabel(data.index_to_label[label.numpy()])
plt.show()

"""
Explanation: Show 25 image examples with their labels.
End of explanation
"""

model = image_classifier.create(train_data, validation_data=validation_data)

"""
Explanation: Step 2: Customize the TensorFlow model
Create a custom image classifier model from the loaded data. The default model is EfficientNet-Lite0.
End of explanation
"""

model.summary()

"""
Explanation: Have a closer look at the model structure.
End of explanation
"""

loss, accuracy = model.evaluate(test_data)

"""
Explanation: Step 3: Evaluate the customized model
Evaluate the result of the model and obtain its loss and accuracy.
End of explanation
"""

# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
  if val1 == val2:
    return 'black'
  else:
    return 'red'

# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided in the "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.gen_dataset().unbatch().take(100)):
  ax = plt.subplot(10, 10, i+1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image.numpy(), cmap=plt.cm.gray)

  predict_label = predicts[i][0][0]
  color = get_label_color(predict_label,
                          test_data.index_to_label[label.numpy()])
  ax.xaxis.label.set_color(color)
  plt.xlabel('Predicted: %s' % predict_label)
plt.show()

"""
Explanation: We can plot the predicted results for 100 test images. Predicted labels in red are the wrong predictions, while the others are correct.
End of explanation
"""

model.export(export_dir='.')

"""
Explanation: If the accuracy doesn't meet the app requirement, you could refer to Advanced usage and explore alternatives such as changing to a larger model or adjusting the retraining parameters.
Step 4: Export to a TensorFlow Lite model
Convert the trained model to the TensorFlow Lite model format with metadata so that it can later be used in an on-device ML application. The label file and the vocab file are embedded in the metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, the model size is an important factor. Therefore, it is recommended to apply model quantization to make it smaller and potentially run faster. The default post-training quantization technique is full-integer quantization for the image classification task.
End of explanation
"""

model.export(export_dir='.', export_format=ExportFormat.LABEL)

"""
Explanation: See the image classification example application and guide for how to integrate the TensorFlow Lite model into a mobile app.
This model can be integrated into an Android or iOS app using the ImageClassifier API of the TensorFlow Lite Task Library.
The allowed export formats are the following:
ExportFormat.TFLITE
ExportFormat.LABEL
ExportFormat.SAVED_MODEL
By default, it just exports the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, export only the label file as follows:
End of explanation
"""

model.evaluate_tflite('model.tflite', test_data)

"""
Explanation: You can also evaluate the tflite model with the evaluate_tflite method.
End of explanation
"""

config = QuantizationConfig.for_float16()

"""
Explanation: Advanced usage
The create function plays a critical role in this library. It uses transfer learning with a pre-trained model, as in the tutorial.
The create function contains the following steps:
Split the data into training, validation and testing data according to the parameters validation_ratio and test_ratio. The default values of validation_ratio and test_ratio are 0.1 and 0.1.
Download an Image Feature Vector from TensorFlow Hub as the base model. The default pre-trained model is EfficientNet-Lite0.
Add a classifier head with a dropout layer between the head layer and the pre-trained model, using dropout_rate. The default dropout_rate is the default dropout_rate value of make_image_classifier_lib from TensorFlow Hub.
Preprocess the raw input data. Currently, the preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. EfficientNet-Lite0 has an input scale of [0, 1] and an input image size of [224, 224, 3].
Feed the data into the classifier model. By default, training parameters such as training epochs, batch size, learning rate and momentum are the default values of make_image_classifier_lib from TensorFlow Hub. Only the classifier head is trained.
In this section, several advanced topics are described, including switching to a different image classification model and changing the training hyperparameters.
Customize post-training quantization on the TensorFlow Lite model
Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with little degradation in model accuracy. Thus, it is widely used to optimize models.
The Model Maker library applies a default post-training quantization technique when exporting the model. To customize post-training quantization, Model Maker also supports multiple post-training quantization options via QuantizationConfig. Let's take float16 quantization as an example. First, define the quantization config.
End of explanation
"""

model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)

"""
Explanation: Then export the TensorFlow Lite model with that configuration.
End of explanation
"""

model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data)

"""
Explanation: In Colab, you can download the model named model_fp16.tflite from the left sidebar, the same way as in the upload part described above.
Change the model
Change to a model supported by this library
This library supports the EfficientNet-Lite models, MobileNetV2 and ResNet50. EfficientNet-Lite is a family of image classification models that achieve state-of-the-art accuracy and are suitable for edge devices. The default model is EfficientNet-Lite0.
We can switch the model to MobileNetV2 by setting the parameter model_spec of the create method to MobileNet_v2_spec.
End of explanation
"""

loss, accuracy = model.evaluate(test_data)

"""
Explanation: Evaluate the newly trained MobileNetV2 model and check the accuracy and loss on the testing data.
End of explanation
"""

inception_v3_spec = image_classifier.ModelSpec(
    uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]

"""
Explanation: Change to a model in TensorFlow Hub
Further, we can also switch to other new models that input an image and output a feature vector in TensorFlow Hub format.
Taking the Inception V3 model as an example, we can define inception_v3_spec, an object of image_classifier.ModelSpec containing the specification of the Inception V3 model.
We need to specify the model name name and the URL uri of the TensorFlow Hub model. Meanwhile, the default value of input_image_shape is [224, 224]. We need to change it to [299, 299] for the Inception V3 model.
End of explanation
"""

model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)

"""
Explanation: Then, by setting the parameter model_spec to inception_v3_spec in the create method, we can retrain the Inception V3 model.
The remaining steps are exactly the same, and we get a customized InceptionV3 TensorFlow Lite model in the end.
Change to your own custom model
If you would like to use a custom model that is not in TensorFlow Hub, you should create a ModelSpec and export it to TensorFlow Hub.
Then start to define an ImageModelSpec object like the process above.
Change the training hyperparameters
We can also change the training hyperparameters, such as epochs, dropout_rate and batch_size, that affect the model accuracy. The model parameters you can adjust are:
epochs: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting.
dropout_rate: the dropout rate; avoids overfitting.
batch_size: number of samples to use in one training step. None by default.
validation_data: validation data. If None, the validation process is skipped. None by default.
train_whole_model: if true, the Hub module is trained together with the classification layer on top. Otherwise, only the top classification layer is trained. None by default.
learning_rate: base learning rate. None by default.
momentum: a Python float forwarded to the optimizer. Only used when use_hub_library is True. None by default.
shuffle: boolean, whether the data should be shuffled. False by default.
use_augmentation: boolean, whether to use data augmentation for preprocessing. False by default.
use_hub_library: boolean, whether to use make_image_classifier_lib from TensorFlow Hub to retrain the model. This training pipeline could achieve better performance for a complicated dataset with many categories. True by default.
warmup_steps: number of warmup steps for the warmup schedule on the learning rate. If None, the default warmup_steps is used, which is the total training steps in two epochs. Only used when use_hub_library is False. None by default.
model_dir: optional, the location of the model checkpoint files. Only used when use_hub_library is False. None by default.
Parameters which are None by default, such as <code>epochs</code>, will get the concrete default parameters in <a href="https://github.com/tensorflow/hub/blob/02ab9b7d3455e99e97abecf43c5d598a5528e20c/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L54">make_image_classifier_lib</a> from the TensorFlow Hub library or <a href="https://github.com/tensorflow/examples/blob/f0260433d133fd3cea4a920d1e53ecda07163aee/tensorflow_examples/lite/model_maker/core/task/train_image_classifier_lib.py#L61">train_image_classifier_lib</a>.
For example, we can train with more epochs.
End of explanation
"""

model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)

"""
Explanation: Evaluate the newly retrained model with 10 training epochs.
End of explanation
"""

loss, accuracy = model.evaluate(test_data)

"""
Explanation: End of explanation
"""
# Notebook: kanhua/pypvcell, demos/efficiency_vs_bandgap.ipynb (license: apache-2.0)
%matplotlib inline
%load_ext autoreload
%autoreload 2

import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from pypvcell.solarcell import SQCell,MJCell,DBCell
from pypvcell.illumination import Illumination
from pypvcell.photocurrent import gen_step_qe

font = {'size' : 12}
matplotlib.rc('font', **font)  # pass in the font dict as kwargs

"""
Explanation: Modeling Solar Cells with Detailed Balance Model
This notebook demonstrates how to simulate solar cells with the detailed balance model.
End of explanation
"""
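As background, the detailed-balance computation that a class like SQCell encapsulates can be sketched in a self-contained way. The sketch below approximates the sun as a 6000 K blackbody diluted by the solar solid angle rather than the tabulated AM1.5 spectra that pypvcell uses, so every number it produces is illustrative only:

```python
import math

# A rough detailed-balance (Shockley-Queisser) sketch. Assumptions: 6000 K
# blackbody sun, 300 K cell, radiative recombination only, step absorption.
K_B = 8.617333e-5   # Boltzmann constant (eV/K)
H = 4.135668e-15    # Planck constant (eV s)
C = 2.998e8         # speed of light (m/s)
F_SUN = 2.16e-5     # solid-angle dilution factor of the solar disk

def photon_flux(e_lo, t, dilution=1.0, de=0.001, e_hi=6.0):
    """Blackbody photon flux above e_lo (per m^2 s) and its power (eV per m^2 s)."""
    flux = power = 0.0
    prefac = 2 * math.pi / (H ** 3 * C ** 2) * dilution
    e = e_lo
    while e < e_hi:
        phi = prefac * e ** 2 / math.expm1(e / (K_B * t))
        flux += phi * de
        power += phi * e * de
        e += de
    return flux, power

def sq_limit(eg, t_sun=6000.0, t_cell=300.0):
    """Detailed-balance efficiency limit for a band gap eg (in eV)."""
    absorbed, _ = photon_flux(eg, t_sun, dilution=F_SUN)   # generation
    _, p_in = photon_flux(0.01, t_sun, dilution=F_SUN)     # incident power
    emitted, _ = photon_flux(eg, t_cell)                   # emission at V = 0
    best, v = 0.0, 0.0
    while v < eg:
        j = absorbed - emitted * math.expm1(v / (K_B * t_cell))
        best = max(best, j * v)   # the charge q cancels in the efficiency ratio
        v += 0.001
    return best / p_in

eta_gaas = sq_limit(1.42)   # roughly 0.3 under these assumptions
```

The notebook's own cells get the same qualitative shape, a single-peaked efficiency-versus-band-gap curve, from the measured AM1.5 spectra instead.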
ge_cell=SQCell(eg=0.67,cell_T=293) for i,teg in enumerate(top_eg): for j,beg in enumerate(mid_eg): tc=SQCell(eg=teg,cell_T=293) # Set up top cell bc=SQCell(eg=beg,cell_T=293) # Set up bottom cell mj=MJCell([tc, bc]) # Make multijunction cell by "streaming" the 1J cells mj.set_input_spectrum(conc_ill) # Set up the illumination eta[i,j]=mj.get_eta() # Store the calculated efficiency in an array return top_eg,mid_eg,eta ill=Illumination("AM1.5g",concentration=1) top_eg,mid_eg,eta=get_2j_eta(ill) levels=np.unique(np.concatenate((np.arange(0.12,0.7,step=0.04),[0.42,0.44]))) cs=plt.contour(mid_eg,top_eg,eta,levels) plt.clabel(cs,levels, fontsize=12, inline=1,fmt="%.2f") plt.xlabel("middle cell band gap (eV)") plt.ylabel("top cell band gap (eV)") plt.tight_layout() plt.savefig("2J_one_sun.pdf") eta.shape print(np.max(eta)) nar=np.argmax(eta) nar//eta.shape[0] print(top_eg[nar//eta.shape[1]]) print(mid_eg[np.mod(nar,eta.shape[1])]) def get_1j_eta_on_si(conc_ill): top_eg=np.linspace(1.3,2.1,num=50) # Set up range of top cell band gaps eta=np.zeros(50) # Initialize an array for storing efficiencies for i,teg in enumerate(top_eg): tc=SQCell(eg=teg,cell_T=293) # Set up top cell bc=SQCell(eg=1.1,cell_T=293) # Set up bottom cell mj=MJCell([tc, bc]) # Make multijunction cell by "streaming" the 1J cells mj.set_input_spectrum(conc_ill) # Set up the illumination eta[i]=mj.get_eta() # Store the calculated efficiency in an array return top_eg,eta ill=Illumination("AM1.5g",concentration=1) top_eg,eta=get_1j_eta_on_si(ill) plt.plot(top_eg,eta) nar=np.argmax(eta) print(top_eg[nar]) print(eta[nar]) """ Explanation: Calculate dual-junction cell efficiency contour End of explanation """ def get_3j_eta(conc_ill): top_eg=np.linspace(1.3,2.1,num=10) # Set up range of top cell band gaps mid_eg=np.linspace(0.9,1.5,num=50)# Set up range of middle cell band gaps eta=np.zeros((10,50)) # Initialize an array for storing efficiencies ge_cell=SQCell(eg=0.67,cell_T=293) for i,teg in enumerate(top_eg): 
for j,beg in enumerate(mid_eg): tc=SQCell(eg=teg,cell_T=293) # Set up top cell bc=SQCell(eg=beg,cell_T=293) # Set up bottom cell mj=MJCell([tc, bc,ge_cell]) # Make multijunction cell by "streaming" the 1J cells mj.set_input_spectrum(conc_ill) # Set up the illumination eta[i,j]=mj.get_eta() # Store the calculated efficiency in an array return top_eg,mid_eg,eta ill=Illumination("AM1.5d",concentration=240) top_eg,mid_eg,eta=get_3j_eta(ill) levels=np.unique(np.concatenate((np.arange(0.12,0.7,step=0.04),[0.44,0.46,0.48,0.50,0.54,0.56]))) cs=plt.contour(mid_eg,top_eg,eta,levels) plt.clabel(cs,levels, fontsize=12, inline=1,fmt="%.2f") plt.xlabel("middle cell band gap (eV)") plt.ylabel("top cell band gap (eV)") plt.tight_layout() plt.savefig("3J_2d_240suns.png",dpi=600) tc=SQCell(eg=1.87,cell_T=293) # Set up top cell bc=SQCell(eg=1.42,cell_T=293) # Set up bottom cell ge_cell=SQCell(eg=0.67,cell_T=293) mj=MJCell([tc, bc,ge_cell]) # Make multijunction cell by "streaming" the 1J cells mj.set_input_spectrum(ill) print(mj.get_eta()) """ Explanation: Calculate triple-junction cell with Germanium substrate (X/Y/Ge) This calculation is similar to previous one, but this time we calculate the optimal band gap combinations of top and middle cell of the three-junction solar cell, with germanium cell as the bottom junction. End of explanation """ # The AM1.5d spectrum has to be normalized to 1000 W/m^2 input_ill=Illumination("AM1.5d",concentration=1000/918) sq_cell = SQCell(eg=1.13, cell_T=293, n_c=3.5, n_s=1) sq_cell.set_input_spectrum(input_spectrum=input_ill) sq_cell.get_eta() """ Explanation: Compare Table II. 
of the EtaOpt paper Condition: AM1.5d, Eg=1.13eV, T=300K End of explanation """ ill=Illumination("AM1.5d",concentration=462000*1000/918) mj=MJCell([SQCell(eg=1.84,cell_T=293,n_c=1,n_s=1), SQCell(eg=1.16,cell_T=293,n_c=1,n_s=1), SQCell(eg=0.69,cell_T=293,n_c=1,n_s=1)]) mj.set_input_spectrum(input_spectrum=ill) mj.get_eta() """ Explanation: Condition: 1.84/1.16/0.69 End of explanation """ input_ill=Illumination("AM1.5g",concentration=1) top_eg=np.linspace(1.6,2,num=100) # Set up range of top cell band gaps eta=np.zeros(100) # Initialize an array for storing efficiencies jsc_ratio=np.zeros_like(eta) si_cell=SQCell(eg=1.12,cell_T=293,n_c=3.5,n_s=1) for i,teg in enumerate(top_eg): #qe=gen_step_qe(teg,1) #tc=DBCell(qe,rad_eta=1,T=293,n_c=3.5,n_s=1) # Set up top cell tc=SQCell(teg,cell_T=293) mj=MJCell([tc, si_cell]) # Make multijunction cell by "streaming" the 1J cells mj.set_input_spectrum(input_ill) # Set up the illumination eta[i]=mj.get_eta() # Store the calculated efficiency in an array jsc_a=mj.get_subcell_jsc() jsc_ratio[i]=jsc_a[0]/jsc_a[1] #print(jsc_a) plt.plot(top_eg,eta) plt.xlabel("band gap of top cell (eV)") plt.ylabel("efficiency") plt.savefig("sj_on_si.pdf") """ Explanation: The result does not seem to match very well (against 67%). Calculate optimal band gap on silicon cell (X/1.1eV) Find the optimal band gap on silicon subcell End of explanation """ top_eg[np.argmax(eta)] """ Explanation: Optimal top cell band gap End of explanation """ np.max(eta) """ Explanation: Maximum efficiency End of explanation """
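All of the SQCell efficiencies above come from the same detailed-balance recipe: an ideal-diode J-V curve whose saturation current is set by the radiative limit for the given band gap. Below is a minimal standalone sketch of that J-V/efficiency logic; the `jsc` and `j0` values are illustrative assumptions for a roughly 1.42 eV cell under one sun, not numbers computed by pypvcell:

```python
import numpy as np

# Physical constants
q = 1.602e-19      # elementary charge (C)
k = 1.381e-23      # Boltzmann constant (J/K)

def sq_iv(jsc, j0, v, T=293.0):
    """Ideal-diode J-V: J = J0*(exp(qV/kT) - 1) - Jsc (power-producing J < 0)."""
    return j0 * (np.exp(q * v / (k * T)) - 1.0) - jsc

# Illustrative numbers, assumed rather than computed from a spectrum
jsc = 320.0        # photogenerated current density (A/m^2)
j0 = 2e-17         # radiative saturation current density (A/m^2)

v = np.linspace(0.0, 1.2, 1000)
j = sq_iv(jsc, j0, v)
p = -j * v                          # extracted power density (W/m^2)
v_mpp = v[np.argmax(p)]             # maximum-power-point voltage
eta = p.max() / 1000.0              # efficiency vs. 1000 W/m^2 input

print(v_mpp, eta)
```

With these assumed constants the sketch lands near the GaAs-like numbers plotted earlier (open-circuit voltage around 1.1 V, efficiency around 32%).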
jrieke/machine-intelligence-2
sheet11/sheet11.ipynb
mit
from __future__ import division, print_function import matplotlib.pyplot as plt %matplotlib inline import scipy.stats import numpy as np from scipy.ndimage import imread import sys """ Explanation: Machine Intelligence II - Team MensaNord Sheet 11 Nikolai Zaki Alexander Moore Johannes Rieke Georg Hoelger Oliver Atanaszov End of explanation """ # import image img_orig = imread('testimg.jpg').flatten() print("$img_orig") print("shape: \t\t", img_orig.shape) # = vector print("values: \t from ", img_orig.min(), " to ", img_orig.max(), "\n") # "img" holds 3 vectors img = np.zeros((3,img_orig.shape[0])) print("$img") print("shape: \t\t",img.shape) std = [0, 0.05, 0.1] for i in range(img.shape[1]): # normalize => img[0] img[0][i] = img_orig[i] / 255 # gaussian noise => img[1] img[2] img[1][i] = img[0][i] + np.random.normal(0, std[1]) img[2][i] = img[0][i] + np.random.normal(0, std[2]) print(img[:, 0:4]) """ Explanation: Exercise 1 Load the data into a vector and normalize it such that the values are between 0 and 1. Create two new datasets by adding Gaussian noise with zero mean and standard deviation σ N ∈ {0.05, 0.1}. 
End of explanation """ # histograms fig, axes = plt.subplots(1, 3, figsize=(15, 5)) for i, ax in enumerate(axes.flatten()): plt.sca(ax) plt.hist(img[i], 100, normed=1, alpha=0.75) plt.xlim(-0.1, 1.1) plt.ylim(0, 18) plt.xlabel("value") plt.ylabel("probability") plt.title('img[{}]'.format(i)) # divide probablity space in 100 bins nbins = 100 bins = np.linspace(0, 1, nbins+1) # holds data equivalent to shown histograms (but cutted from 0 to 1) elementsPerBin = np.zeros((3,nbins)) for i in range(3): ind = np.digitize(img[i], bins) elementsPerBin[i] = [len(img[i][ind == j]) for j in range(nbins)] # counts number of elements from bin '0' to bin 'j' sumUptoBinJ = np.asarray([[0 for i in range(nbins)] for i in range(3)]) for i in range(3): for j in range(nbins): sumUptoBinJ[i][j] = np.sum(elementsPerBin[i][0:j+1]) # plot plt.figure(figsize=(15, 5)) for i in range(3): plt.plot(sumUptoBinJ[i], '.-') plt.legend(['img[0]', 'img[1]', 'img[2]']) plt.xlabel('bin') plt.ylabel('empirical distribution functions'); """ Explanation: Create a figure showing the 3 histograms (original & 2 sets of noise corrupted data – use enough bins!). In an additional figure, show the three corresponding empirical distribution functions in one plot. End of explanation """ def H(vec, h): """ (rectangular) histogram kernel function """ vec = np.asarray(vec) return np.asarray([1 if abs(x)<.5 else 0 for x in vec]) """ Explanation: Take a subset of P = 100 observations and estimate the probability density p̂ of intensities with a rectangular kernel (“gliding window”) parametrized by window width h. Plot the estimates p̂ resulting for (e.g. 
10) different samples of size P End of explanation """ def P_est(x, h, data, kernel = H): """ returns the probability that data contains values @ (x +- h/2) """ n = 1 #= data.shape[1] #number of dimensions (for multidmensional data) p = len(data) return 1/(h**n)/p*np.sum(kernel((data - x)/h, h)) # take 10 data sets with 100 observations (indexes 100k to 101k) # nomenclature: data_3(3, 10, 100) holds 3 times data(10, 100) P = 100 offset = int(100000) data_3 = np.zeros((3, 10,P)) for j in range(3): for i in range(10): data_3[j][i] = img[j][offset+i*P:offset+(i+1)*P] print(data_3.shape) # calculate probability estimation for (center +- h/2) on the 10 data sets h = .15 nCenters = 101 Centers = np.linspace(0,1,nCenters) fig, ax = plt.subplots(2,5,figsize=(15,6)) ax = ax.ravel() for i in range(10): ax[i].plot([P_est(center,h,data_3[0][i]) for center in Centers]) """ Explanation: $P(\underline{x}) = \frac{1}{h^n} \frac{1}{p} \Sigma_{\alpha=1}^{p} H(\frac{\underline{x} - \underline{x}^{(\alpha)}}{h})$ End of explanation """ testdata = img[0][50000:55000] # calculate average negative log likelihood for def avg_NegLL(data, h, kernel=H): sys.stdout.write(".") average = 0 for i in range(10): L_prob = [np.log(P_est(x,h,data[i],kernel)) for x in testdata] negLL = -1*np.sum(L_prob) average += negLL average /= 10 return average """ Explanation: Calculate the negative log-likelihood per datapoint of your estimator using 5000 samples from the data not used for the density estimation (i.e. the “test-set”). Get the average of the negative log-likelihood over the 10 samples. 
$P({\underline{x}^{(\alpha)}};\underline{w}) = - \Sigma_{\alpha=1}^{p} ln P(\underline{x}^{(\alpha)};\underline{w})$ End of explanation """ hs = np.linspace(0.001, 0.999, 20) def plot_negLL(data_3=data_3, kernel=H): fig = plt.figure(figsize=(12,8)) for j in range(3): print("calc data[{}]".format(j)) LLs = [avg_NegLL(data_3[j],h,kernel=kernel) for h in hs] plt.plot(hs,LLs) print() plt.legend(['img[0]', 'img[1]', 'img[2]']) plt.show() plot_negLL() """ Explanation: 2) Repeat this procedure (without plotting) for a sequence of kernel widths h to get the mean log likelihood (averaged over the different samples) resulting for each value of h. (a) Apply this procedure to all 3 datasets (original and the two noise-corruped ones) to make a plot showing the obtained likelihoods (y-axis) vs. kernel width h (x-axis) as one line for each dataset. End of explanation """ P = 500 data_3b = np.zeros((3, 10,P)) for j in range(3): for i in range(10): data_3b[j][i] = img[j][offset+i*P:offset+(i+1)*P] plot_negLL(data_3=data_3b) """ Explanation: not plotted points have value = inf because: $negLL = - log( \Pi_\alpha P(x^\alpha,w) )$ so if one single $P(x^\alpha,w) = 0$ occurs (x has 5000 elements) the result is -log(0)=inf (not defined) this only occurs with the histogram kernel. (b) Repeat the previous step (LL & plot) for samples of size P = 500. End of explanation """ def Gaussian(x,h): """ gaussian kernel function """ return np.exp(-x**2/h/2)/np.sqrt(2*np.pi*h) fig, ax = plt.subplots(2,5,figsize=(15,6)) h = .15 ax = ax.ravel() for i in range(10): ax[i].plot([P_est(center,h,data_3[0][i],kernel=Gaussian) for center in Centers]) hs = np.linspace(0.001, 0.4, 20) plot_negLL(kernel=Gaussian) plot_negLL(data_3=data_3b, kernel=Gaussian) """ Explanation: (c) Repeat the previous steps (a & b) for the Gaussian kernel with σ^2 = h. 
End of explanation """ M = 2 w1, w2 = [2,2], [1,1] # means sigma2 = 0.2 # standard deviations N = 100 P1, P2 = 2/3, 1/3 def create_data(sigma1=0.7): X = np.zeros((N, 2)) which_gaussian = np.zeros(N) for n in range(N): if np.random.rand() < P1: # sample from first Gaussian X[n] = np.random.multivariate_normal(w1, np.eye(len(w1)) * sigma1**2) which_gaussian[n] = 0 else: # sample from second Gaussian X[n] = np.random.multivariate_normal(w2, np.eye(len(w2)) * sigma2**2) which_gaussian[n] = 1 return X, which_gaussian sigma1 = 0.7 X, which_gaussian = create_data(sigma1) def plot_data(X, which_gaussian, centers, stds): plt.scatter(*X[which_gaussian == 0].T, c='r', label='Cluster 1') plt.scatter(*X[which_gaussian == 1].T, c='b', label='Cluster 2') plt.plot(centers[0][0], centers[0][1], 'k+', markersize=15, label='Centers') plt.plot(centers[1][0], centers[1][1], 'k+', markersize=15) plt.gca().add_artist(plt.Circle(centers[0], stds[0], ec='k', fc='none')) plt.gca().add_artist(plt.Circle(centers[1], stds[1], ec='k', fc='none')) plt.xlabel('x1') plt.ylabel('x2') plt.legend() plot_data(X, which_gaussian, [w1, w2], [sigma1, sigma2]) plt.title('Ground truth') """ Explanation: Exercise 2 1.1 Create dataset End of explanation """ from scipy.stats import multivariate_normal def variance(X): """Calculate a single variance value for the vectors in X.""" mu = X.mean(axis=0) return np.mean([np.linalg.norm(x - mu)**2 for x in X]) def run_expectation_maximization(X, w=None, sigma_squared=None, verbose=False): # Initialization. 
P_prior = np.ones(2) * 1 / M P_likelihood = np.zeros((N, M)) P_posterior = np.zeros((M, N)) mu = X.mean(axis=0) # mean of the original data var = variance(X) # variance of the original data if w is None: w = np.array([mu + np.random.rand(M) - 0.5, mu + np.random.rand(M) - 0.5]) if sigma_squared is None: sigma_squared = np.array([var + np.random.rand() - 0.5,var + np.random.rand() - 0.5]) #sigma_squared = np.array([var, var]) if verbose: print('Initial centers:', w) print('Initial variances:', sigma_squared) print() print() theta = 0.001 distance = np.inf step = 0 # Optimization loop. while distance > theta: #for i in range(1): step += 1 if verbose: print('Step', step) print('-'*50) # Store old parameter values to calculate distance later on. w_old = w.copy() sigma_squared_old = sigma_squared.copy() P_prior_old = P_prior.copy() if verbose: print('Distances of X[0] to proposed centers:', np.linalg.norm(X[0] - w[0]), np.linalg.norm(X[0] - w[1])) # E-Step: Calculate likelihood for each data point. for (alpha, q), _ in np.ndenumerate(P_likelihood): P_likelihood[alpha, q] = multivariate_normal.pdf(X[alpha], w[q], sigma_squared[q]) if verbose: print('Likelihoods of X[0]:', P_likelihood[0]) # E-Step: Calculate assignment probabilities (posterior) for each data point. for (q, alpha), _ in np.ndenumerate(P_posterior): P_posterior[q, alpha] = (P_likelihood[alpha, q] * P_prior[q]) / np.sum([P_likelihood[alpha, r] * P_prior[r] for r in range(M)]) if verbose: print('Assignment probabilities of X[0]:', P_posterior[:, 0]) print() distance = 0 # M-Step: Calculate new parameter values. 
for q in range(M): w[q] = np.sum([P_posterior[q, alpha] * X[alpha] for alpha in range(N)], axis=0) / np.sum(P_posterior[q]) #print(np.sum([P_posterior[q, alpha] * X[alpha] for alpha in range(N)], axis=0)) #print(np.sum(P_posterior[q])) w_distance = np.linalg.norm(w[q] - w_old[q]) if verbose: print('Distance of centers:', w_distance) distance = max(distance, w_distance) sigma_squared[q] = 1 / M * np.sum([np.linalg.norm(X[alpha] - w_old[q])**2 * P_posterior[q, alpha] for alpha in range(N)]) / np.sum(P_posterior[q]) sigma_squared_distance = np.abs(sigma_squared[q] - sigma_squared_old[q]) if verbose: print('Distance of variances:', sigma_squared_distance) distance = max(distance, sigma_squared_distance) P_prior[q] = np.mean(P_posterior[q]) P_prior_distance = np.abs(P_prior[q] - P_prior_old[q]) if verbose: print('Distance of priors:', P_prior_distance) distance = max(distance, P_prior_distance) if verbose: print('Maximum distance:', distance) print() print('New centers:', w) print('New variances:', sigma_squared) print('New priors:', P_prior) print('='*50) print() which_gaussian_EM = P_posterior.argmax(axis=0) return which_gaussian_EM, w, np.sqrt(sigma_squared), step which_gaussian_em, cluster_centers_em, cluster_stds_em, num_steps_em = run_expectation_maximization(X, verbose=True) plot_data(X, which_gaussian_em, cluster_centers_em, cluster_stds_em) plt.title('Predicted by Expectation-Maximization') """ Explanation: 1.2 Run Expectation-Maximization algorithm See slide 18 of the lecture for an outline of the algorithm. 
End of explanation """ from sklearn.cluster import KMeans def run_k_means(X): km = KMeans(2) km.fit(X) which_gaussian_km = km.predict(X) cluster_stds = np.array([np.sqrt(variance(X[which_gaussian_km == 0])), np.sqrt(variance(X[which_gaussian_km == 1]))]) return which_gaussian_km, km.cluster_centers_, cluster_stds which_gaussian_km, cluster_centers_km, cluster_stds_km = run_k_means(X) plot_data(X, which_gaussian_km, cluster_centers_km, cluster_stds_km) plt.title('Predicted by K-Means') """ Explanation: K-means clusters the data points by establishing a straight separation line. This cannot fully capture the nature of the data, e.g. the points around the lower left Gaussian, which actually belong to the upper right Gaussian.
1.4 Initialize EM algorithm with cluster parameters from K-Means End of explanation """ sigma1s = [0.1, 0.5, 1, 1.5] fig, axes = plt.subplots(len(sigma1s), 3, figsize=(15, 15), sharex=True, sharey=True) for i, (sigma1, horizontal_axes) in enumerate(zip(sigma1s, axes)): X, which_gaussian = create_data(sigma1) plt.sca(horizontal_axes[0]) plot_data(X, which_gaussian, [w1, w2], [sigma1, sigma2]) if i == 0: plt.title('Ground truth') which_gaussian_em, cluster_centers_em, cluster_stds_em, num_steps_em = run_expectation_maximization(X) plt.sca(horizontal_axes[1]) plot_data(X, which_gaussian_em, cluster_centers_em, cluster_stds_em) if i == 0: plt.title('Predicted by Expectation-Maximization') which_gaussian_km, cluster_centers_km, cluster_stds_km = run_k_means(X) plt.sca(horizontal_axes[2]) plot_data(X, which_gaussian_km, cluster_centers_km, cluster_stds_km) if i == 0: plt.title('Predicted by K-Means') """ Explanation: 1.5 Repeat analysis for different $\sigma_1$ values End of explanation """
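The double loops in `run_expectation_maximization` above are written for readability; the E-step can be computed for all points at once with NumPy broadcasting. A minimal vectorized sketch on synthetic data (the variable names here are hypothetical and independent of the notebook's):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.7, (67, 2)),
               rng.normal([1, 1], 0.2, (33, 2))])   # synthetic 2-D data

w = np.array([[2.0, 2.0], [1.0, 1.0]])   # current component means, shape (M, 2)
var = np.array([0.5, 0.05])              # current isotropic variances, shape (M,)
prior = np.array([0.5, 0.5])             # current mixing weights

# E-step, vectorized: isotropic 2-D Gaussian density of every point under every
# component. sq_dist[alpha, q] = ||x_alpha - w_q||^2 via broadcasting.
sq_dist = ((X[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)   # shape (N, M)
lik = np.exp(-sq_dist / (2 * var)) / (2 * np.pi * var)         # shape (N, M)
post = lik * prior                        # Bayes numerator, prior broadcasts
post /= post.sum(axis=1, keepdims=True)   # responsibilities, rows sum to 1
```

Each row of `post` plays the role of `P_posterior[:, alpha]` in the loop version: the assignment probabilities of one data point across the two components.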
joegomes/deepchem
examples/broken/solubility.ipynb
mit
%autoreload 2 %pdb off from deepchem.utils.save import load_from_disk dataset_file= "../datasets/delaney-processed.csv" dataset = load_from_disk(dataset_file) print("Columns of dataset: %s" % str(dataset.columns.values)) print("Number of examples in dataset: %s" % str(dataset.shape[0])) """ Explanation: Written by Bharath Ramsundar and Evan Feinberg Copyright 2016, Stanford University Computationally predicting molecular solubility is useful for drug-discovery. In this tutorial, we will use the deepchem library to fit a simple statistical model that predicts the solubility of drug-like compounds. The process of fitting this model involves four steps: Loading a chemical dataset, consisting of a series of compounds along with aqueous solubility measurements. Transforming each compound into a feature vector $v \in \mathbb{R}^n$ comprehensible to statistical learning methods. Fitting a simple model that maps feature vectors to estimates of aqueous solubility. Visualizing the results. We need to load a dataset of estimated aqueous solubility measurements [1] into deepchem. The data is in CSV format and contains SMILES strings, predicted aqueous solubilities, and a number of extraneous (for our purposes) molecular properties. Here is an example line from the dataset: |Compound ID|ESOL predicted log solubility in mols per litre|Minimum Degree|Molecular Weight|Number of H-Bond Donors|Number of Rings|Number of Rotatable Bonds|Polar Surface Area|measured log solubility in mols per litre|smiles| |---|---|---|---|---|---|---|---|---|---| |benzothiazole|-2.733|2|135.191|0|2|0|12.89|-1.5|c2ccc1scnc1c2| Most of these fields are not useful for our purposes. The two fields that we will need are the "smiles" field and the "measured log solubility in mols per litre".
The "smiles" field holds a SMILES string [2] that specifies the compound in question. Before we load this data into deepchem, we will load the data into python and do some simple preliminary analysis to gain some intuition for the dataset. End of explanation """ import tempfile from rdkit import Chem from rdkit.Chem import Draw from itertools import islice from IPython.display import Image, HTML, display def display_images(filenames): """Helper to pretty-print images.""" imagesList=''.join( ["<img style='width: 140px; margin: 0px; float: left; border: 1px solid black;' src='%s' />" % str(s) for s in sorted(filenames)]) display(HTML(imagesList)) def mols_to_pngs(mols, basename="test"): """Helper to write RDKit mols to png files.""" filenames = [] for i, mol in enumerate(mols): filename = "%s%d.png" % (basename, i) Draw.MolToFile(mol, filename) filenames.append(filename) return filenames """ Explanation: To gain a visual understanding of compounds in our dataset, let's draw them using rdkit. We define a couple of helper functions to get started. End of explanation """ num_to_display = 14 molecules = [] for _, data in islice(dataset.iterrows(), num_to_display): molecules.append(Chem.MolFromSmiles(data["smiles"])) display_images(mols_to_pngs(molecules)) """ Explanation: Now, we display some compounds from the dataset: End of explanation """ %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt solubilities = np.array(dataset["measured log solubility in mols per litre"]) n, bins, patches = plt.hist(solubilities, 50, facecolor='green', alpha=0.75) plt.xlabel('Measured log-solubility in mols/liter') plt.ylabel('Number of compounds') plt.title(r'Histogram of solubilities') plt.grid(True) plt.show() """ Explanation: Analyzing the distribution of solubilities shows us a nice spread of data. 
End of explanation """ %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt solubilities = np.array(dataset["measured log solubility in mols per litre"]) n, bins, patches = plt.hist(solubilities, 50, facecolor='green', alpha=0.75) plt.xlabel('Measured log-solubility in mols/liter') plt.ylabel('Number of compounds') plt.title(r'Histogram of solubilities') plt.grid(True) plt.show() """ Explanation: With our preliminary analysis completed, we return to the original goal of constructing a predictive statistical model of molecular solubility using deepchem. The first step in creating such a model is translating each compound into a vectorial format that can be understood by statistical learning techniques. This process is commonly called featurization. deepchem packages a number of commonly used featurizations for user convenience. In this tutorial, we will use ECFP4 fingerprints [3]. deepchem offers an object-oriented API for featurization. To get started with featurization, we first construct a Featurizer object. deepchem provides the CircularFingerprint class (a subclass of Featurizer that performs ECFP4 featurization). End of explanation """ from deepchem.featurizers.fingerprints import CircularFingerprint featurizers = [CircularFingerprint(size=1024)] """ Explanation: Now, let's perform the actual featurization. deepchem provides the DataFeaturizer class for this purpose. The featurize() method for this class loads data from disk and uses provided Featurizer instances to transform the provided data into feature vectors. The method constructs an instance of class FeaturizedSamples that has useful methods, such as an iterator, over the featurized data.
End of explanation """ splittype = "scaffold" train_dir = tempfile.mkdtemp() valid_dir = tempfile.mkdtemp() test_dir = tempfile.mkdtemp() train_samples, valid_samples, test_samples = featurized_samples.train_valid_test_split( splittype, train_dir, valid_dir, test_dir) """ Explanation: When constructing statistical models, it's necessary to separate the provided data into train/test subsets. The train subset is used to learn the statistical model, while the test subset is used to evaluate the learned model. In practice, it's often useful to elaborate this split further and perform a train/validation/test split. The validation set is used to perform model selection. Proposed models are evaluated on the validation-set, and the best performed model is at the end tested on the test-set. Choosing the proper method of performing a train/validation/test split can be challenging. Standard machine learning practice is to perform a random split of the data into train/validation/test, but random splits are not well suited for the purposes of chemical informatics. For our predictive models to be useful, we require them to have predictive power in portions of chemical space beyond the set of molecules in the training data. Consequently, our models should use splits of the data that separate compounds in the training set from those in the validation and test-sets. We use Bemis-Murcko scaffolds [5] to perform this separation (all compounds that share an underlying molecular scaffold will be placed into the same split in the train/test/validation split). 
End of explanation """ train_mols = [Chem.MolFromSmiles(str(compound["smiles"])) for compound in islice(train_samples.itersamples(), num_to_display)] display_images(mols_to_pngs(train_mols, basename="train")) valid_mols = [Chem.MolFromSmiles(str(compound["smiles"])) for compound in islice(valid_samples.itersamples(), num_to_display)] display_images(mols_to_pngs(valid_mols, basename="valid")) """ Explanation: Notice the visual distinction between the train/validation splits. The most-common scaffolds are reserved for the train split, with the rarer scaffolds allotted to validation/test. To perform machine learning upon these datasets, we need to convert the samples into datasets suitable for machine-learning (that is, into data matrix $X \in \mathbb{R}^{n\times d}$ where $n$ is the number of samples and $d$ the dimensionality of the feature vector, and into label vector $y \in \mathbb{R}^n$). deepchem provides the Dataset class to facilitate this transformation. We simply need to instantiate separate instances of the Dataset() class, one corresponding to each split of the data. This style lends itself easily to validation-set hyperparameter searches, which we illustrate below.
End of explanation """ from deepchem.transformers import NormalizationTransformer input_transformers = [] output_transformers = [NormalizationTransformer(transform_y=True, dataset=train_dataset)] transformers = input_transformers + output_transformers for transformer in transformers: transformer.transform(train_dataset) for transformer in transformers: transformer.transform(valid_dataset) for transformer in transformers: transformer.transform(test_dataset) """ Explanation: The performance of common machine-learning algorithms can be very sensitive to preprocessing of the data. One common transformation applied to data is to normalize it to have zero-mean and unit-standard-deviation. We will apply this transformation to the log-solubility (as seen above, the log-solubility ranges from -12 to 2). End of explanation """ from sklearn.ensemble import RandomForestRegressor from deepchem.models.standard import SklearnModel model_dir = tempfile.mkdtemp() task_types = {"measured log solubility in mols per litre": "regression"} model_params = {"data_shape": train_dataset.get_data_shape()} model = SklearnModel(task_types, model_params, model_instance=RandomForestRegressor()) model.fit(train_dataset) model.save(model_dir) shutil.rmtree(model_dir) """ Explanation: The next step after processing the data is to start fitting simple learning models to our data. deepchem provides a number of machine-learning model classes. In particular, deepchem provides a convenience class, SklearnModel that wraps any machine-learning model available in scikit-learn [6]. Consequently, we will start by building a simple random-forest regressor that attempts to predict the log-solubility from our computed ECFP4 features. To train the model, we instantiate the SklearnModel object, then call the fit() method on the train_dataset we constructed above. We then save the model to disk. 
End of explanation """ from deepchem.utils.evaluate import Evaluator valid_csv_out = tempfile.NamedTemporaryFile() valid_stats_out = tempfile.NamedTemporaryFile() evaluator = Evaluator(model, valid_dataset, output_transformers) df, r2score = evaluator.compute_model_performance( valid_csv_out, valid_stats_out) print(r2score) """ Explanation: We next evaluate the model on the validation set to see its predictive power. deepchem provides the Evaluator class to facilitate this process. To evaluate the constructed model object, create a new Evaluator instance and call the compute_model_performance() method. End of explanation """ import itertools n_estimators_list = [100] max_features_list = ["auto", "sqrt", "log2", None] hyperparameters = [n_estimators_list, max_features_list] best_validation_score = -np.inf best_hyperparams = None best_model, best_model_dir = None, None for hyperparameter_tuple in itertools.product(*hyperparameters): n_estimators, max_features = hyperparameter_tuple model_dir = tempfile.mkdtemp() model = SklearnModel( task_types, model_params, model_instance=RandomForestRegressor(n_estimators=n_estimators, max_features=max_features)) model.fit(train_dataset) model.save(model_dir) evaluator = Evaluator(model, valid_dataset, output_transformers) df, r2score = evaluator.compute_model_performance( valid_csv_out, valid_stats_out) valid_r2_score = r2score.iloc[0]["r2_score"] print("n_estimators %d, max_features %s => Validation set R^2 %f" % (n_estimators, str(max_features), valid_r2_score)) if valid_r2_score > best_validation_score: best_validation_score = valid_r2_score best_hyperparams = hyperparameter_tuple if best_model_dir is not None: shutil.rmtree(best_model_dir) best_model_dir = model_dir best_model = model else: shutil.rmtree(model_dir) print("Best hyperparameters: %s" % str(best_hyperparams)) best_rf_hyperparams = best_hyperparams best_rf = best_model """ Explanation: The performance of this basic random-forest model isn't very strong. 
To construct stronger models, let's attempt to optimize the hyperparameters (choices made in the model-specification) to achieve better performance. For random forests, we can tweak n_estimators which controls the number of trees in the forest, and max_features which controls the number of features to consider when performing a split. We now build a series of SklearnModels with different choices for n_estimators and max_features and evaluate performance on the validation set. End of explanation """ from deepchem.models.deep import SingleTaskDNN import numpy.random model_params = {"activation": "relu", "dropout": 0.5, "momentum": .9, "nesterov": True, "decay": 1e-4, "batch_size": 5, "nb_epoch": 10, "init": "glorot_uniform", "data_shape": train_dataset.get_data_shape()} lr_list = np.power(10., np.random.uniform(-5, -1, size=1)) nb_hidden_list = [100] nb_epoch_list = [10] nesterov_list = [False] dropout_list = [.25] nb_layers_list = [1] batchnorm_list = [False] hyperparameters = [lr_list, nb_layers_list, nb_hidden_list, nb_epoch_list, nesterov_list, dropout_list, batchnorm_list] best_validation_score = -np.inf best_hyperparams = None best_model, best_model_dir = None, None for hyperparameter_tuple in itertools.product(*hyperparameters): print("Testing %s" % str(hyperparameter_tuple)) lr, nb_layers, nb_hidden, nb_epoch, nesterov, dropout, batchnorm = hyperparameter_tuple model_params["nb_hidden"] = nb_hidden model_params["nb_layers"] = nb_layers model_params["learning_rate"] = lr model_params["nb_epoch"] = nb_epoch model_params["nesterov"] = nesterov model_params["dropout"] = dropout model_params["batchnorm"] = batchnorm model_dir = tempfile.mkdtemp() model = SingleTaskDNN(task_types, model_params) model.fit(train_dataset) model.save(model_dir) evaluator = Evaluator(model, valid_dataset, output_transformers) df, r2score = evaluator.compute_model_performance( valid_csv_out, valid_stats_out) valid_r2_score = r2score.iloc[0]["r2_score"] print("learning_rate %f, nb_hidden 
%d, nb_epoch %d, nesterov %s, dropout %f => Validation set R^2 %f" % (lr, nb_hidden, nb_epoch, str(nesterov), dropout, valid_r2_score)) if valid_r2_score > best_validation_score: best_validation_score = valid_r2_score best_hyperparams = hyperparameter_tuple if best_model_dir is not None: shutil.rmtree(best_model_dir) best_model_dir = model_dir best_model = model else: shutil.rmtree(model_dir) print("Best hyperparameters: %s" % str(best_hyperparams)) print("best_validation_score: %f" % best_validation_score) best_dnn = best_model """ Explanation: The best model achieves significantly higher $R^2$ on the validation set than the first model we constructed. Now, let's perform the same sort of hyperparameter search, but with a simple deep-network instead. End of explanation """ rf_test_csv_out = tempfile.NamedTemporaryFile() rf_test_stats_out = tempfile.NamedTemporaryFile() rf_test_evaluator = Evaluator(best_rf, test_dataset, output_transformers) rf_test_df, rf_test_r2score = rf_test_evaluator.compute_model_performance( rf_test_csv_out, rf_test_stats_out) rf_test_r2_score = rf_test_r2score.iloc[0]["r2_score"] print("RF Test set R^2 %f" % (rf_test_r2_score)) dnn_test_csv_out = tempfile.NamedTemporaryFile() dnn_test_stats_out = tempfile.NamedTemporaryFile() dnn_test_evaluator = Evaluator(best_dnn, test_dataset, output_transformers) dnn_test_df, dnn_test_r2score = dnn_test_evaluator.compute_model_performance( dnn_test_csv_out, dnn_test_stats_out) dnn_test_r2_score = dnn_test_r2score.iloc[0]["r2_score"] print("DNN Test set R^2 %f" % (dnn_test_r2_score)) """ Explanation: Now that we have a reasonable choice of hyperparameters, let's evaluate the performance of our best models on the test-set. 
End of explanation """ task = "measured log solubility in mols per litre" rf_predicted_test = np.array(rf_test_df[task + "_pred"]) rf_true_test = np.array(rf_test_df[task]) plt.scatter(rf_predicted_test, rf_true_test) plt.xlabel('Predicted log-solubility in mols/liter') plt.ylabel('True log-solubility in mols/liter') plt.title(r'RF- predicted vs. true log-solubilities') plt.show() task = "measured log solubility in mols per litre" predicted_test = np.array(dnn_test_df[task + "_pred"]) true_test = np.array(dnn_test_df[task]) plt.scatter(predicted_test, true_test) plt.xlabel('Predicted log-solubility in mols/liter') plt.ylabel('True log-solubility in mols/liter') plt.title(r'DNN predicted vs. true log-solubilities') plt.show() """ Explanation: Now, let's plot the predicted $R^2$ scores versus the true $R^2$ scores for the constructed model. End of explanation """
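The $R^2$ values reported by the Evaluator have a simple closed form — one minus the ratio of the residual to the total sum of squares — which is easy to sanity-check by hand. A pure-Python version on made-up numbers (not the solubility data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

y_true = [-3.0, -1.0, 0.0, 2.0]   # hypothetical "true" log-solubilities
y_pred = [-2.5, -1.0, 0.5, 1.5]   # hypothetical predictions

print(round(r_squared(y_true, y_pred), 3))  # 0.942
```

A perfect predictor gives exactly 1.0, and a predictor no better than the mean gives 0.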
Skylion007/Reed-Solomon
Generating the exponent and log tables.ipynb
mit
generator = ff.GF256int(3) generator """ Explanation: I used 3 as the generator for this field. For a field defined with the polynomial x^8 + x^4 + x^3 + x + 1, there may be other generators (I can't remember) End of explanation """ generator*generator generator*generator*generator generator**1 generator**2 generator**3 """ Explanation: Multiplying the generator by itself is the same as raising it to a power. I show up to the 3rd power here End of explanation """ generator.multiply(generator) generator.multiply(generator.multiply(generator)) """ Explanation: The slow multiply method implemented without the lookup table has the same results End of explanation """ exptable = [ff.GF256int(1), generator] for _ in range(254): # minus 2 because the first 2 elements are hardcoded exptable.append(exptable[-1].multiply(generator)) # Turn back to ints for a more compact print representation print([int(x) for x in exptable]) """ Explanation: We can enumerate the entire field by repeatedly multiplying by the generator. (The first element is 1 because generator^0 is 1). This becomes our exponent table. End of explanation """ exptable[5] == generator**5 all(exptable[n] == generator**n for n in range(256)) tuple(int(x) for x in exptable) == ff.GF256int.exptable """ Explanation: That's now our exponent table. We can look up the nth element in this list to get generator^n End of explanation """ logtable = [None for _ in range(256)] # Ignore the last element of the field because fields wrap back around. # The log of 1 could be 0 (g^0=1) or it could be 255 (g^255=1) for i, x in enumerate(exptable[:-1]): logtable[x] = i print([int(x) if x is not None else None for x in logtable]) tuple(int(x) if x is not None else None for x in logtable) == ff.GF256int.logtable """ Explanation: The log table is the inverse function of the exponent table End of explanation """
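For reference, the same exp/log tables can be rebuilt without the ff module, using the standard bit-twiddling multiply for GF(2^8) with the field polynomial x^8 + x^4 + x^3 + x + 1 (0x11B). This is a sketch of the generic algorithm, not ff.GF256int's implementation:

```python
POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Russian-peasant multiply in GF(2^8): XOR partial products, reduce by POLY."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:   # a picked up an x^8 term, so reduce it
            a ^= POLY
        b >>= 1
    return result

# Powers of the generator 3 enumerate every nonzero field element exactly once.
exptable = [1]
for _ in range(255):
    exptable.append(gf_mul(exptable[-1], 3))

# The log table inverts the exp table (dropping the wrap-around at index 255).
logtable = {exptable[i]: i for i in range(255)}

print(exptable[:5])   # [1, 3, 5, 15, 17]
print(exptable[255])  # 1 -- the multiplicative group has order 255
```

Note gf_mul(3, 3) == 5, matching generator.multiply(generator) above.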
liufuyang/deep_learning_tutorial
course-deeplearning.ai/course5-rnn/Week 3/Machine Translation/Neural machine translation with attention - v4.ipynb
mit
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply from keras.layers import RepeatVector, Dense, Activation, Lambda from keras.optimizers import Adam from keras.utils import to_categorical from keras.models import load_model, Model import keras.backend as K import numpy as np from faker import Faker import random from tqdm import tqdm from babel.dates import format_date from nmt_utils import * import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Neural Machine Translation Welcome to your first programming assignment for this week! You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models. This notebook was produced together with NVIDIA's Deep Learning Institute. Let's load all the packages you will need for this assignment. End of explanation """ m = 10000 dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m) dataset[:10] """ Explanation: 1 - Translating human readable dates into machine readable dates The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task. The network will input a date written in a variety of possible formats (e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987") and translate them into standardized, machine readable dates (e.g. "1958-08-29", "1968-03-30", "1987-06-24"). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. <!-- Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. 
Count and figure out how the formats work, you will need this knowledge later. !--> 1.1 - Dataset We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples. End of explanation """ Tx = 30 Ty = 10 X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty) print("X.shape:", X.shape) print("Y.shape:", Y.shape) print("Xoh.shape:", Xoh.shape) print("Yoh.shape:", Yoh.shape) """ Explanation: You've loaded: - dataset: a list of tuples of (human readable date, machine readable date) - human_vocab: a python dictionary mapping all characters used in the human readable dates to an integer-valued index - machine_vocab: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with human_vocab. - inv_machine_vocab: the inverse dictionary of machine_vocab, mapping from indices back to characters. Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long). End of explanation """ index = 0 print("Source date:", dataset[index][0]) print("Target date:", dataset[index][1]) print() print("Source after preprocessing (indices):", X[index]) print("Target after preprocessing (indices):", Y[index]) print() print("Source after preprocessing (one-hot):", Xoh[index]) print("Target after preprocessing (one-hot):", Yoh[index]) """ Explanation: You now have: - X: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via human_vocab. Each date is further padded to $T_x$ values with a special character (< pad >). 
X.shape = (m, Tx)
- Y: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in machine_vocab. You should have Y.shape = (m, Ty).
- Xoh: one-hot version of X, the "1" entry's index is mapped to the character thanks to human_vocab. Xoh.shape = (m, Tx, len(human_vocab))
- Yoh: one-hot version of Y, the "1" entry's index is mapped to the character thanks to machine_vocab. Yoh.shape = (m, Ty, len(machine_vocab)). Here, len(machine_vocab) = 11 since there are 11 characters ('-' as well as 0-9).
Let's also look at some examples of preprocessed training examples. Feel free to play with index in the cell below to navigate the dataset and see how source/target dates are preprocessed.
End of explanation
"""
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
"""
Explanation: 2 - Neural machine translation with attention
If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
The attention mechanism tells a Neural Machine Translation model where it should pay attention at any step.
2.1 - Attention mechanism
In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model.
The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
<table>
<td>
<img src="images/attn_model.png" style="width:500;height:500px;"> <br>
</td>
<td>
<img src="images/attn_mechanism.png" style="width:500;height:500px;"> <br>
</td>
</table>
<caption><center> Figure 1: Neural machine translation with attention</center></caption>
Here are some properties of the model that you may notice:
There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes before the attention mechanism, we will call it the pre-attention Bi-LSTM. The LSTM at the top of the diagram comes after the attention mechanism, so we will call it the post-attention LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps.
The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state captured by the RNN was just the output activation $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.
We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM. The diagram on the right uses a RepeatVector node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then Concatenation to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ to compute $e^{\langle t, t'}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use RepeatVector and Concatenation in Keras below. Lets implement this model. You will start by implementing two functions: one_step_attention() and model(). 1) one_step_attention(): At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), one_step_attention() will compute the attention weights ($[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details): $$context^{<t>} = \sum_{t' = 0}^{T_x} \alpha^{<t,t'>}a^{<t'>}\tag{1}$$ Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$. 2) model(): Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. Then, it calls one_step_attention() $T_y$ times (for loop). At each iteration of this loop, it gives the computed context vector $c^{<t>}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{<t>}$. Exercise: Implement one_step_attention(). 
The function model() will call the layers in one_step_attention() $T_y$ times using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initialize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:
1. Define the layer objects (as global variables, for example).
2. Call these objects when propagating the input.
We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: RepeatVector(), Concatenate(), Dense(), Activation(), Dot().
End of explanation
"""
# GRADED FUNCTION: one_step_attention

def one_step_attention(a, s_prev):
    """
    Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
    "alphas" and the hidden states "a" of the Bi-LSTM.

    Arguments:
    a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
    s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)

    Returns:
    context -- context vector, input of the next (post-attention) LSTM cell
    """

    ### START CODE HERE ###
    # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
    s_prev = None
    # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
    concat = None
    # Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈ 1 line)
    e = None
    # Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈ 1 line)
    energies = None
    # Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
    alphas = None
    # Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
    context = None
    ### END CODE HERE ###

    return context
"""
Explanation: Now you can use these layers to implement one_step_attention(). In order to propagate a Keras tensor object X through one of these layers, use layer(X) (or layer([X, Y]) if it requires multiple inputs), e.g. densor(X) will propagate X through the Dense(1) layer defined above.
End of explanation
"""
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
"""
Explanation: You will be able to check the expected output of one_step_attention() after you've coded the model() function.
Exercise: Implement model() as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in model().
End of explanation
"""
# GRADED FUNCTION: model

def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
    """
    Arguments:
    Tx -- length of the input sequence
    Ty -- length of the output sequence
    n_a -- hidden state size of the Bi-LSTM
    n_s -- hidden state size of the post-attention LSTM
    human_vocab_size -- size of the python dictionary "human_vocab"
    machine_vocab_size -- size of the python dictionary "machine_vocab"

    Returns:
    model -- Keras model instance
    """

    # Define the inputs of your model with a shape (Tx,)
    # Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
    X = Input(shape=(Tx, human_vocab_size))
    s0 = Input(shape=(n_s,), name='s0')
    c0 = Input(shape=(n_s,), name='c0')
    s = s0
    c = c0

    # Initialize empty list of outputs
    outputs = []

    ### START CODE HERE ###

    # Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
    a = None

    # Step 2: Iterate for Ty steps
    for t in range(None):

        # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
        context = None

        # Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
        # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
        s, _, c = None

        # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
        out = None

        # Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
        None

    # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
    model = None

    ### END CODE HERE ###

    return model
"""
Explanation: Now you can use these layers $T_y$ times in a for loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:
Propagate the input into a Bidirectional LSTM.
Iterate for $t = 0, \dots, T_y-1$:
Call one_step_attention() on $[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$ and $s^{<t-1>}$ to get the context vector $context^{<t>}$.
Give $context^{<t>}$ to the post-attention LSTM cell. Remember to pass in the previous hidden state $s^{\langle t-1\rangle}$ and cell state $c^{\langle t-1\rangle}$ of this LSTM using initial_state = [previous hidden state, previous cell state]. Get back the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.
Apply a softmax layer to $s^{<t>}$, get the output.
Save the output by adding it to the list of outputs.
Create your Keras model instance; it should have three inputs ("inputs", $s^{<0>}$ and $c^{<0>}$) and output the list of "outputs".
End of explanation
"""
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
"""
Explanation: Run the following cell to create your model.
End of explanation
"""
model.summary()
"""
Explanation: Let's get a summary of the model to check if it matches the expected output.
End of explanation
"""
### START CODE HERE ### (≈2 lines)
opt = None
None
### END CODE HERE ###
"""
Explanation: Expected Output:
Here is the summary you should see
<table>
<tr>
<td>
**Total params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Trainable params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Non-trainable params:**
</td>
<td>
0
</td>
</tr>
<tr>
<td>
**bidirectional_1's output shape **
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**repeat_vector_1's output shape **
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**concatenate_1's output shape **
</td>
<td>
(None, 30, 128)
</td>
</tr>
<tr>
<td>
**attention_weights's output shape **
</td>
<td>
(None, 30, 1)
</td>
</tr>
<tr>
<td>
**dot_1's output shape **
</td>
<td>
(None, 1, 64)
</td>
</tr>
<tr>
<td>
**dense_3's output shape **
</td>
<td>
(None, 11)
</td>
</tr>
</table>
As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, a custom Adam optimizer (learning rate = 0.005, $\beta_1 = 0.9$, $\beta_2 = 0.999$, decay = 0.01) and ['accuracy'] metrics:
End of explanation """ model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100) """ Explanation: Let's now fit the model and run it for one epoch. End of explanation """ model.load_weights('models/model.h5') """ Explanation: While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: <img src="images/table.png" style="width:700;height:200px;"> <br> <caption><center>Thus, dense_2_acc_8: 0.89 means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption> We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.) End of explanation """ EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001'] for example in EXAMPLES: source = string_to_int(example, Tx, human_vocab) source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1) prediction = model.predict([source, s0, c0]) prediction = np.argmax(prediction, axis = -1) output = [inv_machine_vocab[int(i)] for i in prediction] print("source:", example) print("output:", ''.join(output)) """ Explanation: You can now see the results on new examples. End of explanation """ model.summary() """ Explanation: You can also change these examples to test with your own examples. The next part will give you a better sense on what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. 
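Each predicted date above is produced by a greedy decode: take the argmax of every per-timestep softmax output and map the indices back through inv_machine_vocab. Stripped of Keras and numpy, that decode step looks like this (the vocabulary and "predictions" below are made-up stand-ins, not the trained model's outputs):

```python
# Hypothetical inverse vocabulary (a small subset of what inv_machine_vocab holds).
inv_vocab = {0: '-', 1: '0', 2: '1', 3: '2', 4: '9'}

# One made-up probability vector per output timestep (already softmax-ed).
predictions = [
    [0.1, 0.1, 0.1, 0.6, 0.1],    # argmax 3 -> '2'
    [0.1, 0.7, 0.1, 0.05, 0.05],  # argmax 1 -> '0'
    [0.7, 0.1, 0.1, 0.05, 0.05],  # argmax 0 -> '-'
]

def greedy_decode(probs, inv_vocab):
    """Pick the most probable character at each timestep and join them."""
    indices = [max(range(len(p)), key=p.__getitem__) for p in probs]
    return ''.join(inv_vocab[i] for i in indices)

print(greedy_decode(predictions, inv_vocab))  # -> 20-
```

In the cell above, np.argmax(prediction, axis=-1) plays the role of the per-timestep max here.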
3 - Visualizing Attention (Optional / Ungraded) Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input. Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this: <img src="images/date_attention.png" style="width:600;height:300px;"> <br> <caption><center> Figure 8: Full Attention Map</center></caption> Notice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." 3.1 - Getting the activations from the network Lets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model . End of explanation """ attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64) """ Explanation: Navigate through the output of model.summary() above. You can see that the layer named attention_weights outputs the alphas of shape (m, 30, 1) before dot_2 computes the context vector for every time step $t = 0, \ldots, T_y-1$. Lets get the activations from this layer. 
The function attention_map() pulls out the attention values from your model and plots them. End of explanation """
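The attention weights being plotted are just a softmax over the per-timestep energies, and the context vector of equation (1) is the corresponding weighted sum of the Bi-LSTM states. A small pure-Python sketch of that arithmetic with made-up numbers (no Keras involved):

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up energies e^<t, t'> for Tx = 3 input steps, plus 2-dim states a^<t'>.
energies = [2.0, 0.0, 0.0]
a = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]

alphas = softmax(energies)  # alpha^<t, t'>: non-negative, sums to 1
context = [sum(al * state[d] for al, state in zip(alphas, a))
           for d in range(len(a[0]))]
```

The first input step has the largest energy, so it dominates the context — which is exactly what the attention maps visualize across all output timesteps.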
mne-tools/mne-tools.github.io
0.15/_downloads/decoding_rsa.ipynb
bsd-3-clause
# Authors: Jean-Remi King <jeanremi.king@gmail.com> # Jaakko Leppakangas <jaeilepp@student.jyu.fi> # Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import os.path as op import numpy as np from pandas import read_csv import matplotlib.pyplot as plt from sklearn.model_selection import StratifiedKFold from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.manifold import MDS import mne from mne.io import read_raw_fif, concatenate_raws from mne.datasets import visual_92_categories print(__doc__) data_path = visual_92_categories.data_path() # Define stimulus - trigger mapping fname = op.join(data_path, 'visual_stimuli.csv') conds = read_csv(fname) print(conds.head(5)) """ Explanation: Representational Similarity Analysis Representational Similarity Analysis is used to perform summary statistics on supervised classifications where the number of classes is relatively high. It consists in characterizing the structure of the confusion matrix to infer the similarity between brain responses and serves as a proxy for characterizing the space of mental representations [1] [2] [3]_. In this example, we perform RSA on responses to 24 object images (among a list of 92 images). Subjects were presented with images of human, animal and inanimate objects [4]_. Here we use the 24 unique images of faces and body parts. <div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not build the images below.</p></div> References .. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering." Science 210.4468 (1980): 390-398. .. [2] Laakso, A. & Cottrell, G.. "Content and cluster analysis: assessing representational similarity in neural systems." Philosophical psychology 13.1 (2000): 47-76. .. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. 
"Representational similarity analysis-connecting the branches of systems neuroscience." Frontiers in systems neuroscience 2 (2008): 4. .. [4] Cichy, R. M., Pantazis, D., & Oliva, A. "Resolving human object recognition in space and time." Nature neuroscience (2014): 17(3), 455-462. End of explanation """ max_trigger = 24 conds = conds[:max_trigger] # take only the first 24 rows """ Explanation: Let's restrict the number of conditions to speed up computation End of explanation """ conditions = [] for c in conds.values: cond_tags = list(c[:2]) cond_tags += [('not-' if i == 0 else '') + conds.columns[k] for k, i in enumerate(c[2:], 2)] conditions.append('/'.join(map(str, cond_tags))) print(conditions[:10]) """ Explanation: Define stimulus - trigger mapping End of explanation """ event_id = dict(zip(conditions, conds.trigger + 1)) event_id['0/human bodypart/human/not-face/animal/natural'] """ Explanation: Let's make the event_id dictionary End of explanation """ n_runs = 4 # 4 for full data (use less to speed up computations) fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif') raws = [read_raw_fif(fname % block) for block in range(n_runs)] raw = concatenate_raws(raws) events = mne.find_events(raw, min_duration=.002) events = events[events[:, 2] <= max_trigger] """ Explanation: Read MEG data End of explanation """ picks = mne.pick_types(raw.info, meg=True) epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None, picks=picks, tmin=-.1, tmax=.500, preload=True) """ Explanation: Epoch data End of explanation """ epochs['face'].average().plot() epochs['not-face'].average().plot() """ Explanation: Let's plot some conditions End of explanation """ # Classify using the average signal in the window 50ms to 300ms # to focus the classifier on the time interval with best SNR. 
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(C=1, solver='lbfgs'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]

classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)

# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
    # Fit
    clf.fit(X[train], y[train])
    # Probabilistic prediction (necessary for ROC-AUC scoring metric)
    y_pred[test] = clf.predict_proba(X[test])
"""
Explanation: Representational Similarity Analysis (RSA) is a neuroimaging-specific appellation for statistics applied to the confusion matrix, also referred to as the representational dissimilarity matrix (RDM). Compared to the approach of Cichy et al., we'll use a multiclass classifier (multinomial logistic regression), while the paper uses all pairwise binary classification tasks to make the RDM. Also, here we use ROC-AUC as the performance metric, while the paper uses accuracy. Finally, for the sake of time, we run RSA here on a single window of data, while Cichy et al. did it for all time instants separately.
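ROC-AUC, used as the scoring metric here, has a handy rank interpretation: it is the probability that a randomly drawn positive sample scores above a randomly drawn negative one (ties count half). A minimal pure-Python version of that definition — a sketch, not sklearn's implementation:

```python
def roc_auc(labels, scores):
    """Mann-Whitney form of ROC-AUC: P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.4, 0.1]))  # 0.875
```

Chance level is 0.5 regardless of class balance, which is why the dissimilarity below is measured relative to chance.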
End of explanation
"""
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
    for jj in range(ii, len(classes)):
        confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
        confusion[jj, ii] = confusion[ii, jj]
"""
Explanation: Compute confusion matrix using ROC-AUC
End of explanation
"""
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
"""
Explanation: Plot
End of explanation
"""
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
    sel = np.where([this_name == name for this_name in names])[0]
    size = 500 if name == 'human face' else 100
    ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
               facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
"""
Explanation: Confusion matrices related to mental representations have historically been summarized with dimensionality reduction using multi-dimensional scaling [1]. See how the face samples cluster together.
End of explanation
"""
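The confusion-filling loop above only computes the upper triangle and mirrors it, since the AUC for class i vs. class j is treated as symmetric. The same pattern in miniature, with a hypothetical pairwise score standing in for roc_auc_score:

```python
n = 3

def pair_score(i, j):
    # Hypothetical symmetric stand-in for roc_auc_score on classes i and j.
    return 0.5 if i == j else 0.5 + 0.1 * (i + j)

confusion = [[0.0] * n for _ in range(n)]
for ii in range(n):
    for jj in range(ii, n):
        confusion[ii][jj] = pair_score(ii, jj)
        confusion[jj][ii] = confusion[ii][jj]  # mirror into the lower triangle
```

This halves the number of (expensive) AUC computations while guaranteeing a symmetric matrix, which is what `dissimilarity='precomputed'` MDS expects.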
efoley/deep-learning
transfer-learning/Transfer_Learning.ipynb
mit
from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm vgg_dir = 'tensorflow_vgg/' # Make sure vgg exists if not isdir(vgg_dir): raise Exception("VGG directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(vgg_dir + "vgg16.npy"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar: urlretrieve( 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy', vgg_dir + 'vgg16.npy', pbar.hook) else: print("Parameter file already exists!") """ Explanation: Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from the CS231n course notes. Pretrained VGGNet We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. 
Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash. git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell. End of explanation """ import tarfile dataset_folder_path = 'flower_photos' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('flower_photos.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar: urlretrieve( 'http://download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos.tar.gz', pbar.hook) if not isdir(dataset_folder_path): with tarfile.open('flower_photos.tar.gz') as tar: tar.extractall() tar.close() """ Explanation: Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial. End of explanation """ import os import numpy as np import tensorflow as tf from tensorflow_vgg import vgg16 from tensorflow_vgg import utils data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)] """ Explanation: ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the vgg16 module from tensorflow_vgg. 
The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code): ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) This creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer, feed_dict = {input_: images} codes = sess.run(vgg.relu6, feed_dict=feed_dict) End of explanation """ # Set the batch size higher if you can fit in in your GPU memory batch_size = 10 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: # TODO: Build the vgg network here for each in classes: print("Starting {} images".format(each)) class_path = data_dir + each files = os.listdir(class_path) for ii, file in enumerate(files, 1): # Add images to the current batch # utils.load_image crops the input images for us, from the center img = utils.load_image(os.path.join(class_path, file)) batch.append(img.reshape((1, 224, 224, 3))) labels.append(each) # Running the batch through the network to get the codes if ii % batch_size == 0 or ii == len(files): # Image batch to pass to VGG network images = np.concatenate(batch) # TODO: Get the values from the relu6 layer of the VGG network codes_batch = # Here I'm building an array of the codes if codes is None: codes = codes_batch else: codes = np.concatenate((codes, codes_batch)) # Reset to start building the next batch batch = [] print('{} images processed'.format(ii)) # write codes to file with open('codes', 'w') as f: codes.tofile(f) # write labels to file import csv with open('labels', 'w') as f: writer = csv.writer(f, delimiter='\n') writer.writerow(labels) """ Explanation: Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values). 
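If you want to sanity-check the batching logic on its own, the flush condition used in the loop below (ii % batch_size == 0 or ii == len(files)) can be pulled out into a tiny pure-Python generator. This is just a sketch (the function name is mine, not the notebook's):

```python
def batches_of(items, batch_size):
    # Yield successive batches, flushing when full or when items run out
    batch = []
    for ii, item in enumerate(items, 1):
        batch.append(item)
        # same flush condition as the image loop below
        if ii % batch_size == 0 or ii == len(items):
            yield batch
            batch = []

list(batches_of(list(range(7)), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Note how the final partial batch is still emitted, which is why no images are dropped.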
End of explanation """ # read codes and labels from file import csv with open('labels') as f: reader = csv.reader(f, delimiter='\n') labels = np.array([each for each in reader]).squeeze() with open('codes') as f: codes = np.fromfile(f, dtype=np.float32) codes = codes.reshape((len(labels), -1)) """ Explanation: Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. End of explanation """ labels_vecs = # Your one-hot encoded labels array here """ Explanation: Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels. End of explanation """ train_x, train_y = val_x, val_y = test_x, test_y = print("Train shapes (x, y):", train_x.shape, train_y.shape) print("Validation shapes (x, y):", val_x.shape, val_y.shape) print("Test shapes (x, y):", test_x.shape, test_y.shape) """ Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn. You can create the splitter like so: ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) Then split the data with splitter = ss.split(x, y) ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets.
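For example, with a small dummy array (just a sketch; the names here are placeholders, not the notebook's codes and labels):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

x = np.arange(20).reshape(10, 2)      # 10 samples, 2 features
y = np.array([0] * 5 + [1] * 5)       # two balanced classes

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, test_idx = next(ss.split(x, y))

train_x, test_x = x[train_idx], x[test_idx]
train_y, test_y = y[train_idx], y[test_idx]
# the 2-sample test set still contains one example of each class
```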
The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide. Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. End of explanation """ inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]]) labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]]) # TODO: Classifier layers and operations logits = # output layer logits cost = # cross entropy loss optimizer = # training optimizer # Operations for validation/test accuracy predicted = tf.nn.softmax(logits) correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) """ Explanation: If you did it right, you should see these sizes for the training sets: Train shapes (x, y): (2936, 4096) (2936, 5) Validation shapes (x, y): (367, 4096) (367, 5) Test shapes (x, y): (367, 4096) (367, 5) Classifier layers Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network. Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. End of explanation """ def get_batches(x, y, n_batches=10): """ Return a generator that yields batches from arrays x and y.
""" batch_size = len(x)//n_batches for ii in range(0, n_batches*batch_size, batch_size): # If we're not on the last batch, grab data with size batch_size if ii != (n_batches-1)*batch_size: X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] # On the last batch, grab the rest of the data else: X, Y = x[ii:], y[ii:] # I love generators yield X, Y """ Explanation: Batches! Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. End of explanation """ saver = tf.train.Saver() with tf.Session() as sess: # TODO: Your training code here saver.save(sess, "checkpoints/flowers.ckpt") """ Explanation: Training Here, we'll train the network. Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own! End of explanation """ with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: test_x, labels_: test_y} test_acc = sess.run(accuracy, feed_dict=feed) print("Test accuracy: {:.4f}".format(test_acc)) %matplotlib inline import matplotlib.pyplot as plt from scipy.ndimage import imread """ Explanation: Testing Below you see the test accuracy. You can also see the predictions returned for images. End of explanation """ test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg' test_img = imread(test_img_path) plt.imshow(test_img) # Run this cell if you don't have a vgg graph built if 'vgg' in globals(): print('"vgg" object already exists. 
Will not create again.') else: #create vgg with tf.Session() as sess: input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) vgg = vgg16.Vgg16() vgg.build(input_) with tf.Session() as sess: img = utils.load_image(test_img_path) img = img.reshape((1, 224, 224, 3)) feed_dict = {input_: img} code = sess.run(vgg.relu6, feed_dict=feed_dict) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: code} prediction = sess.run(predicted, feed_dict=feed).squeeze() plt.imshow(test_img) plt.barh(np.arange(5), prediction) _ = plt.yticks(np.arange(5), lb.classes_) """ Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. End of explanation """
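As a final note, the bar chart's winning class is just the argmax of the prediction vector. A plain-Python sketch with made-up numbers (the probabilities below are illustrative, not real classifier output):

```python
classes = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
prediction = [0.03, 0.02, 0.88, 0.04, 0.03]  # made-up softmax output

best = max(range(len(prediction)), key=prediction.__getitem__)
print(classes[best], prediction[best])  # roses 0.88
```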
Upward-Spiral-Science/uhhh
code/[Assignment 12] JM.ipynb
apache-2.0
%matplotlib inline from matplotlib import pyplot as plt import numpy as np import pandas as pd import seaborn as sns """ Explanation: Verifying Non-Uniformity of Subvolumes Here, I sample subvolumes of a predetermined size, count the synapse contents, and then plot that distribution in order to show that the synapses are not uniformly distributed. If they were uniformly distributed, then a plot of x-center × y-center × count would be essentially flat: roughly the same count at every position, up to sampling noise. End of explanation """ import csv data = open('../data/data.csv', 'r').readlines() fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses'] reader = csv.reader(data) reader.next() rows = [[int(col) for col in row] for row in reader] sorted_x = sorted(list(set([r[0] for r in rows]))) sorted_y = sorted(list(set([r[1] for r in rows]))) sorted_z = sorted(list(set([r[2] for r in rows]))) vol = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z))) for r in rows: vol[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1] SUBV_SIZE = (10, 10, 5) SUBV_COUNT = 500 MGN = 15 # margin """ Explanation: Import data: End of explanation """ import random print vol.shape sample_vol = vol[MGN : -MGN, MGN : -MGN, :] subvs = [] for i in range(SUBV_COUNT): x_origin = random.randint(0, sample_vol.shape[0] - SUBV_SIZE[0]) y_origin = random.randint(0, sample_vol.shape[1] - SUBV_SIZE[1]) z_origin = random.randint(0, sample_vol.shape[2] - SUBV_SIZE[2]) subv = sample_vol[ x_origin : x_origin + SUBV_SIZE[0], y_origin : y_origin + SUBV_SIZE[1], z_origin : z_origin + SUBV_SIZE[2] ] subvs.append((x_origin, y_origin, z_origin, np.sum(subv))) plt.scatter(x=[s[0] for s in subvs], y=[s[1] for s in subvs], c=[s[3]/400 for s in subvs]) plt.xlabel("Dataset x Axis") plt.ylabel("Dataset y Axis") plt.suptitle("Synapse count of subvolumes randomly selected across cortex", fontsize="14") """ Explanation: Randomly select SUBV_COUNT subvolumes (of size SUBV_SIZE) from the larger volume. 
Count their contents (sum), and plot x-origin, y-origin, and count (x,y,size). End of explanation """ plt.hist([s[3]/10000 for s in subvs])#, y=[s[1] for s in subvs]) plt.xlabel("Synapse Count in (10x10x5) Supervoxel (x10,000)") plt.ylabel("Number of supervoxels") plt.suptitle("Synapse count in randomly selected supervoxels follows nonuniform distribution", fontsize="14") plt.hist2d([s[0] for s in subvs], y=[s[2] for s in subvs]) plt.xlabel("Data x") plt.ylabel("Data z") plt.suptitle("Relative synapse densities are not distributed uniformly over x/z", fontsize=16) """ Explanation: From this alone, we can see that the data are nonuniformly distributed. Now let us plot this to characterize the distribution: End of explanation """ plt.hist2d([s[0] for s in subvs], y=[s[1] for s in subvs]) plt.xlabel("Data x") plt.ylabel("Data y") plt.suptitle("Relative synapse densities are not distributed uniformly over x/y", fontsize=16) plt.hist2d([s[1] for s in subvs], y=[s[2] for s in subvs]) plt.xlabel("Data y") plt.ylabel("Data z") plt.suptitle("Relative synapse densities are not distributed uniformly over y/z", fontsize=16) """ Explanation: As this 2D histogram above shows, the synapses are not distributed uniformly in XZ-space. We know already that they are not distributed evenly across the y-axis, as XY and YZ graphs demonstrate below: End of explanation """
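One more sanity check on what "uniform" would predict: the expected count per subvolume is the total synapse count scaled by the fraction of bins the subvolume covers. A quick arithmetic sketch (all numbers here are made up, not taken from the dataset):

```python
total_synapses = 7_500_000      # made-up total, for illustration only
volume_shape = (100, 50, 10)    # stand-in grid of (x, y, z) bins
subv_size = (10, 10, 5)         # same shape as SUBV_SIZE above

total_bins = volume_shape[0] * volume_shape[1] * volume_shape[2]
subv_bins = subv_size[0] * subv_size[1] * subv_size[2]

expected = total_synapses * subv_bins / total_bins
print(expected)  # 75000.0 -- under uniformity, every subvolume sits near this
```

Large, systematic departures from that single expected value are what the histograms above demonstrate.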
datapythonista/datapythonista.github.io
content/2018-05-31-psf-candidates.ipynb
apache-2.0
import pandas from matplotlib import pyplot directors = pandas.read_json('{"Location":{"Naomi Ceder":"Chicago, IL","Eric Holscher":"Portland, OR","Jackie Kazil":"DC \\/ Bradenton FL","Lorena Mesa":"Chicago, IL","Thomas Wouters":"Amsterdam","Kushal Das":"Kolkata, India","Marlene Mhangami":"Zimbawe","Van Lindberg":"San Antonio, TX","Ewa Jodlowska":"Chicago, IL"},"Gender":{"Naomi Ceder":"Woman","Eric Holscher":"Man","Jackie Kazil":"Woman","Lorena Mesa":"Woman","Thomas Wouters":"Man","Kushal Das":"Man","Marlene Mhangami":"Woman","Van Lindberg":"Man","Ewa Jodlowska":"Woman"},"Twitter":{"Naomi Ceder":"https:\\/\\/twitter.com\\/NaomiCeder","Eric Holscher":"https:\\/\\/twitter.com\\/ericholscher","Jackie Kazil":"https:\\/\\/twitter.com\\/JackieKazil","Lorena Mesa":"https:\\/\\/twitter.com\\/loooorenanicole","Thomas Wouters":"https:\\/\\/twitter.com\\/Yhg1s","Kushal Das":"https:\\/\\/twitter.com\\/kushaldas","Marlene Mhangami":"https:\\/\\/twitter.com\\/marlene_zw","Van Lindberg":"https:\\/\\/twitter.com\\/VanL","Ewa Jodlowska":"https:\\/\\/twitter.com\\/ewa_jodlowska"},"Area":{"Naomi Ceder":null,"Eric Holscher":null,"Jackie Kazil":"data","Lorena Mesa":"web","Thomas Wouters":"cpython","Kushal Das":"cpython","Marlene Mhangami":null,"Van Lindberg":null,"Ewa Jodlowska":null}}') candidates = pandas.read_json('{"Previous board member":{"Anna Ossowski":"Yes","Cleopatra Douglas":"No","Christopher Neugebauer":"No","Katie McLaughlin":"No","Younggun Kim":"Yes","Paola Katherine Pacheco":"Yes","Thea Flowers":"No","Jeff Triplett":"No","Jess Ingrassellino":"No","Maricela (a.k.a Mayela) Sanchez Miranda":"No","Nina Zakharenko":"No","Mario Corchero":"No","Sergey Sokolov":"No","Lilly Ryan":"No","Tania Sanchez":"No","John Roa":"No"},"Location":{"Anna Ossowski":"Cologne, Germany","Cleopatra Douglas":"Sierra Leone","Christopher Neugebauer":"Petaluma, CA","Katie McLaughlin":"Sidney, Australia","Younggun Kim":"Korea","Paola Katherine Pacheco":"Mendoza, Argentina","Thea Flowers":"Seattle, 
WA","Jeff Triplett":"Lawrence, KS","Jess Ingrassellino":"NYC","Maricela (a.k.a Mayela) Sanchez Miranda":"Mexico","Nina Zakharenko":"Portland, OR","Mario Corchero":"London, UK","Sergey Sokolov":"St Petersburg, Russia","Lilly Ryan":"Melbourne, Australia","Tania Sanchez":"Leeds, UK","John Roa":"NYC"},"Gender":{"Anna Ossowski":"Woman","Cleopatra Douglas":"Woman","Christopher Neugebauer":"Man","Katie McLaughlin":"Woman","Younggun Kim":"Man","Paola Katherine Pacheco":"Woman","Thea Flowers":"Woman","Jeff Triplett":"Man","Jess Ingrassellino":"Woman","Maricela (a.k.a Mayela) Sanchez Miranda":"Woman","Nina Zakharenko":"Woman","Mario Corchero":"Man","Sergey Sokolov":"Man","Lilly Ryan":"Woman","Tania Sanchez":"Woman","John Roa":"Man"},"Twitter":{"Anna Ossowski":"https:\\/\\/twitter.com\\/ossanna16","Cleopatra Douglas":"https:\\/\\/twitter.com\\/succedor23","Christopher Neugebauer":"https:\\/\\/twitter.com\\/chrisjrn","Katie McLaughlin":"https:\\/\\/twitter.com\\/glasnt","Younggun Kim":"https:\\/\\/twitter.com\\/scari_net","Paola Katherine Pacheco":"https:\\/\\/twitter.com\\/pk_pacheco","Thea Flowers":"https:\\/\\/twitter.com\\/theavalkyrie","Jeff Triplett":"https:\\/\\/twitter.com\\/webology","Jess Ingrassellino":"https:\\/\\/twitter.com\\/jess_ingrass","Maricela (a.k.a Mayela) Sanchez Miranda":"https:\\/\\/twitter.com\\/mayela0x14","Nina Zakharenko":"https:\\/\\/twitter.com\\/nnja","Mario Corchero":"https:\\/\\/twitter.com\\/mariocj89","Sergey Sokolov":null,"Lilly Ryan":"https:\\/\\/twitter.com\\/attacus_au","Tania Sanchez":"https:\\/\\/twitter.com\\/ixek","John Roa":"https:\\/\\/twitter.com\\/jhonjairoroa87"},"Conference \\/ meetup organiser":{"Anna Ossowski":"Yes","Cleopatra Douglas":"Yes","Christopher Neugebauer":"Yes","Katie McLaughlin":"Yes","Younggun Kim":"Yes","Paola Katherine Pacheco":"Yes","Thea Flowers":"No","Jeff Triplett":"Yes","Jess Ingrassellino":"Yes","Maricela (a.k.a Mayela) Sanchez Miranda":"Yes","Nina Zakharenko":"Yes","Mario Corchero":"Yes","Sergey 
Sokolov":"Yes","Lilly Ryan":"Yes","Tania Sanchez":"Yes","John Roa":"Yes"},"WG member":{"Anna Ossowski":"Grants","Cleopatra Douglas":"-","Christopher Neugebauer":"Grants","Katie McLaughlin":"-","Younggun Kim":"Grants","Paola Katherine Pacheco":"-","Thea Flowers":"-","Jeff Triplett":null,"Jess Ingrassellino":"-","Maricela (a.k.a Mayela) Sanchez Miranda":null,"Nina Zakharenko":null,"Mario Corchero":null,"Sergey Sokolov":null,"Lilly Ryan":null,"Tania Sanchez":null,"John Roa":null},"Area":{"Anna Ossowski":"web","Cleopatra Douglas":"web","Christopher Neugebauer":null,"Katie McLaughlin":"web","Younggun Kim":"data","Paola Katherine Pacheco":"data","Thea Flowers":null,"Jeff Triplett":"web","Jess Ingrassellino":null,"Maricela (a.k.a Mayela) Sanchez Miranda":"web","Nina Zakharenko":"web","Mario Corchero":"cpython","Sergey Sokolov":"web","Lilly Ryan":null,"Tania Sanchez":"data","John Roa":"web"}}') """ Explanation: title: PSF candidates author: Marc Garcia date: 2018-05-31 category: python tags: python psf PSF board of directors analysis In a vile attempt to bias the results of the current election for new directors for the Python Software Foundation, I'm sharing this analysis I made. My goal was to better understand the current board of directors, and how the new candidates can impact it. Motivation As a Pythonista, the decisions made by the board of directors affect me. And while all candidates look great to me, I think it's key to have a board of directors as diverse as possible. The decisions of the BoD affect us all, and having only male candidates, candidates from the US, candidates working on the web site of Python... would make these decisions biased in an undesirable way. Disclaimer I wish the information I wanted to analyze was available in a structured and reliable way, but it's not. The information used was mainly obtained from the candidates wiki, and the directors' and candidates' Twitter and LinkedIn profiles. This means the information may be incomplete or incorrect.
If you find anything wrong, please contact me and I'll fix it. While there are 17 candidates for the PSF BoD, I only show information for 16 of them here. I am the 17th candidate, and I wanted this analysis to be as fair as possible, and not a propaganda exercise. This being said, don't assume that removing myself from the analysis makes it unbiased. I analyse what is important to me (for example, I care a lot about who contributes to the Python projects' code), and you may care about something different. End of explanation """ fig = pyplot.figure() pyplot.subplot(121) directors.groupby('Gender').size().plot(kind='bar') pyplot.title('Continuing directors') pyplot.subplot(122) candidates.groupby('Gender').size().plot(kind='bar') pyplot.title('Candidates'); """ Explanation: Gender I think the Python community has made a great effort in terms of gender diversity. We surely need to continue the efforts to make sure conferences and user groups are more diverse, but at the BoD level, things seem as good as they can be. End of explanation """ fig = pyplot.figure() pyplot.subplot(121) directors.groupby('Area').size().plot(kind='bar') pyplot.title('Continuing directors') pyplot.subplot(122) candidates.groupby('Area').size().plot(kind='bar') pyplot.title('Candidates'); """ Explanation: Python areas It's difficult to cluster people into the different Python areas, or even to define what the areas themselves are. But at the same time, it wouldn't be nice to have the whole Python community represented by directors that never work with Django, or with the PyData stack. My approximation has been to divide the directors and candidates between the ones more focused on web, on data or on the language itself, and consider the rest neutral. From the plots below, it seems like all areas are represented, but personally I don't think it'd harm to have more representation from the web and data world. End of explanation """
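For what it's worth, the groupby('Gender').size() tallies above are plain frequency counts. The same operation in pure Python with collections.Counter (toy values, not the real candidate data):

```python
from collections import Counter

genders = ['Woman', 'Man', 'Woman', 'Woman', 'Man']  # toy stand-in column
counts = Counter(genders)
print(counts['Woman'], counts['Man'])  # 3 2
```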
minesense/VisTrails
examples/api/ipython-notebook.ipynb
bsd-3-clause
import vistrails as vt """ Explanation: VisTrails API example This notebook showcases the new API. Inlined are some comments and explanations. End of explanation """ vt.ipython_mode(True) """ Explanation: The new API is exposed under the top-level vistrails package. The moment you use one of the API functions, like load_vistrail(), it will create an application and load the same configuration that the VisTrails application uses (although it will automatically enable packages the moment you need them). End of explanation """ vistrail = vt.load_vistrail('simplemath.vt') """ Explanation: This explicitly requests IPythonMode to be enabled on output modules, so that pipeline executions will put results on the notebook (similarly to %matplotlib inline for matplotlib plots). Vistrails and Pipelines You can get a Vistrail through load_vistrail(). End of explanation """ vistrail vistrail.select_latest_version() vistrail vistrail.get_pipeline(2) """ Explanation: A Vistrail is a whole version tree, where each version is a different pipeline. From it we can get Pipelines, but it is also stateful (i.e. has a current version); this is useful for editing (creating new versions from the current one). It also provides the interface that Pipeline has, implicitly acting on the current_pipeline. If GraphViz is available, Vistrail and Pipeline will be rendered in the IPython notebook. End of explanation """ tabledata = vt.load_package('org.vistrails.vistrails.tabledata') tabledata """ Explanation: Packages Only basic_modules (and abstractions?) are loaded on initialization, so that using the API stays fast. A package might be auto-enabled when it is requested, which is efficient and convenient. Note that load_package() only accepts package identifiers.
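The dot/bracket module access shown below follows a pattern that can be sketched generically. Here is a toy pure-Python wrapper illustrating that pattern only (this is not VisTrails' actual implementation):

```python
class Namespace(object):
    # Toy stand-in: attributes walk namespaces, brackets take full 'ns|name' keys
    def __init__(self, modules):
        self._modules = modules  # e.g. {'BuildTable': ..., 'read|CSVFile': ...}

    def __getattr__(self, name):
        # dot syntax: either a module, or a nested namespace to keep drilling into
        matches = dict((k.split('|', 1)[1], v) for k, v in self._modules.items()
                       if k.startswith(name + '|'))
        if matches:
            return Namespace(matches)
        return self._modules[name]

    def __getitem__(self, key):
        # bracket syntax: the full 'namespace|name' key, pipe included
        return self._modules[key]

pkg = Namespace({'BuildTable': 'BuildTable-module', 'read|CSVFile': 'CSVFile-module'})
assert pkg.BuildTable == pkg['BuildTable']
assert pkg.read.CSVFile == pkg['read|CSVFile']
# pkg['read'] raises KeyError: namespaces need the dot syntax, like MissingModule below
```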
End of explanation """ tabledata.convert from vistrails.core.modules.module_registry import MissingModule try: tabledata['convert'] # can't get namespaces this way, use a dot except MissingModule: pass else: assert False tabledata.BuildTable, tabledata['BuildTable'] tabledata.read.CSVFile, tabledata['read|CSVFile'] """ Explanation: You can get Modules from the package using the dot or bracket syntax. These modules are "dangling" modules, not yet instantiated in a specific pipeline/vistrail. These will be useful once editing pipelines is added to the API. End of explanation """ import traceback outputs = vt.load_vistrail('outputs.vt') outputs.select_version(1) outputs # Errors try: result = outputs.execute() except vt.ExecutionErrors: traceback.print_exc() else: assert False # Results outputs.select_latest_version() result = outputs.execute() result outputs outputs.current_pipeline """ Explanation: Execution In addition to executing a Pipeline or Vistrail, you can easily pass values in on InputPort modules (to use subworkflows as Python functions) and get results out (either on OutputPort modules or any port of any module). Execution returns a Results object from which you can get all of this. In addition, output modules (such as matplotlib's MplFigureOutput) will output to the IPython notebook if possible.
Gets output End of explanation """ result.module_output(0) """ Explanation: This gets the value on any output port of any module (no need to insert OutputPort or GenericOutput modules, if you know how to find the module): End of explanation """ result.output_port('msg') """ Explanation: This gets the value passed to an OutputPort module, using the OutputPort's name: End of explanation """ pipeline = vistrail.current_pipeline pipeline in_a = pipeline.get_input('in_a') assert (in_a == pipeline.get_module('First input')) is True in_a """ Explanation: Sets inputs End of explanation """ result = pipeline.execute(in_a == 2, in_b=4) result.output_port('out_times'), result.output_port('out_plus') """ Explanation: We need to provide value to this workflow, for its two InputPort modules. Input can be supplied to execute() in two ways: * either by using module_obj == value, where module_obj is a module obtained from the pipeline, using get_input() or get_module(); * or by using module_name=value, where module_name is the name set on an InputPort module Note that, to Python, module_obj is a variable and must be bound to a value (of type Module), whereas module_name is a keyword-parameter name. End of explanation """ im = vt.load_vistrail('imagemagick.vt') im.select_version('read') im """ Explanation: Other example End of explanation """ im.execute().output_port('result') im.select_version('blur') im im.execute().output_port('result') im.select_version('edges') im.execute().output_port('result') """ Explanation: Note that if you print a File value, IPython will try to render it. End of explanation """ mpl = vt.load_vistrail('../matplotlib/pie_ex1.vt') mpl.select_latest_version() """ Explanation: Output mode End of explanation """ mpl.execute() richtext = vt.load_vistrail('out_html.xml') richtext.select_latest_version() """ Explanation: This workflow uses MplFigureOutput, which outputs to the IPython notebook if available (and since the spreadsheet is not running). 
End of explanation """ richtext.execute() tbl = vt.load_vistrail('table.xml') tbl.select_latest_version() """ Explanation: This one uses RichTextOutput: End of explanation """ tbl.execute() render = vt.load_vistrail('brain_output.xml') render.select_latest_version() """ Explanation: TableOutput: End of explanation """ render.execute() """ Explanation: And vtkRendererOutput: End of explanation """ import urllib2 basic = vt.load_package('org.vistrails.vistrails.basic') pythoncalc = vt.load_package('org.vistrails.vistrails.pythoncalc') new_vistrail = vt.Vistrail() # Simple Integer constant data = new_vistrail.controller.add_module_from_descriptor(basic.Integer.descriptor) new_vistrail.controller.update_function(data, 'value', ['8']) # PythonSource module pythonblock = new_vistrail.controller.add_module_from_descriptor(basic.PythonSource.descriptor) new_vistrail.controller.update_ports(pythonblock.id, [], [('output', 'computed_number', 'org.vistrails.vistrails.basic:Float')]) new_vistrail.controller.update_function(pythonblock, 'source', [urllib2.quote(r"""\ number = 211856436.75 while number > 10.0: number /= 7.0 computed_number = number """)]) # Python calc that multiply inputs multiply = new_vistrail.controller.add_module_from_descriptor(pythoncalc.PythonCalc.descriptor) new_vistrail.controller.update_function(multiply, 'op', ['*']) # StandardOutput, here it will print to the notebook output = new_vistrail.controller.add_module_from_descriptor(basic.StandardOutput.descriptor) # Add connections new_vistrail.controller.add_connection(data.id, 'value', multiply.id, 'value1') new_vistrail.controller.add_connection(pythonblock.id, 'computed_number', multiply.id, 'value2') new_vistrail.controller.add_connection(multiply.id, 'value', output.id, 'value') new_vistrail.current_version new_vistrail.current_pipeline new_vistrail.execute() """ Explanation: Pipeline manipulation The API wrapper doesn't currently provide easier methods to manipulate pipelines. 
This is mainly because these operations need all of the VisTrails concepts anyway. You can however call the controller methods on vistrail.controller directly, using modules obtained through load_package(). End of explanation """
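One detail worth flagging in the cell above: the PythonSource body is percent-quoted with urllib2.quote before it goes into update_function. On Python 3 the same helper lives in urllib.parse; a quick round-trip sketch (not VisTrails code):

```python
from urllib.parse import quote, unquote

source = "number = 211856436.75\nwhile number > 10.0:\n    number /= 7.0\n"
stored = quote(source)  # what would be passed to update_function(...)

assert unquote(stored) == source
assert '\n' not in stored  # newlines travel as %0A
```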
betoesquivel/comment_summarization
.ipynb_checkpoints/guardian_first_attempt-checkpoint.ipynb
mit
import requests from bs4 import BeautifulSoup url = "http://www.theguardian.com/discussion/p/4fqc7" r = requests.get(url) html = r.text soup = BeautifulSoup(html, "html.parser") comments = soup.select(".d-comment__main") comment_authors = soup.select(".d-comment__author") print len (comments), " comments found in first page." print len (comment_authors), " authors found in first page." """ Explanation: Get the comments' HTML End of explanation """ comments_dict = [] parsed_comments = [] parsed_authors = [] for comment, author in zip(comments, comment_authors): c = comment.select(".d-comment__body")[0].text a = author['title'] comments_dict.append({"text": c, "author": a}) parsed_comments.append(c) parsed_authors.append(a) print comments_dict[:6] """ Explanation: Extract the comments End of explanation """ from sklearn.feature_extraction.text import TfidfVectorizer import nltk.stem english_stemmer = nltk.stem.SnowballStemmer('english') class StemmedTfidfVectorizer(TfidfVectorizer): def build_analyzer(self): analyzer=super(StemmedTfidfVectorizer,self).build_analyzer() return lambda doc:(english_stemmer.stem(w) for w in analyzer(doc)) """ Explanation: Create comment stemmer and TFIDF vectorizer We will do some stemming in the comment text in order to create shorter vectors that represent each comment. End of explanation """ stem_vectorizer = StemmedTfidfVectorizer(min_df=1, stop_words='english') stem_analyze = stem_vectorizer.build_analyzer() # print [tok for tok in stem_analyze ("When we have a real living wage, there will no longer need to be 'stupid tax credits'. 
Until then, people need a top up to support themselves, because the companies they work for, don't want to give people their dues.")] comment_vectors = stem_vectorizer.fit_transform(parsed_comments) print "%d features found" % (len(stem_vectorizer.get_feature_names())) print stem_vectorizer.get_feature_names() """ Explanation: Vectorize extracted comments End of explanation """ formatted = ["Comment #{0}\n{1}".format(i,cv) for i, cv in enumerate(comment_vectors)] for f in formatted: print f """ Explanation: These are the vectorized comments End of explanation """ from sklearn.cluster import KMeans km = KMeans(n_clusters=4, init='k-means++', max_iter=100, n_init=1) km.fit(comment_vectors) # Top terms per cluster (out of the 4 clusters) order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = stem_vectorizer.get_feature_names() for i in range(4): print "Cluster %d:"%(i) for ind in order_centroids[i, :10]: print " %s" % terms[ind] print "" """ Explanation: Apply clustering algorithm to vectorized comments KMeans with 4 clusters End of explanation """
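The assignment step at the heart of the KMeans call above is just "nearest centroid". A tiny pure-Python sketch of that step (toy 2-D points rather than the TF-IDF vectors):

```python
def nearest_centroid(point, centroids):
    # index of the centroid closest to point (squared Euclidean distance)
    def sqdist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(range(len(centroids)), key=lambda i: sqdist(point, centroids[i]))

centroids = [(0.0, 0.0), (5.0, 5.0)]
print(nearest_centroid((1.0, 0.5), centroids))  # 0
print(nearest_centroid((4.0, 6.0), centroids))  # 1
```

KMeans alternates this assignment step with recomputing each centroid as the mean of its assigned points.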
openmrslab/suspect
docs/notebooks/tut06_mpl.ipynb
mit
import suspect import numpy as np import matplotlib.pyplot as plt """ Explanation: 6. Image co-registration One of the most important steps in MRS processing is visualising the spectroscopy region on a structural image. This not only allows us to validate that the voxel was correctly placed and assess any partial volume effects, but can also be used to estimate the mean T2 of the voxel from the gray and white matter content (for brain spectroscopy). Quantification tools use the T2 to correct for signal relaxation and adjust the metabolite concentrations, but by default they use a generic value. By calculating your own from the voxel location, you can get more accurate concentrations. Co-registration is also particularly important for spectroscopic imaging, for example when generating heatmap overlays on top of structural images. In this tutorial, we will look at how to relate the coordinate systems of the scanner, the image and the spectroscopy together, and how to plot a simple voxel outline on top of an image slice. In later tutorials, we will see how to use this for T2 estimation, and look at more advanced renderings for CSI data. End of explanation """ t1 = suspect.image.load_dicom_volume("mprage/IM-0001-0001.IMA") """ Explanation: Loading structural MR images is handled by the suspect.image module. Currently, suspect only supports working with DICOM files, although support for NIFTI and other formats is planned for the future. The load_dicom_volume() function finds all the DICOM files in the directory of the specified file which have the same SeriesUID, and loads them all into a single 3D volume of the ImageBase class. End of explanation """ plt.imshow(t1[100], cmap=plt.cm.gray) """ Explanation: Like the MRSData class, the ImageBase class is a numpy.ndarray subclass with a few extra helper functions. This makes it very easy to display image slices using matplotlib's imshow() function. 
End of explanation """ origin_position = t1.to_scanner(0, 0, 0) print(origin_position) """ Explanation: For this tutorial, the two most important functions we are going to look at are to_scanner() and from_scanner(). These functions are used to convert between the voxel coordinate system of the image, and the intrinsic coordinates of the scanner. For example, we can take the bottom left hand voxel of the image: (0, 0, 0) and see where that is located inside the scanner. End of explanation """ isocenter_voxel = t1.from_scanner(0, 0, 0) print(isocenter_voxel) """ Explanation: origin_position gives us the location of the (0, 0, 0) voxel relative to the scanner isocenter, in mm, using the standard scanner coordinates, with x increasing from right to left, y increasing from anterior to posterior and z increasing from inferior to superior. On the other hand, we can also work out which voxel is closest to isocenter using the from_scanner() function. End of explanation """ isocenter_indices = isocenter_voxel.round().astype(int) print(isocenter_indices) """ Explanation: Note that we don't get integer values for the voxel coordinates, this is important when we want to chain coordinate systems together later on. However, if we want to slice into our volume we can convert them to integers (making sure to round first so that e.g. 99.75 goes to 100 and not 99): End of explanation """ pcg = suspect.io.load_rda("pcg.rda") pcg_centre = pcg.to_scanner(0, 0, 0) print(pcg_centre) """ Explanation: A quick word here about ImageBase coordinates. Conceptually suspect loads the DICOM volume as a stack of slices which means that voxels are indexed as volume[slice, row, column]. This is done to make it easier to plot the images with matplotlib where the outer dimension is the vertical. However, the to_scanner() and from_scanner() functions both stick to the standard order of x, y, z, or column, row, slice. 
Therefore when accessing a particular slice of an image, the coordinates must be reversed. This is confusing but better than any of the alternatives so far proposed. Now that we have seen how to load an image and understand its coordinate systems, it is time to look at the spectroscopy side. Fortunately, MRSData is a subclass of ImageBase, so the spectroscopy object has exactly the same to_scanner() and from_scanner() methods available. In this example, we will be using a voxel acquired from the posterior cingulate gyrus, in the rda format. End of explanation """ pcg_centre_index = t1.from_scanner(*pcg_centre).round().astype(int) print(pcg_centre_index) """ Explanation: As we can see, this voxel is located slightly behind and above the isocenter position. Now that we know the location of the centre of the voxel in scanner coordinates, we can convert that into the image space to tell us which slice goes through the middle of the voxel. End of explanation """ corner_coords_pcg = [[-0.5, -0.5, 0], [0.5, -0.5, 0], [0.5, 0.5, 0], [-0.5, 0.5, 0], [-0.5, -0.5, 0]] corner_coords = np.array([t1.from_scanner(*pcg.to_scanner(*coord)) for coord in corner_coords_pcg]) """ Explanation: This tells us that slice 125 goes through the centre of the voxel. In this case the voxel is aligned with the plane of the image, so we don't have to worry about drawing all the sides. Instead we will draw the transverse slice through the middle of the voxel, as shown in this diagram. End of explanation """ plt.imshow(t1[pcg_centre_index[2]], cmap=plt.cm.gray) plt.plot(corner_coords[:, 0], corner_coords[:, 1]) plt.xlim([0, t1.shape[2] - 1]) plt.ylim([t1.shape[1] - 1, 0]) """ Explanation: We start with the list of points in the spectroscopy voxel coordinates. Note that we have added a second copy of the first point at the end to make it easier to plot a loop. We then use a list comprehension to convert each point first into the scanner coordinates and then into the image space. 
Finally we convert the list of coordinates into a numpy array to make it easier to access the data from matplotlib. Now we are ready to plot the voxel. We start by displaying the correct image slice with imshow(), then pass the x and y coordinates of the voxel to plot(). As a final touch, we use the xlim() and ylim() functions to restrict the axes only to the image range, otherwise matplotlib will pad the image with blank space. End of explanation """ sagittal_voxel = [[0, -0.5, -0.5], [0, 0.5, -0.5], [0, 0.5, 0.5], [0, -0.5, 0.5], [0, -0.5, -0.5]] sagittal_positions = np.array([t1.from_scanner(*pcg.to_scanner(*coord)) for coord in sagittal_voxel]) plt.imshow(t1[:, :, pcg_centre_index[0]], cmap=plt.cm.gray) plt.plot(sagittal_positions[:, 1], sagittal_positions[:, 2]) plt.xlim([0, t1.shape[1] - 1]) plt.ylim([0, t1.shape[0] - 1]) """ Explanation: And that is really all there is to it. Now that we know how to convert spectroscopy points into image space, it is very easy to get the voxel on coronal or sagittal images as well. We just have to change the points on the spectroscopy voxel and take a different slice through the image volume. The one important thing to remember is that for sagittal and coronal images the vertical axis is z, which increases from inferior to superior, unlike the default in matplotlib where images are plotted downwards. This means we have to use ylim() to reverse the direction of plotting, to get our images the right way up. End of explanation """
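Under the hood, to_scanner() and from_scanner() are affine maps between voxel indices and scanner millimetres. Here is a toy sketch of the idea using a hypothetical 4x4 affine — suspect builds the real one from the DICOM headers, so the 1 mm isotropic spacing and corner offset below are assumptions purely for illustration:

```python
import numpy as np

# Hypothetical 4x4 affine of the kind an image volume uses internally:
# homogeneous voxel indices -> scanner mm. Assumed here: 1 mm isotropic
# voxels with the (0, 0, 0) corner voxel at scanner (-127.5, -127.5, -127.5).
A = np.eye(4)
A[:3, 3] = [-127.5, -127.5, -127.5]

def to_scanner(i, j, k):
    # voxel (column, row, slice) -> scanner x, y, z in mm
    return (A @ np.array([i, j, k, 1.0]))[:3]

def from_scanner(x, y, z):
    # scanner mm -> (possibly fractional) voxel indices
    return (np.linalg.inv(A) @ np.array([x, y, z, 1.0]))[:3]

print(from_scanner(0.0, 0.0, 0.0))  # voxel closest to isocenter, before rounding
```

Because both directions use the same affine, a round trip from_scanner(to_scanner(...)) recovers the original voxel indices, which is what lets us chain the spectroscopy and image coordinate systems together.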
steven-murray/pydftools
docs/example_notebooks/basic_example.ipynb
mit
# Import relevant libraries %matplotlib inline import pydftools as df import time # Make figures a little bigger in the notebook import matplotlib as mpl mpl.rcParams['figure.dpi'] = 120 # For displaying equations from IPython.display import display, Markdown """ Explanation: Basic Example This example is a basic introduction to using pydftools. It mimics example 1 of dftools. End of explanation """ n = 1000 seed = 1234 sigma = 0.5 model = df.model.Schechter() p_true = model.p0 """ Explanation: Choose some parameters to use throughout End of explanation """ data, selection, model, other = df.mockdata(n = n, seed = seed, sigma = sigma, model=model, verbose=True) """ Explanation: Generate mock data with observing errors: End of explanation """ survey = df.DFFit(data=data, selection=selection, model=model) """ Explanation: Create a fitting object (the fit is not performed until the fit object is accessed): End of explanation """ start = time.time() print(survey.fit.p_best) print("Time for fitting: ", time.time() - start, " seconds") """ Explanation: Perform the fit and get the best set of parameters: End of explanation """ fig = df.plotting.plotcov([survey], p_true=p_true, figsize=1.3) """ Explanation: Plot the covariances: End of explanation """ fig, ax = df.mfplot(survey, xlim=(1e7,2e12), ylim=(1e-4,2), p_true = p_true, bin_xmin=7.5, bin_xmax=12) """ Explanation: Plot the mass function itself: End of explanation """ display(Markdown(survey.fit_summary(format_for_notebook=True))) """ Explanation: Write out fitted parameters with (Gaussian) uncertainties: End of explanation """
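For reference, the Schechter model being fitted has the classic form phi(M) dM = phi* (M/M*)^alpha exp(-M/M*) d(M/M*). A small sketch of that function — the parameter values are placeholders, and dftools' internal parametrisation (e.g. log10 units for phi* and M*) may differ:

```python
from math import exp

def schechter(M, phi_star=1e-2, M_star=1e11, alpha=-1.3):
    # Classic Schechter form: phi(M) dM = phi* (M/M*)**alpha * exp(-M/M*) d(M/M*).
    # Parameter values here are placeholders, not fitted values; dftools'
    # internal parametrisation (e.g. log10 phi*, log10 M*) may differ.
    mu = M / M_star
    return (phi_star / M_star) * mu**alpha * exp(-mu)

print(schechter(1e11))  # at the knee M = M*, this equals (phi*/M*) * exp(-1)
```

The exponential cutoff above M* is what makes the high-mass end of the mass-function plot fall away so steeply.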
BDannowitz/polymath-progression-blog
jlab-ml-lunch-2/notebooks/03-Recurrent-Network-Model.ipynb
gpl-2.0
%matplotlib inline import pandas as pd import numpy as np from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM, Dense, LeakyReLU, Dropout, ReLU, GRU, TimeDistributed, Conv2D, MaxPooling2D, Flatten from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.callbacks import EarlyStopping from jlab import load_test_data, get_test_detector_plane """ Explanation: 03 - Sequence Model Approach The more 'classical' approach to solving this problem Train a model that can take any number of 'steps' Makes a prediction on next step based on previous steps Learn from full tracks For test tracks, predict what the next step's values will be End of explanation """ X_train = pd.read_csv('MLchallenge2_training.csv') X_test = load_test_data('test_in.csv') eval_planes = get_test_detector_plane(X_test) # Also, load our truth values y_true = pd.read_csv('test_prediction.csv', names=['x', 'y', 'px', 'py', 'pz'], header=None) X_test.head() y_true.head() """ Explanation: Load up and prep the datasets End of explanation """ N_SAMPLES = len(X_train) N_DETECTORS = 25 N_KINEMATICS = 6 SHAPE = (N_SAMPLES, N_DETECTORS-1, N_KINEMATICS) X_train_list = [] y_train_array = np.ndarray(shape=(N_SAMPLES, N_KINEMATICS-1)) for ix in range(N_SAMPLES): seq_len = np.random.choice(range(8, 25)) track = X_train.iloc[ix].values.reshape(N_DETECTORS, N_KINEMATICS) X_train_list.append(track[0:seq_len]) # Store the kinematics of the next in the sequence # Ignore the 3rd one, which is z y_train_array[ix] = track[seq_len][[0,1,3,4,5]] for track in X_train_list[:10]: print(len(track)) X_train_list = pad_sequences(X_train_list, dtype=float) for track in X_train_list[:10]: print(len(track)) X_train_array = np.array(X_train_list) X_train_array.shape y_train_array.shape """ Explanation: Construct the training data and targets For each track Choose a number N between 8 and 24 That track will have 6 kinematics for N blocks The target variable will be the 6 
kinematic variables for the N+1th detector block This will cause variable length sequences Apply pad_sequences to prepend with zeros appropriately Training Dataset End of explanation """ N_TEST_SAMPLES = len(X_test) y_test_array = y_true.values X_test_list = [] for ix in range(N_TEST_SAMPLES): seq_len = get_test_detector_plane(X_test.iloc[ix]) track = X_test.iloc[ix].values.reshape(N_DETECTORS, N_KINEMATICS) X_test_list.append(track[0:seq_len]) X_test_list = pad_sequences(X_test_list, dtype=float) X_test_array = np.array(X_test_list) X_test_array.shape y_test_array.shape y_true.values.shape import pandas as pd import numpy as np from math import floor from tensorflow.keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split data = pd.read_csv('MLchallenge2_training.csv') # Z values are constant -- what are they? Z_VALS = data[['z'] + [f'z{i}' for i in range(1, 25)]].loc[0].values # Z-distance from one timestep to another is set; calculate it Z_DIST = [Z_VALS[i+1] - Z_VALS[i] for i in range(0, 24)] + [0.0] # Number of timesteps N_DETECTORS = 25 # Provided number of kinematics N_KINEMATICS = 6 # Number of features after engineering them all N_FEATURES = 13 def get_detector_meta(kin_array, det_id): # Is there a large gap after this detector? # 0 is for padded timesteps # 1 is for No, 2 is for Yes mind_the_gap = int(det_id % 6 == 0) + 1 # Detector group: 1 (origin), 2, 3, 4, or 5 det_grp = floor((det_id-1) / 6) + 2 # Detectors numbered 1-6 (origin is 6) # (Which one in the group of six is it?) det_rank = ((det_id-1) % 6) + 1 # Distance to the next detector? 
z_dist = Z_DIST[det_id] # Transverse momentum (x-y component) pt = np.sqrt(np.square(kin_array[3]) + np.square(kin_array[4])) # Total momentum p_tot = np.sqrt(np.square(kin_array[3]) + np.square(kin_array[4]) + np.square(kin_array[5])) # Put all the calculated features together det_meta = np.array([det_id, mind_the_gap, det_grp, det_rank, z_dist, pt, p_tot]) # Return detector data plus calculated features return np.concatenate([kin_array, det_meta], axis=None) def tracks_to_time_series(X): """Convert training dataframe to multivariate time series training set Pivots each track to a series ot timesteps. Then randomly truncates them to be identical to the provided test set. The step after the truncated step is saved as the target. Truncated sequence are front-padded with zeros. Parameters ---------- X : pandas.DataFrame Returns ------- (numpy.ndarray, numpy.ndarray) Tuple of the training data and labels """ X_ts_list = [] n_samples = len(X) y_array = np.ndarray(shape=(n_samples, N_KINEMATICS-1)) for ix in range(n_samples): # Randomly choose how many detectors the track went through track_len = np.random.choice(range(8, 25)) # Reshape into ts-like track = X.iloc[ix].values.reshape(N_DETECTORS, N_KINEMATICS) #eng_track = np.zeros(shape=(N_DETECTORS, N_FEATURES)) #for i in range(0, N_DETECTORS): # eng_track[i] = get_detector_meta(track[i], i) # Truncate the track to only N detectors X_ts_list.append(track[0:track_len]) # Store the kinematics of the next in the sequence # Ignore the 3rd one, which is z y_array[ix] = track[track_len][[0,1,3,4,5]] # Pad the training sequence X_ts_list = pad_sequences(X_ts_list, dtype=float) X_ts_array = np.array(X_ts_list) return X_ts_array, y_array X, y = tracks_to_time_series(data) X[3] y[3] X_train, X_test, y_train, y_test = train_test_split(X, y) len(X_train), len(X_test) """ Explanation: Validation Dataset End of explanation """ from tensorflow.keras.models import Sequential from tensorflow.keras.layers import GRU, Dense, LeakyReLU, 
Dropout from tensorflow.keras.callbacks import EarlyStopping import joblib def lrelu(x): return LeakyReLU()(x) def gru_model(gru_units=35, dense_units=100, dropout_rate=0.25): """Model definition. Three layers of Gated Recurrent Units (GRUs), utilizing LeakyReLU activations, finally passing GRU block output to a dense layer, passing its output to the final output layer, with a touch of dropout in between. Bon apetit. Parameters ---------- gru_units : int dense_units : int dropout_rate : float Returns ------- tensorflow.keras.models.Sequential """ model = Sequential() model.add(GRU(gru_units, activation=lrelu, input_shape=(N_DETECTORS-1, N_KINEMATICS), return_sequences=True)) model.add(GRU(gru_units, activation=lrelu, return_sequences=True)) model.add(GRU(gru_units, activation=lrelu)) model.add(Dense(dense_units, activation=lrelu)) model.add(Dropout(dropout_rate)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model model = gru_model() model.summary() from tensorflow.keras.utils import plot_model plot_model(model, to_file='gru_model.png', show_shapes=True) es = EarlyStopping(monitor='val_loss', mode='min', patience=5, restore_best_weights=True) history = model.fit( x=X_train, y=y_train, validation_data=(X_test, y_test), callbacks=[es], epochs=50, ) model.save("gru_model.h5") joblib.dump(history.history, "gru_model.history") history = joblib.load("dannowitz_jlab2_model_20191031.history") import matplotlib.pyplot as plt # Plot training & validation loss values plt.plot(history['loss']) plt.plot(history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper right') plt.show() """ Explanation: Multi-layer GRU Model with LReLU End of explanation """ pred = pd.read_csv('data/submission/dannowitz_jlab2_submission_20191112.csv', header=None) truth = pd.read_csv('data/ANSWERS.csv', header=None) # Calculate square root of the mean squared error # Then apply weights and sum them 
all up sq_error = (truth - pred).applymap(np.square) mse = sq_error.sum() / len(truth) rmse = np.sqrt(mse) rms_weighted = rmse / [0.03, 0.03, 0.01, 0.01, 0.011] score = rms_weighted.sum() score """ Explanation: Calculate the score on my predictions Scoring code provided by Thomas Britton Each kinematic has different weight End of explanation """ def lstm_model(): model = Sequential() model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1, activation='linear')) model.compile(loss='mse', optimizer='adam') return model model = lstm_model() model.summary() history = model.fit(x=X_train_array, y=y_train_array, validation_data=(X_test_array, y_test_array), epochs=5) history = model.fit(x=X_train_array, y=y_train_array, validation_data=(X_test_array, y_test_array), epochs=50, use_multiprocessing=True) model = lstm_model() es = EarlyStopping(monitor='val_loss', mode='min') history = model.fit(x=X_train_array, y=y_train_array, validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) model.save("lstm100-dense100-dropout025-epochs20-early-stopping.h5") def lstm_model_lin(): model = Sequential() model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1, activation='linear')) model.compile(loss='mse', optimizer='adam') return model lin_act_model = lstm_model_lin() es = EarlyStopping(monitor='val_loss', mode='min') history = lin_act_model.fit(x=X_train_array[:10000], y=y_train_array[:10000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) def lstm_model_adam(): model = Sequential() model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) 
model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model adam_model = lstm_model_adam() es = EarlyStopping(monitor='val_loss', mode='min') history = adam_model.fit(x=X_train_array[:10000], y=y_train_array[:10000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) def lstm_model_dropout50(): model = Sequential() model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.50)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model dropout50_model = lstm_model_dropout50() es = EarlyStopping(monitor='val_loss', mode='min') history = dropout50_model.fit(x=X_train_array[:10000], y=y_train_array[:10000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) def lstm_model_nodropout(): model = Sequential() model.add(LSTM(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model nodropout_model = lstm_model_nodropout() es = EarlyStopping(monitor='val_loss', mode='min') history = nodropout_model.fit(x=X_train_array[:10000], y=y_train_array[:10000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) def lstm_model_relu(): model = Sequential() model.add(LSTM(200, activation='relu', input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation='relu')) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model relu_model = lstm_model_relu() es = EarlyStopping(monitor='val_loss', mode='min') history = relu_model.fit(x=X_train_array[:10000], y=y_train_array[:10000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, 
use_multiprocessing=True) def model_gru(): model = Sequential() model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model gru_model = model_gru() es = EarlyStopping(monitor='val_loss', mode='min') history = gru_model.fit(x=X_train_array[:10000], y=y_train_array[:10000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) """ Explanation: Early Conclusions GRU > LSTM LeakyReLU > ReLU adam > rmsprop dropout 0.25 > dropout 0.5 > no dropout End of explanation """ def model_v2(): model = Sequential() model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model v2_model = model_v2() es = EarlyStopping(monitor='val_loss', mode='min') history = v2_model.fit(x=X_train_array, y=y_train_array, validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=8, use_multiprocessing=True) def model_v2_deep(): model = Sequential() model.add(GRU(30, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS), return_sequences=True)) model.add(GRU(30, activation=LeakyReLU(), return_sequences=True)) model.add(GRU(30, activation=LeakyReLU())) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model v2_model_deep = model_v2_deep() v2_model_deep.summary() es = EarlyStopping(monitor='val_loss', mode='min', patience=2, restore_best_weights=True) history =
v2_model_deep.fit(x=X_train_array, y=y_train_array, validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=8, use_multiprocessing=True) def model_v2_dbl_gru(): model = Sequential() model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS), return_sequences=True)) model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model v2_model_dbl_gru = model_v2_dbl_gru() es = EarlyStopping(monitor='val_loss', mode='min') history = v2_model_dbl_gru.fit(x=X_train_array[:20000], y=y_train_array[:20000], validation_data=(X_test_array, y_test_array), #callbacks=[es], epochs=10, use_multiprocessing=True) def model_v2_2x_dropout(): model = Sequential() model.add(GRU(200, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dropout(0.25)) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model v2_model_dbl_dropout = model_v2_2x_dropout() es = EarlyStopping(monitor='val_loss', mode='min') history = v2_model_dbl_dropout.fit(x=X_train_array[:20000], y=y_train_array[:20000], validation_data=(X_test_array, y_test_array), callbacks=[es], epochs=20, use_multiprocessing=True) def model_v2_big_gru(): model = Sequential() model.add(GRU(400, activation=LeakyReLU(), input_shape=(N_DETECTORS-1, N_KINEMATICS))) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model v2_model_big_gru = model_v2_big_gru() es = EarlyStopping(monitor='val_loss', mode='min') history = v2_model_big_gru.fit(x=X_train_array[:20000], y=y_train_array[:20000], validation_data=(X_test_array, y_test_array), #callbacks=[es], epochs=10, use_multiprocessing=True) 
v2_model_big_gru.fit(x=X_train_array[:20000], y=y_train_array[:20000], validation_data=(X_test_array, y_test_array), #callbacks=[es], epochs=15, use_multiprocessing=True, initial_epoch=10) X_train_array.shape from tensorflow.keras.layers import Conv1D def cnn_gru(): model = Sequential() model.add(Conv1D(filters=5, kernel_size=2, strides=1, input_shape=(N_DETECTORS-1, N_KINEMATICS))) #model.add(MaxPooling1D()) model.add(GRU(200, activation=LeakyReLU())) model.add(Dense(100, activation=LeakyReLU())) model.add(Dropout(0.25)) model.add(Dense(N_KINEMATICS-1)) model.compile(loss='mse', optimizer='adam') return model cnn_model = cnn_gru() cnn_model.summary() #es = EarlyStopping(monitor='val_loss', mode='min') history = cnn_model.fit(x=X_train_array[:20000], y=y_train_array[:20000], validation_data=(X_test_array, y_test_array), epochs=10, use_multiprocessing=True) history.history """ Explanation: Try CNN LSTM End of explanation """ from train import train from predict import predict model = train(frac=1.00, filename="dannowitz_jlab2_model", epochs=100, ret_model=True) preds = predict(model_filename="dannowitz_jlab2_model.h5", data_filename="test_in (1).csv", output_filename="danowitz_jlab2_submission.csv") """ Explanation: Enough tinkering around Formalize this into some scripts Make predictions on competition test data End of explanation """
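The scoring recipe shown earlier (per-kinematic RMSE divided by the challenge weights, then summed) can be packaged as a small reusable function. A sketch, with the weights copied from the scoring cell:

```python
from math import sqrt

def weighted_score(truth, pred, weights=(0.03, 0.03, 0.01, 0.01, 0.011)):
    # Same recipe as the scoring cell: per-kinematic RMSE divided by the
    # challenge weight, summed over x, y, px, py, pz.
    # truth and pred are sequences of 5-tuples.
    n = len(truth)
    score = 0.0
    for k, w in enumerate(weights):
        mse = sum((t[k] - p[k]) ** 2 for t, p in zip(truth, pred)) / n
        score += sqrt(mse) / w
    return score
```

A perfect prediction scores 0, and lower is better, so this makes it easy to compare candidate models before generating a submission file.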
nehal96/Deep-Learning-ND-Exercises
Sentiment Analysis/Sentiment Analysis with Andrew Trask/1-framing-problems-for-nns.ipynb
mit
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() len(reviews) reviews[0] labels[0] """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? 
Lesson: Curate a Dataset End of explanation """ print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998) """ Explanation: Lesson: Develop a Predictive Theory End of explanation """ from collections import Counter import numpy as np positive_count = Counter() negative_count = Counter() total_count = Counter() for i in range(len(reviews)): if (labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_count[word] += 1 total_count[word] += 1 else: for word in reviews[i].split(" "): negative_count[word] +=1 total_count[word] +=1 positive_count.most_common()[:15] pos_ratios = Counter() neg_ratios = Counter() for word, count in list(total_count.most_common()): if count > 100: pos_ratio = positive_count[word] / float(total_count[word] + 1) neg_ratio = negative_count[word] / float(total_count[word] + 1) pos_ratios[word] = pos_ratio neg_ratios[word] = neg_ratio pos_ratios.most_common()[:15] neg_ratios.most_common()[:15] """ Explanation: Test Theory End of explanation """
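With the raw counts in hand, one common next step is a smoothed log positive-to-negative ratio, which separates sentiment-laden words from neutral ones. A sketch with toy counts standing in for positive_count and negative_count (the numbers below are made up, not from the review dataset):

```python
from collections import Counter
from math import log

# Toy counts standing in for positive_count / negative_count above
pos = Counter({"great": 90, "terrible": 5, "movie": 50})
neg = Counter({"great": 10, "terrible": 95, "movie": 50})

ratios = Counter()
for word in pos.keys() | neg.keys():
    # +1 smoothing keeps words absent from one class from dividing by zero
    ratios[word] = log((pos[word] + 1) / (neg[word] + 1))
```

Words strongly tied to positive reviews land above 0, negative words land below 0, and neutral words like "movie" sit near 0 — a much cleaner signal than raw counts, which are dominated by common words.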
scotgl/sonify
ver_0.5.1/2. Full_or_Empty_Training_Module.ipynb
gpl-3.0
import random from gtts import gTTS import time from IPython.display import Image, display, clear_output from ipywidgets import widgets import os import platform speechflag = 0 if (platform.system()=='Windows'): speechflag = 2 if (platform.system()!='Windows'): speechflag = 1 display(Image('dep/images/glasses.jpg')) tts = gTTS(text=('In this sonification, we try to represent the emptiness or fullness of something. For instance a drop of water falling into an empty glass sounds like'), lang='en') tts.save("dep/audio/num.mp3") if (speechflag==1): os.system("afplay dep/audio/num.mp3") os.system("afplay dep/audio/empty.mp3") if (speechflag==2): os.system("cmdmp3 dep/audio/num.mp3") os.system("cmdmp3 dep/audio/empty.mp3") tts = gTTS(text=('Or the same drop of water falling into a full glass might sound like'), lang='en') tts.save("dep/audio/num.mp3") if (speechflag==1): os.system("afplay dep/audio/num.mp3") os.system("afplay dep/audio/full.mp3") if (speechflag==2): os.system("cmdmp3 dep/audio/num.mp3") os.system("cmdmp3 dep/audio/full.mp3") tts = gTTS(text=('Notice, as the saying goes, it is difficult to say if a glass is half empty '), lang='en') tts.save("dep/audio/num.mp3") if (speechflag==1): os.system("afplay dep/audio/num.mp3") os.system("afplay dep/audio/half_empty.mp3") if (speechflag==2): os.system("cmdmp3 dep/audio/num.mp3") os.system("cmdmp3 dep/audio/half_empty.mp3") tts = gTTS(text=('or half full '), lang='en') tts.save("dep/audio/num.mp3") if (speechflag==1): os.system("afplay dep/audio/num.mp3") os.system("afplay dep/audio/half_full.mp3") if (speechflag==2): os.system("cmdmp3 dep/audio/num.mp3") os.system("cmdmp3 dep/audio/half_full.mp3") tts = gTTS(text=('Ok.
Hit space bar to go to the next section where you can explore this sonification (before taking the test)'), lang='en') tts.save("dep/audio/num.mp3") if (speechflag==1): os.system("afplay dep/audio/num.mp3") if (speechflag==2): os.system("cmdmp3 dep/audio/num.mp3") """ Explanation: As before, please put on your best headphones and set your system volume to 50%. Words are not sonification, exactly, but in this next section listen and use your ears! Use Space Bar or arrows at bottom right to navigate to next section End of explanation """ ## Time to move onto the Exploration Module Now that you have the general idea of this straightforward sound representation, it is time to move on to the next phase of exploring the sonification. Go back to the list of available notebooks and select "Full or Empty Exploration Module" """ Explanation: Time to move onto the Exploration Module Now that you have the general idea of this straightforward sound representation, it is time to move on to the next phase of exploring the sonification. Go back to the list of available notebooks and select "Full or Empty Exploration Module" End of explanation """
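The save-then-play pattern above repeats for every phrase, so it could be collapsed into a small helper like this sketch. The command names afplay and cmdmp3 are taken from the cells above; the lazy gTTS import is only so the helper can be defined without the package installed:

```python
import os

def play_command(mp3_path, speechflag):
    # speechflag as set above: 1 = macOS/other (afplay), 2 = Windows (cmdmp3)
    player = "afplay" if speechflag == 1 else "cmdmp3"
    return "{} {}".format(player, mp3_path)

def say(text, speechflag, mp3_path="dep/audio/num.mp3"):
    # One call replaces each save-then-play stanza above.
    from gtts import gTTS  # lazy import so this file loads without gTTS
    gTTS(text=text, lang='en').save(mp3_path)
    os.system(play_command(mp3_path, speechflag))
```

With this helper, each narration block above becomes a single say(...) call followed by playing the relevant glass sound.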
qinwf-nuan/keras-js
notebooks/layers/pooling/GlobalAveragePooling3D.ipynb
mit
data_in_shape = (6, 6, 3, 4) L = GlobalAveragePooling3D(data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(270) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalAveragePooling3D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: GlobalAveragePooling3D [pooling.GlobalAveragePooling3D.0] input 6x6x3x4, data_format='channels_last' End of explanation """ data_in_shape = (3, 6, 6, 3) L = GlobalAveragePooling3D(data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(271) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalAveragePooling3D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [pooling.GlobalAveragePooling3D.1] input 3x6x6x3, data_format='channels_first' End of explanation """ data_in_shape = (5, 3, 2, 1) L = 
GlobalAveragePooling3D(data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(272) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalAveragePooling3D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [pooling.GlobalAveragePooling3D.2] input 5x3x2x1, data_format='channels_last' End of explanation """ print(json.dumps(DATA)) """ Explanation: export for Keras.js tests End of explanation """
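The pooling operation these fixtures exercise is just a mean over the three spatial axes; a minimal NumPy sketch (independent of Keras, function name is mine) makes the expected shapes explicit:

```python
import numpy as np

def global_average_pooling_3d(x, data_format='channels_last'):
    """Average over the three spatial axes, keeping the channel axis.

    x has shape (dim1, dim2, dim3, channels) for channels_last,
    or (channels, dim1, dim2, dim3) for channels_first.
    """
    if data_format == 'channels_last':
        return x.mean(axis=(0, 1, 2))
    return x.mean(axis=(1, 2, 3))

# Mirror the first fixture's channels_last input shape (6, 6, 3, 4).
x = np.arange(6 * 6 * 3 * 4, dtype=float).reshape(6, 6, 3, 4)
out = global_average_pooling_3d(x)
print(out.shape)  # (4,)
```

Each output entry is simply the mean of one channel's 6x6x3 block, which is what the expected-data fixtures above encode.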
BYUFLOWLab/MDOnotebooks
StepSize.ipynb
mit
%matplotlib inline
import numpy as np
from math import sin, cos, exp
import matplotlib.pyplot as plt

# just a simple 1D function to illustrate
def f(x):
    return exp(x)*sin(x)

# this is the exact derivative so we can compare performance
def g(x):
    return exp(x)*sin(x) + exp(x)*cos(x)

"""
Explanation: Step Size for Three Gradient Estimation Methods
We will compare the accuracy of the following methods for estimating a gradient:
- forward difference
- central difference
- complex step

In each case we compare their accuracy as a function of step size ($h$).  For simplicity we will use a 1D function.
We start with the forward and central differences.  Recall the formulas for forward and central difference.
$$\text{Forward Difference:}\quad f^\prime(x) \approx \frac{f(x + h) - f(x)}{h} $$
$$\text{Central Difference:}\quad f^\prime(x) \approx \frac{f(x + h) - f(x - h)}{2 h} $$
End of explanation
"""

# let's take a bunch of different step sizes from very large to very small
n = 26
step_size = np.logspace(0, -25, n)

# initialize results arrays (forward difference, central difference)
grad_fd = np.zeros(n)
grad_cd = np.zeros(n)

# arbitrarily chosen point
x = 0.5

"""
Explanation: We now set up the step sizes we want to try and initialize our data.  Note that we use logarithmically spaced step sizes (e.g., 1e0, 1e-1, 1e-2, ...).  We also need to pick a point about which to take gradients.
End of explanation
"""

# loop through and try all the different step sizes
for i in range(n):
    h = step_size[i]
    grad_fd[i] = (f(x + h) - f(x))/h
    grad_cd[i] = (f(x + h) - f(x-h))/(2*h)

"""
Explanation: Using a for loop, we will now test out each of those step sizes.
End of explanation
"""

# compute relative error compared to the exact solution
grad_exact = g(x)
error_fd = np.abs((grad_fd - grad_exact)/grad_exact)
error_cd = np.abs((grad_cd - grad_exact)/grad_exact)

"""
Explanation: We have an analytical solution for the gradient, so let's compute the relative error.
End of explanation
"""

plt.style.use('ggplot')

plt.figure()
plt.loglog(step_size, error_fd, '.-', label='forward')
plt.loglog(step_size, error_cd, '.-', label='central')
plt.gca().set_ylim(ymin=1e-18, ymax=1e1)
ticks = np.arange(-1, -26, -3)
plt.xticks(10.0**ticks)
plt.gca().invert_xaxis()
plt.legend(loc='center right')
plt.xlabel('step size')
plt.ylabel('relative error')
plt.show()

"""
Explanation: We now plot the results on a loglog scale.  The x-axis shows the different step sizes we tried, and the y-axis shows the relative error as compared to the exact gradients.  Note that I've inverted the x-axis so that smaller step sizes are on the right.
End of explanation
"""

from cmath import sin, cos, exp

# initialize
grad_cs = np.zeros(n)

# loop through each step size
for i in range(n):
    h = step_size[i]
    grad_cs[i] = f(x + complex(0, h)).imag / h

"""
Explanation: Notice that there is large truncation error at large step sizes (starting from the left).  Then there is a downward decrease in error with a slope of 1 and 2 for forward and central difference respectively, as expected (corresponding to error of $\mathcal{O}(h)$ and $\mathcal{O}(h^2)$).  If not for finite precision arithmetic, these lines would continue downward indefinitely.  However, in a computer, subtractive cancellation starts becoming significant and the error starts increasing again.  At very small step sizes our error is 100% (the methods predict derivatives of 0).  Also note, as discussed, that the optimal step size for central difference is a bit larger than the optimal step size for forward difference.  For this function they are around $10^{-5}$ and $10^{-8}$, respectively.
Let's now try the same thing, but using the complex step method.  Almost everything can stay the same except that we need to import from cmath rather than math.  Doing this makes all of the functions we use (sin, cos, exp) defined for complex numbers and not just real numbers.  In Matlab, the functions are already overloaded for complex numbers automatically.
The formula using the complex step method is: $$\text{Complex Step:}\quad f^\prime(x) \approx \frac{\operatorname{Im}[f(x + ih)]}{h} $$ End of explanation """ # compute error error_cs = np.abs((grad_cs - grad_exact)/grad_exact) # the error is below machine precision in some cases so just add epsilon error so it shows on plot error_cs[error_cs == 0] = 1e-16 plt.figure() plt.loglog(step_size, error_fd, '.-', label='forward') plt.loglog(step_size, error_cd, '.-', label='central') plt.loglog(step_size, error_cs, '.-', label='complex') plt.gca().set_ylim(ymin=1e-18, ymax=1e1) ticks = np.arange(-1, -26, -3) plt.xticks(10.0**ticks) plt.gca().invert_xaxis() plt.legend(loc='center right') plt.xlabel('step size') plt.ylabel('relative error') plt.show() """ Explanation: Same as before, we will compute the error then plot the result from all three methods. End of explanation """
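The robustness of the complex step can also be checked directly: even with an absurdly small step like h = 1e-200 there is no subtraction to cancel, so the estimate stays at machine precision where both finite differences would return 0. A minimal self-contained sketch (helper names are mine):

```python
import cmath
from math import sin, cos, exp

def fc(z):
    # complex version of the test function e^x * sin(x)
    return cmath.exp(z) * cmath.sin(z)

def g_exact(x):
    # exact derivative, for comparison
    return exp(x) * sin(x) + exp(x) * cos(x)

def complex_step(func, x, h=1e-200):
    # Im[f(x + i h)] / h approximates f'(x) with no subtractive cancellation.
    return func(complex(x, h)).imag / h

x = 0.5
exact = g_exact(x)
approx = complex_step(fc, x)
rel_err = abs(approx - exact) / abs(exact)
print(rel_err)
```

The relative error stays near machine epsilon despite the tiny step, which is exactly the flat tail the complex-step curve shows in the plot above.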
NuSTAR/nustar_pysolar
notebooks/Ephemeris_Test.ipynb
mit
dt = 0.

# Using JPL Horizons web interface at 2017-05-19T01:34:40
horizon_ephem = SkyCoord(*[193.1535, -4.01689]*u.deg)

for orbit in orbits:
    tstart = orbit[0]
    tend = orbit[1]
    print()
#    print('Orbit duration: ', tstart.isoformat(), tend.isoformat())
    on_time = (tend - tstart).total_seconds()

    point_time = tstart + 0.5*(tend - tstart)

    print('Time used for ephemeris: ', point_time.isoformat())

    astro_time = Time(point_time)

    solar_system_ephemeris.set('jpl')
    jupiter = get_body('Jupiter', astro_time)
    jplephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)

    # Switch to the built-in ephemeris
    solar_system_ephemeris.set('builtin')
    jupiter = get_body('Jupiter', astro_time)
    builtin_ephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)

    t = ts.from_astropy(astro_time)
    jupiter, earth = planets['jupiter'], planets['earth']
    astrometric = earth.at(t).observe(jupiter)
    ra, dec, distance = astrometric.radec()
    radeg = ra.to(u.deg)
    decdeg = dec.to(u.deg)
    skyfield_ephem = SkyCoord(radeg, decdeg)

    print()
    print('Horizons offset to jplephem: ', horizon_ephem.separation(jplephem))
    print()
    print('Horizons offset to "built in" ephemeris: ', horizon_ephem.separation(builtin_ephem))
    print()
    print('Horizons offset to Skyfield ephemeris: ', horizon_ephem.separation(skyfield_ephem))
    print()
    break

"""
Explanation: Use the astropy interface to get the location of Jupiter at the time that you want to use.
End of explanation
"""

dt = 0.
for orbit in orbits:
    tstart = orbit[0]
    tend = orbit[1]
    print()
    on_time = (tend - tstart).total_seconds()

    point_time = tstart + 0.5*(tend - tstart)

    print('Time used for ephemeris: ', point_time.isoformat())

    astro_time = Time(point_time)

    solar_system_ephemeris.set('jpl')
    jupiter = get_body('Jupiter', astro_time)
    jplephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)

    # Switch to the built-in ephemeris
    solar_system_ephemeris.set('builtin')
    jupiter = get_body('Jupiter', astro_time)
    builtin_ephem = SkyCoord(jupiter.ra.deg*u.deg, jupiter.dec.deg*u.deg)

    t = ts.from_astropy(astro_time)
    jupiter, earth = planets['jupiter'], planets['earth']
    astrometric = earth.at(t).observe(jupiter)
    ra, dec, distance = astrometric.radec()
    radeg = ra.to(u.deg)
    decdeg = dec.to(u.deg)
    skyfield_ephem = SkyCoord(radeg, decdeg)

    print()
    print('Skyfield offset to jplephem: ', skyfield_ephem.separation(jplephem))
    print()
    print('Skyfield offset to "built in" ephemeris: ', skyfield_ephem.separation(builtin_ephem))
    print()

"""
Explanation: Conclusion: Use Skyfield if you want to reproduce the JPL ephemerides.
Use the jup310.bsp file for Jupiter. Need to confirm which of the available .bsp files are appropriate for inner solar system objects as well as the Sun/Moon.
End of explanation
"""
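The separation() calls above reduce to the standard great-circle angle between two (RA, Dec) directions. A minimal pure-Python version (my own helper, using the haversine form, which is numerically stable for the tiny offsets being compared here) looks like:

```python
from math import radians, degrees, sin, cos, asin, sqrt

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two RA/Dec points (in degrees),
    computed with the haversine formula (stable for small angles)."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    sd = sin((dec2 - dec1) / 2.0)
    sr = sin((ra2 - ra1) / 2.0)
    h = sd * sd + cos(dec1) * cos(dec2) * sr * sr
    return degrees(2.0 * asin(sqrt(h)))

# Jupiter's Horizons position from the notebook vs. a slightly offset point.
sep = angular_separation(193.1535, -4.01689, 193.1540, -4.01689)
print(sep)
```

This is only a sketch of what SkyCoord.separation does internally; astropy uses a fully robust formulation, but the numbers agree at the arcsecond scales compared above.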
josdaza/deep-toolbox
TensorFlow/03_Autoencoder.ipynb
mit
import tensorflow as tf
import numpy as np

class Autoencoder:
    def __init__(self, input_dim, hidden_dim, epoch=250, learning_rate=0.001):
        self.epoch = epoch  # Number of training cycles
        self.learning_rate = learning_rate  # Hyperparameter for the optimizer (RMSPropOptimizer in this case)

        # Input layer (the input X, i.e. the dataset). shape=[None, input_dim] means any number of rows
        # can take the place of None (so that we can train in batches)
        x = tf.placeholder(dtype=tf.float32, shape=[None, input_dim])

        # Define the encoder variables and operator (from the input to the hidden layer)
        with tf.name_scope('encode'):
            weights = tf.Variable(tf.random_normal([input_dim, hidden_dim], dtype=tf.float32), name='weights')
            biases = tf.Variable(tf.zeros([hidden_dim]), name='biases')
            encoded = tf.nn.tanh(tf.matmul(x, weights) + biases)

        # Define the decoder variables and operator (from the hidden layer to the output)
        with tf.name_scope('decode'):
            weights = tf.Variable(tf.random_normal([hidden_dim, input_dim], dtype=tf.float32), name='weights')
            biases = tf.Variable(tf.zeros([input_dim]), name='biases')  # the output layer has input_dim units
            decoded = tf.nn.tanh(tf.matmul(encoded, weights) + biases)

        # Store these on the instance so the other methods of the class can use them
        self.x = x
        self.encoded = encoded
        self.decoded = decoded

        # Cost function (difference between the autoencoder's output and the expected output,
        # which in this case is the input itself)
        self.loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(self.x, self.decoded))))

        # Define the optimizer to use and a saver for the parameters learned during training
        self.train_op = tf.train.RMSPropOptimizer(self.learning_rate).minimize(self.loss)
        self.saver = tf.train.Saver()

    def train(self, data):
        num_samples = len(data)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for i in range(self.epoch):
                for j in range(num_samples):
                    # Train the network sample by sample for self.epoch epochs
                    l, _ = sess.run([self.loss, self.train_op], feed_dict={self.x: [data[j]]})
                if i % 10 == 0:
                    # Only print progress every 10 iterations (watch the error go down!)
                    print("Epoch {}: loss = {}".format(i, l))
            self.saver.save(sess, './checkpoints/autoencoder_model.ckpt')

    def test(self, data):
        # data is a single (never seen before) example that is fed to the network; the network should
        # return the input itself if it was trained correctly
        with tf.Session() as sess:
            self.saver.restore(sess, './checkpoints/autoencoder_model.ckpt')
            hidden, reconstructed = sess.run([self.encoded, self.decoded], feed_dict={self.x: data})
        print('input', data)
        print('compressed', hidden)
        print('reconstructed', reconstructed)
        return reconstructed

"""
Explanation: Defining an Autoencoder
As a first exercise we will build an autoencoder that encodes the input into fewer dimensions (encode)
and then reconstructs that input (decode), so that the network outputs its own input.
End of explanation
"""

from sklearn import datasets

hidden_dim = 1
data = datasets.load_iris().data
input_dim = len(data[0])
ae = Autoencoder(input_dim, hidden_dim, epoch=500)
ae.train(data)
ae.test([[5.0, 3.9, 2.4, 1.2]])

"""
Explanation: Running the autoencoder
End of explanation
"""

import pickle
import numpy as np

def unpickle(file):
    fo = open(file, 'rb')
    img_dict = pickle.load(fo, encoding='latin1')
    fo.close()
    return img_dict

def grayscale(a):
    return a.reshape(a.shape[0], 3, 32, 32).mean(1).reshape(a.shape[0], -1)

names = unpickle('./cifar-10-batches-py/batches.meta')['label_names']

data, labels = [], []
for i in range(1, 6):
    filename = './cifar-10-batches-py/data_batch_' + str(i)
    batch_data = unpickle(filename)
    if len(data) > 0:
        data = np.vstack((data, batch_data['data']))
        labels = np.hstack((labels, batch_data['labels']))
    else:
        data = batch_data['data']
        labels = batch_data['labels']

data = grayscale(data)
x = np.matrix(data)
y = np.array(labels)

horse_indices = np.where(y == 7)[0]
horse_x = x[horse_indices]
print(np.shape(horse_x))

input_dim = np.shape(horse_x)[1]
hidden_dim = 100
ae = Autoencoder(input_dim, hidden_dim)
ae.train(horse_x)

"""
Explanation: Applying it to images
1) Read the dataset
2) Convert to grayscale
3) Apply the autoencoder
End of explanation
"""
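The reconstruction loss used by the Autoencoder class, sqrt(mean((x - decoded)^2)), is plain RMSE. A small NumPy sketch with hypothetical inputs (not actual model outputs) verifies what the TensorFlow graph computes:

```python
import numpy as np

def rmse(x, reconstructed):
    # Root-mean-square reconstruction error, matching the model's self.loss.
    return np.sqrt(np.mean(np.square(x - reconstructed)))

x = np.array([[5.0, 3.9, 2.4, 1.2]])
reconstructed = np.array([[5.1, 3.8, 2.5, 1.2]])  # hypothetical reconstruction
loss = rmse(x, reconstructed)
print(loss)
```

A perfect reconstruction gives a loss of exactly 0, which is the target the optimizer pushes toward.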
maciejkula/lightfm
examples/quickstart/quickstart.ipynb
apache-2.0
import numpy as np

from lightfm.datasets import fetch_movielens

data = fetch_movielens(min_rating=5.0)

"""
Explanation: Quickstart
In this example, we'll build an implicit feedback recommender using the Movielens 100k dataset (http://grouplens.org/datasets/movielens/100k/). The code behind this example is available as a Jupyter notebook.
LightFM includes functions for getting and processing this dataset, so obtaining it is quite easy.
End of explanation
"""

print(repr(data['train']))
print(repr(data['test']))

"""
Explanation: This downloads the dataset and automatically pre-processes it into sparse matrices suitable for further calculation. In particular, it prepares the sparse user-item matrices, containing positive entries where a user interacted with a product, and zeros otherwise.
We have two such matrices, a training and a testing set. Both have around 1000 users and 1700 items. We'll train the model on the train matrix but test it on the test matrix.
End of explanation
"""

from lightfm import LightFM

"""
Explanation: We need to import the model class to fit the model:
End of explanation
"""

model = LightFM(loss='warp')
%time model.fit(data['train'], epochs=30, num_threads=2)

"""
Explanation: We're going to use the WARP (Weighted Approximate-Rank Pairwise) model. WARP is an implicit feedback model: all interactions in the training matrix are treated as positive signals, and products that users did not interact with are treated as items they implicitly do not like. The goal of the model is to score these implicit positives highly while assigning low scores to implicit negatives.
Model training is accomplished via SGD (stochastic gradient descent). This means that for every pass through the data --- an epoch --- the model learns to fit the data more and more closely. We'll run it for 30 epochs in this example. We can also run it on multiple cores, so we'll set that to 2. (The dataset in this example is too small for that to make a difference, but it will matter on bigger datasets.)
End of explanation
"""

from lightfm.evaluation import precision_at_k

"""
Explanation: Done! We should now evaluate the model to see how well it's doing. We're most interested in how good the ranking produced by the model is. Precision@k is one suitable metric, expressing the percentage of the top k items in the ranking that the user has actually interacted with. lightfm implements a number of metrics in the evaluation module.
End of explanation
"""

print("Train precision: %.2f" % precision_at_k(model, data['train'], k=5).mean())
print("Test precision: %.2f" % precision_at_k(model, data['test'], k=5).mean())

"""
Explanation: We'll measure precision in both the train and the test set.
End of explanation
"""

def sample_recommendation(model, data, user_ids):

    n_users, n_items = data['train'].shape

    for user_id in user_ids:
        known_positives = data['item_labels'][data['train'].tocsr()[user_id].indices]

        scores = model.predict(user_id, np.arange(n_items))
        top_items = data['item_labels'][np.argsort(-scores)]

        print("User %s" % user_id)
        print("     Known positives:")

        for x in known_positives[:3]:
            print("        %s" % x)

        print("     Recommended:")

        for x in top_items[:3]:
            print("        %s" % x)

sample_recommendation(model, data, [3, 25, 450])

"""
Explanation: Unsurprisingly, the model fits the train set better than the test set.
For an alternative way of judging the model, we can sample a couple of users and get their recommendations. To make predictions for a given user, we pass the id of that user and the ids of all products we want predictions for into the predict method.
End of explanation
"""
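Precision@k itself is simple enough to write by hand. The following toy version for a single user (my own sketch, not lightfm's fast sparse implementation) shows exactly what quantity the evaluation above reports:

```python
def precision_at_k_single(ranked_items, relevant_items, k=5):
    """Fraction of the top-k ranked items that the user actually interacted with."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    return hits / float(k)

# Hypothetical ranking and interaction set for one user.
ranking = [10, 3, 7, 42, 5, 1]
relevant = {3, 42, 99}
p = precision_at_k_single(ranking, relevant, k=5)
print(p)  # 0.4
```

lightfm's precision_at_k returns one such value per user, which is why the notebook calls .mean() on the result.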
HrantDavtyan/Data_Scraping
Week 4/B_soup_1_(my_page).ipynb
apache-2.0
import requests # import everything from BeautifulSoup from BeautifulSoup import * url = "https://hrantdavtyan.github.io/" """ Explanation: BeautifulSoup 1: scraping my page BeautifulSoup is a powerful Python library used for pulling data out of HTML documents. In this notebook we will use the requests library to get the HTML document from my website and then use BeautifulSoup to get data from that document. Unlike regular expressions, which deal all HTML document as a string/text, BeautifulSoup distinguishes between simple/plain text and HTML tags/attributes which is very helpful for scraping. If you do not have BeautifulSoup installed, then open a completely new command prompt (black window) and type the following command: pip install beautifulsoup Okay, let's start from importing abovementioned libraries and selecting the url to scrape. End of explanation """ response = requests.get(url) my_page = response.text print(response) type(my_page) """ Explanation: Once we have the libraries imported and the url selected, we should use the get() function from the requests library to get the website content as a response and then, convert it to text. End of explanation """ soup = BeautifulSoup(my_page) type(soup) print(soup) """ Explanation: In order to be able to initiate several function available from BeautifulSoup library, we need to pass my_page as an argument to BeautifulSoup() function. The content will still remain the same, yet the object type will change which will let us to use some nice methods. End of explanation """ a_tags = soup.findAll('a') type(a_tags) len(a_tags) """ Explanation: Fine. Let's now try to find all the a tags from my page. End of explanation """ print(a_tags) """ Explanation: As you can see above, we received a list as an output with 29 elements. The 29 elements are the 29 a tags from my website. We can print the outcome to see them. 
End of explanation """ a_tag = soup.find('a') type(a_tag) print(a_tag) """ Explanation: If you were interested in finding only the very first a tag, then the find() function could be useful instead of findAll(). This function already strings the a tag and its content as a string, rather than a list. End of explanation """ print(a_tag.get('href')) """ Explanation: As you can see above, although this is just s string, its type is a BeautifulSoup.Tag which will helps us to use some other methods on it. For example, we can get the link inside the a tag (href) by using a get() function. As the links are always inside a href attribute, we will try to get the value of href as follows: End of explanation """ for i in a_tags: print(i.get("href")) """ Explanation: If we want to get links from all a_tags (the latter was a list), then we should iterate over the list and get the href value from each element of the list as follows: End of explanation """ p_tags = soup.findAll('p') print(p_tags) """ Explanation: Similarly, one can get all the p_tags from my page by just searching for All p-s as follows: End of explanation """ for i in p_tags: print(i.text) """ Explanation: If you are interested only in paragraphs (text without tags) then you should again (as above in case of a_tags) iterate over the list and for each element of the list, get the text/string out of it as follows: End of explanation """
maojrs/riemann_book
Advection.ipynb
bsd-3-clause
%matplotlib inline %config InlineBackend.figure_format = 'svg' from ipywidgets import interact from exact_solvers import advection """ Explanation: Advection We start our study with the scalar, linear hyperbolic PDE: the advection equation. The solution to this equation simply consists of the initial condition propagating with constant velocity. We first describe the physical origin of this equation and then investigate the solution of the corresponding Riemann problem. This equation is studied in more detail e.g. in Chapter 2 of <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite>, for example. To examine the Python code for this chapter, see: exact_solvers/advection.py ... on github. Conservation of mass Imagine a fluid flowing in a long, narrow tube. We'll use $q$ to indicate the density of the fluid and $u$ to indicate its velocity. Both of these are functions of space and time: $q = q(x,t)$; $u=u(x,t)$. The total mass in the section of tube $[x_1,x_2]$ is \begin{equation} \int_{x_1}^{x_2} q(x,t) dx. \end{equation} This total mass changes over time due to flow in or out of this section of the tube. We call the rate of flow the flux, and denote it by $f(q)$. Thus the net rate of flow of mass into (or out of) the interval $[x_1,x_2]$ at time $t$ is $$f(q(x_1,t)) - f(q(x_2,t)).$$ We just said that this rate of flow must equal the time rate of change of total mass; i.e. $$\frac{d}{dt} \int_{x_1}^{x_2} q(x,t) dx = f(q(x_1,t)) - f(q(x_2,t)).$$ Now since $\int_{x_1}^{x_2} \frac{\partial}{\partial x} f(q) dx = f(q(x_2,t)) - f(q(x_1,t))$, we can rewrite this as $$\frac{d}{dt} \int_{x_1}^{x_2} q(x,t) dx = -\int_{x_1}^{x_2} \frac{\partial}{\partial x} f(q) dx.$$ If $q$ is sufficiently smooth, we can move the time derivative inside the integral. 
We'll also put everything on the left side, to obtain $$\int_{x_1}^{x_2} \left(\frac{\partial}{\partial t}q(x,t) + \frac{\partial}{\partial x} f(q)\right) dx = 0.$$ Since this integral vanishes for any choice of $x_1,x_2$, it must be that the integrand vanishes everywhere. Therefore we can write the differential conservation law $$q_t + f(q)_x = 0.$$ This equation expresses the fact that the total mass is conserved, since locally the mass can change only due to a net inflow or outflow. Advection In order to solve the conservation law above, we need an expression for the flux, $f$. The rate of flow is just density times velocity: $f=u q$. Thus we obtain the continuity equation $$q_t + (uq)_x = 0.$$ In general, we need another equation to determine the velocity $u(x,t)$. In later chapters we'll examine models in which the velocity depends on the density (or other properties), but for now let's consider the simplest case, in which the flow is characterized by a single, constant velocity $u(x,t)=a$. Then the continuity equation becomes the advection equation \begin{align} \label{adv:advection} q_t + a q_x = 0. \end{align} This equation has a very simple solution. Given the initial density $q(x,0)=q_0(x)$, the solution is simply \begin{align} \label{adv:solution} q(x,t) = q_0(x-at). \end{align} End of explanation """ interact(advection.characteristics); """ Explanation: Characteristics Notice that the solution is constant along the line $x-at=C$, for each value of $C$. These are parallel, straight lines in the $x-t$ plane, with slope $1/a$, as shown in the figure below. End of explanation """ interact(advection.solution); """ Explanation: We can think of the initial values $q_0(x)$ being transmitted along these lines; we sometimes say that information is transmitted along characteristics. The next figure shows how an initial hump propagates to the right along characteristics. 
End of explanation """ interact(advection.riemann_demo); """ Explanation: For more complicated hyperbolic problems, we may have multiple sets of characteristics, they may not be parallel, and the solution may not be constant along them. But it will still be the case that information is propagated along characteristics. The idea that information propagates at finite speed is an essential property of hyperbolic PDEs. The Riemann problem The Riemann problem consists of a hyperbolic PDE, such as (\ref{adv:advection}), together with piecewise constant initial data consisting of two states. For convenience, we place the interface (or jump) at $x=0$ and refer to the left state (for $x<0$) as $q_\ell$ and the right state (for $x>0$) as $q_r$. We thus have \begin{align} q_0(x) & = \begin{cases} q_\ell & x<0 \ q_r & x>0. \end{cases} \end{align} For the advection equation, the solution to the Riemann problem is immediately obvious; it is simply a special case of (\ref{adv:solution}). The discontinuity initially at $x=0$ moves at speed $a$. We have \begin{align} q(x,t) & = \begin{cases} q_\ell & x-at<0 \ q_r & x-at>0. \end{cases} \end{align} End of explanation """ q_l = 1. q_r = 0. advection.plot_riemann_solution(q_l, q_r, a=1.); """ Explanation: Notice how the initial discontinuity follows the characteristic coming from $x=0$ at $t=0$. In the live notebook, you can adjust the advection speed $a$ and see how this changes the solution. Here's another way to look at the Riemann solution that will be very useful for more complicated equations. End of explanation """
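The exact Riemann solution written above is a one-line piecewise function; sketching it directly (my own helper, independent of the exact_solvers module) makes the role of the single characteristic speed explicit:

```python
def riemann_advection(q_l, q_r, x, t, a):
    """Exact Riemann solution of q_t + a q_x = 0: the jump travels at speed a."""
    return q_l if x - a * t < 0 else q_r

q_l, q_r, a = 1.0, 0.0, 1.0
# At t = 1 the discontinuity sits at x = a*t = 1.
left = riemann_advection(q_l, q_r, 0.5, 1.0, a)   # behind the jump
right = riemann_advection(q_l, q_r, 1.5, 1.0, a)  # ahead of the jump
print(left, right)  # 1.0 0.0
```

Evaluating this on a grid of x values at a fixed t reproduces the step profile shown by plot_riemann_solution.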
gojomo/gensim
docs/notebooks/atmodel_prediction_tutorial.ipynb
lgpl-2.1
!wget -O - "https://archive.ics.uci.edu/ml/machine-learning-databases/00217/C50.zip" > /tmp/C50.zip

import logging
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')

import zipfile
filename = '/tmp/C50.zip'
zip_ref = zipfile.ZipFile(filename, 'r')
zip_ref.extractall("/tmp/")
zip_ref.close()

"""
Explanation: Authorship prediction with the author-topic model
In this tutorial, you will learn how to use the author-topic model in Gensim for authorship prediction, based on the topic distributions and measuring their similarity.
We will train the author-topic model on a Reuters dataset, which contains 50 authors, each with 50 documents for training and another 50 documents for testing: https://archive.ics.uci.edu/ml/datasets/Reuter_50_50 .
If you wish to learn more about the author-topic model and LDA and how to train them, you should check out these tutorials beforehand.
A lot of the preprocessing and configuration here has been done using their example:
* LDA training tips
* Training the author-topic model
NOTE: To run this tutorial on your own, install Jupyter, Gensim, SpaCy, Scikit-Learn, Bokeh and Pandas, e.g. using pip:
pip install jupyter gensim spacy sklearn bokeh pandas
Note that you need to download some data for SpaCy using python -m spacy.en.download.
Download the notebook at https://github.com/RaRe-Technologies/gensim/tree/develop/docs/notebooks/atmodel_prediction_tutorial.ipynb.
Predicting the author of a document is a difficult task, where current approaches usually turn to neural networks. These base a lot of their predictions on learning the stylistic and syntactic preferences of the authors, along with other features that help identify an author. In our case, we instead first model the domain knowledge of each author, based on what the author writes about. We do this by calculating the topic distributions for each author using the author-topic model.
After that, we perform the new-author inference on the held-out subset. This again calculates a topic distribution, now for the new, unknown author. In order to perform the prediction, we then find, out of all known authors, the one most similar to the unknown author. Mathematically speaking, we find the author whose topic distribution is closest to the topic distribution of the new author under a certain distance function or metric. Here we explore the Hellinger distance for measuring the distance between two discrete multinomial topic distributions.
We start off by downloading the dataset. You can do it manually using the aforementioned link, or run the following code cell.
End of explanation
"""

import os, re, io

def preprocess_docs(data_dir):
    doc_ids = []
    author2doc = {}
    docs = []

    folders = os.listdir(data_dir)  # List of filenames.
    for authorname in folders:
        files = os.listdir(data_dir + '/' + authorname)
        for filen in files:
            (idx1, idx2) = re.search('[0-9]+', filen).span()  # Matches the indexes of the start and end of the ID.

            if not author2doc.get(authorname):
                # This is a new author.
                author2doc[authorname] = []

            doc_id = str(int(filen[idx1:idx2]))
            doc_ids.append(doc_id)
            author2doc[authorname].extend([doc_id])

            # Read document text.
            # Note: ignoring characters that cause encoding errors.
            with io.open(data_dir + '/' + authorname + '/' + filen, errors='ignore', encoding='utf-8') as fid:
                txt = fid.read()

            # Replace any whitespace (newline, tabs, etc.) by a single space.
            txt = re.sub('\s', ' ', txt)
            docs.append(txt)

    doc_id_dict = dict(zip(doc_ids, range(len(doc_ids))))
    # Replace dataset IDs by integer IDs.
    for a, a_doc_ids in author2doc.items():
        for i, doc_id in enumerate(a_doc_ids):
            author2doc[a][i] = doc_id_dict[doc_id]

    import spacy
    nlp = spacy.load('en')

    processed_docs = []
    for doc in nlp.pipe(docs, n_threads=4, batch_size=100):
        # Process document using the SpaCy NLP pipeline.
        ents = doc.ents  # Named entities.

        # Keep only words (no numbers, no punctuation).
        # Lemmatize tokens, remove punctuation and remove stopwords.
        doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop]

        # Remove common words from a stopword list.
        #doc = [token for token in doc if token not in STOPWORDS]

        # Add named entities, but only if they are a compound of more than one word.
        doc.extend([str(entity) for entity in ents if len(entity) > 1])
        processed_docs.append(doc)

    docs = processed_docs
    del processed_docs

    # Compute bigrams.
    from gensim.models import Phrases
    # Add bigrams and trigrams to docs (only ones that appear 20 times or more).
    bigram = Phrases(docs, min_count=20)
    for idx in range(len(docs)):
        for token in bigram[docs[idx]]:
            if '_' in token:
                # Token is a bigram, add to document.
                docs[idx].append(token)

    return docs, author2doc

"""
Explanation: We wrap all the preprocessing steps, which you can find more about in the author-topic notebook, in one function so that we are able to iterate over different preprocessing parameters.
End of explanation
"""

def create_corpus_dictionary(docs, max_freq=0.5, min_wordcount=20):
    # Create a dictionary representation of the documents, and filter out frequent and rare words.
    from gensim.corpora import Dictionary
    dictionary = Dictionary(docs)

    # Remove rare and common tokens.
    # Filter out words that occur too frequently or too rarely.
    max_freq = max_freq
    min_wordcount = min_wordcount
    dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq)

    _ = dictionary[0]  # This sort of "initializes" dictionary.id2token.

    # Vectorize data.
    # Bag-of-words representation of the documents.
    corpus = [dictionary.doc2bow(doc) for doc in docs]

    return corpus, dictionary

def create_test_corpus(train_dictionary, docs):
    # Create the test corpus using the dictionary from the train data.
    return [train_dictionary.doc2bow(doc) for doc in docs]

"""
Explanation: We create the corpus of the train and test data using two separate functions, since each corpus is tied to a certain dictionary which maps the words to their ids. Also, in order to create the test corpus, we use the dictionary from the train data, since the trained model has to have the same id2word reference as the new test data. Otherwise, a token with id 1 from the test data would not mean the same as the token with id 1 that the model was trained on.
End of explanation
"""
def predict_author(new_doc, atmodel, top_n=10, smallest_author=1): from gensim import matutils import pandas as pd def similarity(vec1, vec2): '''Get similarity between two vectors''' dist = matutils.hellinger(matutils.sparse2full(vec1, atmodel.num_topics), \ matutils.sparse2full(vec2, atmodel.num_topics)) sim = 1.0 / (1.0 + dist) return sim def get_sims(vec): '''Get similarity of vector to all authors.''' sims = [similarity(vec, vec2) for vec2 in author_vecs] return sims author_vecs = [atmodel.get_author_topics(author) for author in atmodel.id2author.values()] new_doc_topics = atmodel.get_new_author_topics(new_doc) # Get similarities. sims = get_sims(new_doc_topics) # Arrange author names, similarities, and author sizes in a list of tuples. table = [] for elem in enumerate(sims): author_name = atmodel.id2author[elem[0]] sim = elem[1] author_size = len(atmodel.author2doc[author_name]) if author_size >= smallest_author: table.append((author_name, sim, author_size)) # Make dataframe and retrieve top authors. df = pd.DataFrame(table, columns=['Author', 'Score', 'Size']) df = df.sort_values('Score', ascending=False)[:top_n] return df """ Explanation: We wrap the model training also in a function, in order to, again, be able to iterate over different parametrizations. 
End of explanation """ def prediction_accuracy(test_author2doc, test_corpus, model, k=5): print("Precision@k: top_n={}".format(k)) matches=0 tries = 0 for author in test_author2doc: author_id = model.author2id[author] for doc_id in test_author2doc[author]: predicted_authors = predict_author(test_corpus[doc_id:doc_id+1], atmodel=model, top_n=k) tries = tries+1 if author_id in predicted_authors["Author"]: matches=matches+1 accuracy = matches/tries print("Prediction accuracy: {}".format(accuracy)) return accuracy, k def plot_accuracy(scores1, label1, scores2=None, label2=None): import matplotlib.pyplot as plt s = [score*100 for score in scores1.values()] t = list(scores1.keys()) plt.plot(t, s, "b-", label=label1) plt.plot(t, s, "r^", label=label1+" data points") if scores2 is not None: s2 = [score*100 for score in scores2.values()] plt.plot(t, s2, label=label2) plt.plot(t, s2, "o", label=label2+" data points") plt.legend(loc="lower right") plt.xlabel('parameter k') plt.ylabel('prediction accuracy') plt.title('Precision at k') plt.xticks(t) plt.grid(True) plt.yticks([30,40,50,60,70,80,90,100]) plt.axis([0, 11, 30, 100]) plt.show() """ Explanation: We define a custom function, which measures the prediction accuracy, following the precision at k principle. We parametrize the accuracy by a parameter k, k=1 meaning we need an exact match in order to be accurate, k=5 meaning our prediction has be in the top 5 results, ordered by similarity. End of explanation """ atmodel_standard = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20) """ Explanation: We calculate the accuracy for a range of values for k=[1,2,3,4,5,6,8,10] and plot how exactly the prediction accuracy naturally rises with higher k. 
End of explanation
"""
atmodel_100topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20,
                                num_topics=100, eval_every=0, iterations=50, passes=10)
accuracy_scores_100topic={}
for i in [1,2,3,4,5,6,8,10]:
    accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_100topics, k=i)
    accuracy_scores_100topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_20topic, label1="20 topics", scores2=accuracy_scores_100topic, label2="100 topics")
"""
Explanation: The 100-topic model is much more accurate than the 20-topic model. We continue to increase the topic count until convergence.
End of explanation
"""
atmodel_150topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20,
                                num_topics=150, eval_every=0, iterations=50, passes=15)
accuracy_scores_150topic={}
for i in [1,2,3,4,5,6,8,10]:
    accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_150topics, k=i)
    accuracy_scores_150topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_100topic, label1="100 topics", scores2=accuracy_scores_150topic, label2="150 topics")
"""
Explanation: The 150-topic model is also slightly better, especially in the lower end of k. But we clearly see convergence. We try with 200 topics to be sure.
End of explanation
"""
atmodel_200topics = train_model(train_corpus_50_20, train_author2doc, train_dictionary_50_20,
                                num_topics=200, eval_every=0, iterations=50, passes=15)
accuracy_scores_200topic={}
for i in [1,2,3,4,5,6,8,10]:
    accuracy, k = prediction_accuracy(test_author2doc, test_corpus_50_20, atmodel_200topics, k=i)
    accuracy_scores_200topic[k] = accuracy
plot_accuracy(scores1=accuracy_scores_150topic, label1="150 topics", scores2=accuracy_scores_200topic, label2="200 topics")
"""
Explanation: The 200-topic model seems to perform a bit better for lower k, which might be due to a slight overrepresentation at high topic numbers. So let us stop here with the topic number increase and focus some more on the dictionary. We choose either one of the models. Currently we are filtering out tokens that appear in more than 50% of all documents or in fewer than 20 documents overall, which drastically decreases the size of our dictionary. We know about this dataset that the underlying topics are not very diverse and are structured around the corporate/industrial topic class. Thus it makes sense to enlarge the dictionary by filtering out fewer tokens.
We set the parameters set max_freq=25%, min_wordcount=10 End of explanation """ atmodel_150topics_25_10 = train_model(train_corpus_25_10, train_author2doc, train_dictionary_25_10, num_topics=150, eval_every=0, iterations=50, passes=15) accuracy_scores_150topic_25_10={} for i in [1,2,3,4,5,6,8,10]: accuracy, k = prediction_accuracy(test_author2doc, test_corpus_25_10, atmodel_150topics_25_10, k=i) accuracy_scores_150topic_25_10[k] = accuracy plot_accuracy(scores1=accuracy_scores_150topic_25_10, label1="150 topics, max_freq=25%, min_wordcount=10", scores2=accuracy_scores_150topic, label2="150 topics, standard") """ Explanation: We now have now nearly doubled the tokens. Let's train and evaluate. End of explanation """
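The precision-at-k evaluation used throughout this notebook can be reduced to a small self-contained sketch; the author names and rankings below are invented purely for illustration:

```python
# Sketch of precision@k: a document counts as a hit if its true author
# appears anywhere in the top-k candidates of the ranked prediction list.

def precision_at_k(ranked_predictions, true_labels, k):
    """ranked_predictions: one best-first candidate list per document."""
    hits = sum(1 for ranked, truth in zip(ranked_predictions, true_labels)
               if truth in ranked[:k])
    return hits / len(true_labels)

preds = [['authorA', 'authorB', 'authorC'],
         ['authorB', 'authorA', 'authorD'],
         ['authorC', 'authorD', 'authorA']]
truth = ['authorA', 'authorD', 'authorB']

# Accuracy can only rise (or stay flat) as k grows, which is why the
# accuracy curves plotted above are monotonically non-decreasing in k.
for k in (1, 2, 3):
    print(k, precision_at_k(preds, truth, k))
```

With k=1 only the first document is a hit here; at k=3 the second document's true author also enters the candidate window.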
metpy/MetPy
v0.4/_downloads/Station_Plot_with_Layout.ipynb
bsd-3-clause
import cartopy.crs as ccrs import cartopy.feature as feat import matplotlib.pyplot as plt import numpy as np from metpy.calc import get_wind_components from metpy.cbook import get_test_data from metpy.plots import simple_layout, StationPlot, StationPlotLayout from metpy.units import units """ Explanation: Station Plot with Layout Make a station plot, complete with sky cover and weather symbols, using a station plot layout built into MetPy. The station plot itself is straightforward, but there is a bit of code to perform the data-wrangling (hopefully that situation will improve in the future). Certainly, if you have existing point data in a format you can work with trivially, the station plot will be simple. The StationPlotLayout class is used to standardize the plotting various parameters (i.e. temperature), keeping track of the location, formatting, and even the units for use in the station plot. This makes it easy (if using standardized names) to re-use a given layout of a station plot. End of explanation """ f = get_test_data('station_data.txt') all_data = np.loadtxt(f, skiprows=1, delimiter=',', usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19), dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'), ('slp', 'f'), ('air_temperature', 'f'), ('cloud_fraction', 'f'), ('dewpoint', 'f'), ('weather', '16S'), ('wind_dir', 'f'), ('wind_speed', 'f')])) """ Explanation: The setup First read in the data. We use numpy.loadtxt to read in the data and use a structured numpy.dtype to allow different types for the various columns. This allows us to handle the columns with string data. 
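The data-reading cell that follows relies on a structured numpy.dtype to mix string and float columns in a single array. As a quick refresher (the field names and values here are invented, not the real station data), this is how such a dtype behaves:

```python
import numpy as np

# A structured dtype gives each column its own name and type, so one
# array can hold station ids (strings) alongside coordinates (floats).
dtype = np.dtype([('stid', 'U3'), ('lat', 'f'), ('lon', 'f')])
arr = np.array([('OKC', 35.4, -97.6),
                ('BOS', 42.4, -71.0)], dtype=dtype)

# Fields are pulled out by name rather than by column index.
print(arr['stid'])
print(arr['lat'].mean())
```

Indexing a row (`arr[0]`) returns one record with all of its fields, which is exactly what the station whitelist selection below exploits.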
End of explanation """ # Get the full list of stations in the data all_stids = [s.decode('ascii') for s in all_data['stid']] # Pull out these specific stations whitelist = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF', 'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL', 'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV', 'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW', 'SAT', 'BUY', '0CO', 'ZPC', 'VIH'] # Loop over all the whitelisted sites, grab the first data, and concatenate them data_arr = np.concatenate([all_data[all_stids.index(site)].reshape(1,) for site in whitelist]) # First, look at the names of variables that the layout is expecting: simple_layout.names() """ Explanation: This sample data has way too many stations to plot all of them. Instead, we just select a few from around the U.S. and pull those out of the data file. End of explanation """ # This is our container for the data data = dict() # Copy out to stage everything together. In an ideal world, this would happen on # the data reading side of things, but we're not there yet. 
data['longitude'] = data_arr['lon'] data['latitude'] = data_arr['lat'] data['air_temperature'] = data_arr['air_temperature'] * units.degC data['dew_point_temperature'] = data_arr['dewpoint'] * units.degC data['air_pressure_at_sea_level'] = data_arr['slp'] * units('mbar') """ Explanation: Next grab the simple variables out of the data we have (attaching correct units), and put them into a dictionary that we will hand the plotting function later: End of explanation """ # Get the wind components, converting from m/s to knots as will be appropriate # for the station plot u, v = get_wind_components(data_arr['wind_speed'] * units('m/s'), data_arr['wind_dir'] * units.degree) data['eastward_wind'], data['northward_wind'] = u, v # Convert the fraction value into a code of 0-8, which can be used to pull out # the appropriate symbol data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).astype(int) # Map weather strings to WMO codes, which we can use to convert to symbols # Only use the first symbol if there are multiple wx_text = [s.decode('ascii') for s in data_arr['weather']] wx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55, '-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75} data['present_weather'] = [wx_codes[s.split()[0] if ' ' in s else s] for s in wx_text] """ Explanation: Notice that the names (the keys) in the dictionary are the same as those that the layout is expecting. 
Now perform a few conversions: Get wind components from speed and direction Convert cloud fraction values to integer codes [0 - 8] Map METAR weather codes to WMO codes for weather symbols End of explanation """ proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35, standard_parallels=[35]) state_boundaries = feat.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lines', scale='110m', facecolor='none') """ Explanation: All the data wrangling is finished, just need to set up plotting and go: Set up the map projection and set up a cartopy feature for state borders End of explanation """ # Change the DPI of the resulting figure. Higher DPI drastically improves the # look of the text rendering plt.rcParams['savefig.dpi'] = 255 # Create the figure and an axes set to the projection fig = plt.figure(figsize=(20, 10)) ax = fig.add_subplot(1, 1, 1, projection=proj) # Add some various map elements to the plot to make it recognizable ax.add_feature(feat.LAND, zorder=-1) ax.add_feature(feat.OCEAN, zorder=-1) ax.add_feature(feat.LAKES, zorder=-1) ax.coastlines(resolution='110m', zorder=2, color='black') ax.add_feature(state_boundaries) ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black') # Set plot bounds ax.set_extent((-118, -73, 23, 50)) # # Here's the actual station plot # # Start the station plot by specifying the axes to draw on, as well as the # lon/lat of the stations (with transform). We also the fontsize to 12 pt. stationplot = StationPlot(ax, data['longitude'], data['latitude'], transform=ccrs.PlateCarree(), fontsize=12) # The layout knows where everything should go, and things are standardized using # the names of variables. So the layout pulls arrays out of `data` and plots them # using `stationplot`. simple_layout.plot(stationplot, data) plt.show() """ Explanation: The payoff End of explanation """ # Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted # out to Farenheit tenths. 
# Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF', color='darkgreen')

# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')

# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)

# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries)
ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')

# Set plot bounds
ax.set_extent((-118, -73, 23, 50))

#
# Here's the actual station plot
#

# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'], transform=ccrs.PlateCarree(),
                          fontsize=12)

# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)

plt.show()
"""
Explanation: or instead, a custom layout can be used:
End of explanation
"""
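The get_wind_components call in the cells above turns speed and meteorological direction into eastward/northward components; the underlying trigonometry can be sketched in plain numpy as follows (this illustrates the usual convention, not MetPy's actual implementation):

```python
import numpy as np

def wind_components(speed, direction_deg):
    """u (eastward) and v (northward) components from speed and the
    direction the wind blows FROM, measured clockwise from north."""
    rad = np.deg2rad(direction_deg)
    u = -speed * np.sin(rad)
    v = -speed * np.cos(rad)
    return u, v

# A 10 m/s wind from due north (0 degrees) blows toward the south,
# so the eastward component is zero and the northward component is -10.
u, v = wind_components(np.array([10.0]), np.array([0.0]))
print(u, v)
```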
QuantEcon/QuantEcon.notebooks
ddp_ex_rust96_py.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import itertools import scipy.optimize import matplotlib.pyplot as plt import pandas as pd from quantecon.markov import DiscreteDP # matplotlib settings plt.rcParams['axes.autolimit_mode'] = 'round_numbers' plt.rcParams['axes.xmargin'] = 0 plt.rcParams['axes.ymargin'] = 0 plt.rcParams['patch.force_edgecolor'] = True from cycler import cycler plt.rcParams['axes.prop_cycle'] = cycler(color='bgrcmyk') """ Explanation: DiscreteDP Example: Automobile Replacement Daisuke Oyama Faculty of Economics, University of Tokyo We study the finite-state version of the automobile replacement problem as considered in Rust (1996, Section 4.2.2). J. Rust, "Numerical Dynamic Programming in Economics", <i>Handbook of Computational Economics</i>, Volume 1, 619-729, 1996. End of explanation """ lambd = 0.5 # Exponential distribution parameter c = 200 # (Constant) marginal cost of maintainance net_price = 10**5 # Replacement cost n = 100 # Number of states; s = 0, ..., n-1: level of utilization of the asset m = 2 # Number of actions; 0: keep, 1: replace # Reward array R = np.empty((n, m)) R[:, 0] = -c * np.arange(n) # Costs for maintainance R[:, 1] = -net_price - c * 0 # Costs for replacement # Transition probability array # For each state s, s' distributes over # s, s+1, ..., min{s+supp_size-1, n-1} if a = 0 # 0, 1, ..., supp_size-1 if a = 1 # according to the (discretized and truncated) exponential distribution # with parameter lambd supp_size = 12 probs = np.empty(supp_size) probs[0] = 1 - np.exp(-lambd * 0.5) for j in range(1, supp_size-1): probs[j] = np.exp(-lambd * (j - 0.5)) - np.exp(-lambd * (j + 0.5)) probs[supp_size-1] = 1 - np.sum(probs[:-1]) Q = np.zeros((n, m, n)) # a = 0 for i in range(n-supp_size): Q[i, 0, i:i+supp_size] = probs for k in range(supp_size): Q[n-supp_size+k, 0, n-supp_size+k:] = probs[:supp_size-k]/probs[:supp_size-k].sum() # a = 1 for i in range(n): Q[i, 1, :supp_size] = probs # Discount factor beta = 0.95 """ Explanation: 
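The setup cell below builds its transition kernel from an Exponential(lambd) increment discretized onto integer bins; pulled out as a standalone check, the same binning does yield a proper probability distribution:

```python
import numpy as np

lambd, supp_size = 0.5, 12

# Mass of the exponential nearest each integer j, with the upper tail
# folded into the last bin so that the weights sum to one.
probs = np.empty(supp_size)
probs[0] = 1 - np.exp(-lambd * 0.5)
for j in range(1, supp_size - 1):
    probs[j] = np.exp(-lambd * (j - 0.5)) - np.exp(-lambd * (j + 0.5))
probs[supp_size - 1] = 1 - np.sum(probs[:-1])

print(probs.round(4))
print(probs.sum())
```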
Setup End of explanation """ def f(x, s): return (c/(1-beta)) * \ ((x-s) - (beta/(lambd*(1-beta))) * (1 - np.exp(-lambd*(1-beta)*(x-s)))) """ Explanation: Continuous-state benchmark Let us compute the value function of the continuous-state version as described in equations (2.22) and (2.23) in Section 2.3. End of explanation """ gamma = scipy.optimize.brentq(lambda x: f(x, 0) - net_price, 0, 100) print(gamma) """ Explanation: The optimal stopping boundary $\gamma$ for the contiuous-state version, given by (2.23): End of explanation """ def value_func_cont_time(s): return -c*gamma/(1-beta) + (s < gamma) * f(gamma, s) v_cont = value_func_cont_time(np.arange(n)) """ Explanation: The value function for the continuous-state version, given by (2.24): End of explanation """ ddp = DiscreteDP(R, Q, beta) """ Explanation: Solving the problem with DiscreteDP Construct a DiscreteDP instance for the disrete-state version: End of explanation """ v_init = np.zeros(ddp.num_states) epsilon = 1164 methods = ['vi', 'mpi', 'pi', 'mpi'] labels = ['Value iteration', 'Value iteration with span-based termination', 'Policy iteration', 'Modified policy iteration'] results = {} for i in range(4): k = 20 if labels[i] == 'Modified policy iteration' else 0 results[labels[i]] = \ ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k) columns = [ 'Iterations', 'Time (second)', r'$\lVert v - v_{\mathrm{pi}} \rVert$', r'$\overline{b} - \underline{b}$', r'$\lVert v - T(v)\rVert$' ] df = pd.DataFrame(index=labels, columns=columns) """ Explanation: Let us solve the decision problem by (0) value iteration, (1) value iteration with span-based termination (equivalent to modified policy iteration with step $k = 0$), (2) policy iteration, (3) modified policy iteration. Following Rust (1996), we set: $\varepsilon = 1164$ (for value iteration and modified policy iteration), $v^0 \equiv 0$, the number of iteration for iterative policy evaluation $k = 20$. 
End of explanation """ for label in labels: print(results[label].num_iter, '\t' + '(' + label + ')') df[columns[0]].loc[label] = results[label].num_iter """ Explanation: The numbers of iterations: End of explanation """ print(results['Policy iteration'].sigma) """ Explanation: Policy iteration gives the optimal policy: End of explanation """ (1-results['Policy iteration'].sigma).sum() """ Explanation: Takes action 1 ("replace") if and only if $s \geq \bar{\gamma}$, where $\bar{\gamma}$ is equal to: End of explanation """ for result in results.values(): if result != results['Policy iteration']: print(np.array_equal(result.sigma, results['Policy iteration'].sigma)) """ Explanation: Check that the other methods gave the correct answer: End of explanation """ diffs_cont = {} for label in labels: diffs_cont[label] = np.abs(results[label].v - v_cont).max() print(diffs_cont[label], '\t' + '(' + label + ')') label = 'Policy iteration' fig, ax = plt.subplots(figsize=(8,5)) ax.plot(-v_cont, label='Continuous-state') ax.plot(-results[label].v, label=label) ax.set_title('Comparison of discrete vs. continuous value functions') ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax.set_xlabel('State') ax.set_ylabel(r'Value $\times\ (-1)$') plt.legend(loc=4) plt.show() """ Explanation: The deviations of the returned value function from the continuous-state benchmark: End of explanation """ for label in labels: diff_pi = \ np.abs(results[label].v - results['Policy iteration'].v).max() print(diff_pi, '\t' + '(' + label + ')') df[columns[2]].loc[label] = diff_pi """ Explanation: In the following we try to reproduce Table 14.1 in Rust (1996), p.660, although the precise definitions and procedures there are not very clear. 
The maximum absolute differences of $v$ from that by policy iteration: End of explanation """ for label in labels: v = results[label].v diff_max = \ np.abs(v - ddp.bellman_operator(v)).max() print(diff_max, '\t' + '(' + label + ')') df[columns[4]].loc[label] = diff_max """ Explanation: Compute $\lVert v - T(v)\rVert$: End of explanation """ for i in range(4): if labels[i] != 'Policy iteration': k = 20 if labels[i] == 'Modified policy iteration' else 0 res = ddp.solve(method=methods[i], v_init=v_init, k=k, max_iter=results[labels[i]].num_iter-1) diff = ddp.bellman_operator(res.v) - res.v diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta) print(diff_span, '\t' + '(' + labels[i] + ')') df[columns[3]].loc[labels[i]] = diff_span """ Explanation: Next we compute $\overline{b} - \underline{b}$ for the three methods other than policy iteration, where $I$ is the number of iterations required to fulfill the termination condition, and $$ \begin{aligned} \underline{b} &= \frac{\beta}{1-\beta} \min\left[T(v^{I-1}) - v^{I-1}\right], \\ \overline{b} &= \frac{\beta}{1-\beta} \max\left[T(v^{I-1}) - v^{I-1}\right]. 
\end{aligned} $$ End of explanation """ label = 'Policy iteration' v = results[label].v diff = ddp.bellman_operator(v) - v diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta) print(diff_span, '\t' + '(' + label + ')') df[columns[3]].loc[label] = diff_span """ Explanation: For policy iteration, while it does not seem really relevant, we compute $\overline{b} - \underline{b}$ with the returned value of $v$ in place of $v^{I-1}$: End of explanation """ for i in range(4): k = 20 if labels[i] == 'Modified policy iteration' else 0 print(labels[i]) t = %timeit -o ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k) df[columns[1]].loc[labels[i]] = t.best df """ Explanation: Last, time each algorithm: End of explanation """ i = 1 k = 0 res = ddp.solve(method=methods[i], v_init=v_init, k=k, max_iter=results[labels[i]].num_iter-1) diff = ddp.bellman_operator(res.v) - res.v v = res.v + (diff.max() + diff.min()) * ddp.beta / (1 - ddp.beta) / 2 """ Explanation: Notes It appears that our value iteration with span-based termination is different in some details from the corresponding algorithm (successive approximation with error bounds) in Rust. In returing the value function, our algorithm returns $T(v^{I-1}) + (\overline{b} + \underline{b})/2$, while Rust's seems to return $v^{I-1} + (\overline{b} + \underline{b})/2$. 
In fact: End of explanation """ np.abs(v - results['Policy iteration'].v).max() """ Explanation: $\lVert v - v_{\mathrm{pi}}\rVert$: End of explanation """ np.abs(v - ddp.bellman_operator(v)).max() """ Explanation: $\lVert v - T(v)\rVert$: End of explanation """ label = 'Value iteration' iters = [2, 20, 40, 80] v = np.zeros(ddp.num_states) fig, ax = plt.subplots(figsize=(8,5)) for i in range(iters[-1]): v = ddp.bellman_operator(v) if i+1 in iters: ax.plot(-v, label='Iteration {0}'.format(i+1)) ax.plot(-results['Policy iteration'].v, label='Fixed Point') ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax.set_ylim(0, 2.4e5) ax.set_yticks([0.4e5 * i for i in range(7)]) ax.set_title(label) ax.set_xlabel('State') ax.set_ylabel(r'Value $\times\ (-1)$') plt.legend(loc=(0.7, 0.2)) plt.show() """ Explanation: Compare the Table in Rust. Convergence of trajectories Let us plot the convergence of $v^i$ for the four algorithms; see also Figure 14.2 in Rust. Value iteration End of explanation """ label = 'Value iteration with span-based termination' iters = [1, 10, 15, 20] v = np.zeros(ddp.num_states) fig, ax = plt.subplots(figsize=(8,5)) for i in range(iters[-1]): u = ddp.bellman_operator(v) if i+1 in iters: diff = u - v w = u + ((diff.max() + diff.min()) / 2) * ddp.beta / (1 - ddp.beta) ax.plot(-w, label='Iteration {0}'.format(i+1)) v = u ax.plot(-results['Policy iteration'].v, label='Fixed Point') ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax.set_ylim(1.0e5, 2.4e5) ax.set_yticks([1.0e5+0.2e5 * i for i in range(8)]) ax.set_title(label) ax.set_xlabel('State') ax.set_ylabel(r'Value $\times\ (-1)$') plt.legend(loc=(0.7, 0.2)) plt.show() """ Explanation: Value iteration with span-based termination End of explanation """ label = 'Policy iteration' iters = [1, 2, 3] v_init = np.zeros(ddp.num_states) fig, ax = plt.subplots(figsize=(8,5)) sigma = ddp.compute_greedy(v_init) for i in range(iters[-1]): # Policy evaluation v_sigma = ddp.evaluate_policy(sigma) 
if i+1 in iters: ax.plot(-v_sigma, label='Iteration {0}'.format(i+1)) # Policy improvement new_sigma = ddp.compute_greedy(v_sigma) sigma = new_sigma ax.plot(-results['Policy iteration'].v, label='Fixed Point') ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax.set_ylim(1e5, 4.2e5) ax.set_yticks([1e5 + 0.4e5 * i for i in range(9)]) ax.set_title(label) ax.set_xlabel('State') ax.set_ylabel(r'Value $\times\ (-1)$') plt.legend(loc=4) plt.show() """ Explanation: Policy iteration End of explanation """ label = 'Modified policy iteration' iters = [1, 2, 3, 4] v = np.zeros(ddp.num_states) k = 20 #- 1 fig, ax = plt.subplots(figsize=(8,5)) for i in range(iters[-1]): # Policy improvement sigma = ddp.compute_greedy(v) u = ddp.bellman_operator(v) if i == results[label].num_iter-1: diff = u - v break # Partial policy evaluation with k=20 iterations for j in range(k): u = ddp.T_sigma(sigma)(u) v = u if i+1 in iters: ax.plot(-v, label='Iteration {0}'.format(i+1)) ax.plot(-results['Policy iteration'].v, label='Fixed Point') ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) ax.set_ylim(0, 2.8e5) ax.set_yticks([0.4e5 * i for i in range(8)]) ax.set_title(label) ax.set_xlabel('State') ax.set_ylabel(r'Value $\times\ (-1)$') plt.legend(loc=4) plt.show() """ Explanation: Modified policy iteration End of explanation """ ddp.beta = 0.9999 v_init = np.zeros(ddp.num_states) epsilon = 1164 ddp.max_iter = 10**5 * 2 results_9999 = {} for i in range(4): k = 20 if labels[i] == 'Modified policy iteration' else 0 results_9999[labels[i]] = \ ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k) df_9999 = pd.DataFrame(index=labels, columns=columns) """ Explanation: Increasing the discount factor Let us consider the case with a discount factor closer to $1$, $\beta = 0.9999$. 
End of explanation """ for label in labels: print(results_9999[label].num_iter, '\t' + '(' + label + ')') df_9999[columns[0]].loc[label] = results_9999[label].num_iter """ Explanation: The numbers of iterations: End of explanation """ print(results_9999['Policy iteration'].sigma) """ Explanation: Policy iteration gives the optimal policy: End of explanation """ (1-results_9999['Policy iteration'].sigma).sum() """ Explanation: Takes action 1 ("replace") if and only if $s \geq \bar{\gamma}$, where $\bar{\gamma}$ is equal to: End of explanation """ for result in results_9999.values(): if result != results_9999['Policy iteration']: print(np.array_equal(result.sigma, results_9999['Policy iteration'].sigma)) """ Explanation: Check that the other methods gave the correct answer: End of explanation """ for label in labels: diff_pi = \ np.abs(results_9999[label].v - results_9999['Policy iteration'].v).max() print(diff_pi, '\t' + '(' + label + ')') df_9999[columns[2]].loc[label] = diff_pi """ Explanation: $\lVert v - v_{\mathrm{pi}}\rVert$: End of explanation """ for label in labels: v = results_9999[label].v diff_max = \ np.abs(v - ddp.bellman_operator(v)).max() print(diff_max, '\t' + '(' + label + ')') df_9999[columns[4]].loc[label] = diff_max """ Explanation: $\lVert v - T(v)\rVert$: End of explanation """ for i in range(4): if labels[i] != 'Policy iteration': k = 20 if labels[i] == 'Modified policy iteration' else 0 res = ddp.solve(method=methods[i], v_init=v_init, k=k, max_iter=results_9999[labels[i]].num_iter-1) diff = ddp.bellman_operator(res.v) - res.v diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta) print(diff_span, '\t' + '(' + labels[i] + ')') df_9999[columns[3]].loc[labels[i]] = diff_span """ Explanation: $\overline{b} - \underline{b}$: End of explanation """ label = 'Policy iteration' v = results_9999[label].v diff = ddp.bellman_operator(v) - v diff_span = (diff.max() - diff.min()) * ddp.beta / (1 - ddp.beta) print(diff_span, '\t' + '(' + label 
+ ')') df_9999[columns[3]].loc[label] = diff_span for i in range(4): k = 20 if labels[i] == 'Modified policy iteration' else 0 print(labels[i]) t = %timeit -o ddp.solve(method=methods[i], v_init=v_init, epsilon=epsilon, k=k) df_9999[columns[1]].loc[labels[i]] = t.best df_9999 df_time = pd.DataFrame(index=labels) df_time[r'$\beta = 0.95$'] = df[columns[1]] df_time[r'$\beta = 0.9999$'] = df_9999[columns[1]] second_max = df_time[r'$\beta = 0.9999$'][1:].max() for xlim in [None, (0, second_max*1.2)]: ax = df_time.loc[reversed(labels)][df_time.columns[::-1]].plot( kind='barh', legend='reverse', xlim=xlim, figsize=(8,5) ) ax.set_xlabel('Time (second)') import platform print(platform.platform()) import sys print(sys.version) print(np.__version__) """ Explanation: For policy iteration: End of explanation """
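As a closing standalone illustration of the Bellman update that all four algorithms above are built on, here is a toy two-state, two-action problem solved by plain value iteration (the rewards and transition probabilities are invented for the example):

```python
import numpy as np

# Toy DP: R[s, a] is the reward, Q[s, a, s'] the transition probability.
R = np.array([[5.0, 10.0],
              [-1.0, 2.0]])
Q = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
beta = 0.95

def bellman(v):
    # (T v)(s) = max_a [ R(s, a) + beta * sum_{s'} Q(s, a, s') v(s') ]
    return (R + beta * (Q @ v)).max(axis=1)

v = np.zeros(2)
for _ in range(2000):
    v_new = bellman(v)
    if np.abs(v_new - v).max() < 1e-12:
        break
    v = v_new

print(v)                                    # approximate fixed point of T
print((R + beta * (Q @ v)).argmax(axis=1))  # greedy policy at the fixed point
```

Because T is a beta-contraction, the iterates converge geometrically, which is the error-bound fact the span-based termination above exploits.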
mdigiorgio/lisa
ipynb/tutorial/00_LisaInANutshell.ipynb
apache-2.0
import logging from conf import LisaLogging LisaLogging.setup() # Execute this cell to enable verbose SSH commands logging.getLogger('ssh').setLevel(logging.DEBUG) # Other python modules required by this notebook import json import os """ Explanation: Linux Interactive System Analysis DEMO Get LISA and start the Notebook Server Official repository on GitHub - ARM Software:<br> https://github.com/ARM-software/lisa Installation dependencies are listed in the main page of the repository:<br> https://github.com/ARM-software/lisa#required-dependencies Once cloned, source init_env to initialized the LISA Shell, which provides a convenient set of shell commands for easy access to many LISA related functions. shell $ source init_env To start the IPython Notebook Server required to use this Notebook, on a LISAShell run: ```shell [LISAShell lisa] > lisa-ipython start Starting IPython Notebooks... Starting IPython Notebook server... IP Address : http://127.0.0.1:8888/ Folder : /home/derkling/Code/lisa/ipynb Logfile : /home/derkling/Code/lisa/ipynb/server.log PYTHONPATH : /home/derkling/Code/lisa/libs/bart /home/derkling/Code/lisa/libs/trappy /home/derkling/Code/lisa/libs/devlib /home/derkling/Code/lisa/libs/wlgen /home/derkling/Code/lisa/libs/utils Notebook server task: [1] 24745 ``` The main folder served by the server is:<br> http://127.0.0.1:8888/ While the tutorial notebooks are accessible starting from this link:<br> http://127.0.0.1:8888/notebooks/tutorial/00_LisaInANutshell.ipynb What is an IPython Notebook? Let's do some example! 
Logging configuration and support modules import End of explanation """ # Setup a target configuration conf = { # Target is localhost "platform" : 'linux', "board" : "juno", # Login credentials "host" : "192.168.0.1", "username" : "root", "password" : "", # Binary tools required to run this experiment # These tools must be present in the tools/ folder for the architecture "tools" : ['rt-app', 'taskset', 'trace-cmd'], # Comment the following line to force rt-app calibration on your target # "rtapp-calib" : { # "0": 355, "1": 138, "2": 138, "3": 355, "4": 354, "5": 354 # }, # FTrace events end buffer configuration "ftrace" : { "events" : [ "sched_switch", "sched_wakeup", "sched_wakeup_new", "sched_contrib_scale_f", "sched_load_avg_cpu", "sched_load_avg_task", "sched_tune_config", "sched_tune_tasks_update", "sched_tune_boostgroup_update", "sched_tune_filter", "sched_boost_cpu", "sched_boost_task", "sched_energy_diff", "cpu_frequency", "cpu_capacity", ], "buffsize" : 10240 }, # Where results are collected "results_dir" : "LisaInANutshell", # Devlib module required (or not required) 'modules' : [ "cpufreq", "cgroups", "cpufreq" ], #"exclude_modules" : [ "hwmon" ], } # Support to access the remote target from env import TestEnv # Initialize a test environment using: # the provided target configuration (my_target_conf) # the provided test configuration (my_test_conf) te = TestEnv(conf) target = te.target print "DONE" """ Explanation: <br><br><br><br> Advanced usage: get more confident with IPython notebooks and discover some hidden features<br> notebooks/tutorial/01_IPythonNotebooksUsage.ipynb <br><br><br><br> Remote target connection and control End of explanation """ # Enable Energy-Aware scheduler target.execute("echo ENERGY_AWARE > /sys/kernel/debug/sched_features"); # Check which sched_feature are enabled sched_features = target.read_value("/sys/kernel/debug/sched_features"); print "sched_features:" print sched_features # It's possible also to run custom script # 
my_script = target.get_installed() # target.execute(my_script) """ Explanation: Commands execution on remote target End of explanation """ target.cpufreq.set_all_governors('sched'); # Check which governor is enabled on each CPU enabled_governors = target.cpufreq.get_all_governors() print enabled_governors """ Explanation: Example of frameworks configuration on remote target Configure CPUFreq governor to be "sched-freq" End of explanation """ cpuset = target.cgroups.controller('cpuset') # Configure a big partition cpuset_bigs = cpuset.cgroup('/big') cpuset_bigs.set(cpus=te.target.bl.bigs, mems=0) # Configure a LITTLE partition cpuset_littles = cpuset.cgroup('/LITTLE') cpuset_littles.set(cpus=te.target.bl.littles, mems=0) # Dump the configuraiton of each controller cgroups = cpuset.list_all() for cgname in cgroups: cgroup = cpuset.cgroup(cgname) attrs = cgroup.get() cpus = attrs['cpus'] print '{}:{:<15} cpus: {}'.format(cpuset.kind, cgroup.name, cpus) """ Explanation: Create a big/LITTLE partition using CGroups::CPUSet End of explanation """ # RTApp configurator for generation of PERIODIC tasks from wlgen import RTA, Periodic, Ramp # Light workload light = Periodic( duty_cycle_pct = 10, duration_s = 3, period_ms = 32, ) # Ramp workload ramp = Ramp( start_pct=10, end_pct=60, delta_pct=20, time_s=0.5, period_ms=16 ) # Heavy workload heavy = Periodic( duty_cycle_pct=60, duration_s=3, period_ms=16 ) # Composed workload lrh_task = light + ramp + heavy # Create a new RTApp workload generator using the calibration values # reported by the TestEnv module rtapp = RTA(target, 'test', calibration=te.calibration()) # Configure this RTApp instance to: rtapp.conf( # 1. generate a "profile based" set of tasks kind = 'profile', # 2. define the "profile" of each task params = { # 3. 
Composed task 'task_lrh': lrh_task.get(), }, #loadref='big', loadref='LITTLE', run_dir=target.working_directory ); # Inspect the JSON file used to run the application with open('./test_00.json', 'r') as fh: rtapp_json = json.load(fh) logging.info('Generated RTApp JSON file:') print json.dumps(rtapp_json, indent=4, sort_keys=True) """ Explanation: <br><br><br><br> Advanced usage: exploring more APIs exposed by TestEnv and Devlib<br> notebooks/tutorial/02_TestEnvUsage.ipynb <br><br><br><br> Using syntethic workloads Generate an RTApp configuration End of explanation """ def execute(te, wload, res_dir): logging.info('# Setup FTrace') te.ftrace.start() logging.info('## Start energy sampling') te.emeter.reset() logging.info('### Start RTApp execution') wload.run(out_dir=res_dir) logging.info('## Read energy consumption: %s/energy.json', res_dir) nrg_report = te.emeter.report(out_dir=res_dir) logging.info('# Stop FTrace') te.ftrace.stop() trace_file = os.path.join(res_dir, 'trace.dat') logging.info('# Save FTrace: %s', trace_file) te.ftrace.get_trace(trace_file) logging.info('# Save platform description: %s/platform.json', res_dir) plt, plt_file = te.platform_dump(res_dir) logging.info('# Report collected data:') logging.info(' %s', res_dir) !tree {res_dir} return nrg_report, plt, plt_file, trace_file nrg_report, plt, plt_file, trace_file = execute(te, rtapp, te.res_dir) """ Explanation: <br><br><br><br> Advanced usage: using WlGen to create more complex RTApp configurations or run other banchmarks (e.g. 
hackbench)<br> notebooks/tutorial/03_WlGenUsage.ipynb <br><br><br><br> Execution and Energy Sampling End of explanation """ import pandas as pd df = pd.DataFrame(list(nrg_report.channels.iteritems()), columns=['Cluster', 'Energy']) df = df.set_index('Cluster') df """ Explanation: Example of energy collected data End of explanation """ # Show the collected platform description with open(os.path.join(te.res_dir, 'platform.json'), 'r') as fh: platform = json.load(fh) print json.dumps(platform, indent=4) logging.info('LITTLE cluster max capacity: %d', platform['nrg_model']['little']['cpu']['cap_max']) """ Explanation: Example of platform description End of explanation """ # Let's look at the trace using kernelshark... trace_file = te.res_dir + '/trace.dat' !kernelshark {trace_file} 2>/dev/null """ Explanation: <br><br><br><br> Advanced Workload Execution: using the Executor module to automate data collection for multiple tests<br> notebooks/tutorial/04_ExecutorUsage.ipynb <br><br><br><br> Trace Visualization (the kernelshark way) Using kernelshark End of explanation """ # Suport for FTrace events parsing and visualization import trappy # NOTE: The interactive trace visualization is available only if you run # the workload to generate a new trace-file trappy.plotter.plot_trace(trace_file) """ Explanation: Using the TRAPpy Trace Plotter End of explanation """ # Load the LISA::Trace parsing module from trace import Trace # Define which event we are interested into trace = Trace(te.platform, te.res_dir, [ "sched_switch", "sched_load_avg_cpu", "sched_load_avg_task", "sched_boost_cpu", "sched_boost_task", "cpu_frequency", "cpu_capacity", ]) # Let's have a look at the set of events collected from the trace ftrace = trace.ftrace logging.info("List of events identified in the trace:") for event in ftrace.class_definitions.keys(): logging.info(" %s", event) # Trace events are converted into tables, let's have a look at one # of such tables df = 
trace.data_frame.trace_event('sched_load_avg_task') df.head() # Simple selection of events based on conditional values #df[df.comm == 'task_lrh'].head() # Simple selection of specific signals #df[df.comm == 'task_lrh'][['util_avg']].head() # Simple statistics reporting #df[df.comm == 'task_lrh'][['util_avg']].describe() """ Explanation: Example of Trace Analysis Generate DataFrames from Trace Events End of explanation """ # Signals can be easily plot using the ILinePlotter trappy.ILinePlot( # FTrace object ftrace, # Signals to be plotted signals=[ 'sched_load_avg_cpu:util_avg', 'sched_load_avg_task:util_avg' ], # # Generate one plot for each value of the specified column # pivot='cpu', # # Generate only plots which satisfy these filters # filters={ # 'comm': ['task_lrh'], # 'cpu' : [0,5] # }, # Formatting style per_line=2, drawstyle='steps-post', marker = '+' ).view() """ Explanation: <br><br><br><br> Advanced DataFrame usage: filtering by columns/rows, merging tables, plotting data<br> notebooks/tutorial/05_TrappyUsage.ipynb <br><br><br><br> Easy plot signals from DataFrams End of explanation """ from bart.sched.SchedMultiAssert import SchedAssert # Create an object to get/assert scheduling pbehaviors sa = SchedAssert(ftrace, te.topology, execname='task_lrh') """ Explanation: Example of Behavioral Analysis End of explanation """ # Check the residency of a task on the LITTLE cluster print "Task residency [%] on LITTLE cluster:",\ sa.getResidency( "cluster", te.target.bl.littles, percent=True ) # Check on which CPU the task start its execution print "Task initial CPU:",\ sa.getFirstCpu() """ Explanation: Get tasks behaviors End of explanation """ import operator # Define the time window where we want focus our assertions start_s = sa.getStartTime() little_residency_window = (start_s, start_s + 10) # Defined the expected task residency EXPECTED_RESIDENCY_PCT=99 result = sa.assertResidency( "cluster", te.target.bl.littles, EXPECTED_RESIDENCY_PCT, operator.ge, 
window=little_residency_window, percent=True
)
print "Task running {} [%] of its time on LITTLE? {}"\
    .format(EXPECTED_RESIDENCY_PCT, result)

result = sa.assertFirstCpu(te.target.bl.bigs)
print "Task starting on a big CPU? {}".format(result)

"""
Explanation: Check for expected behaviors
End of explanation
"""
# Focus on sched_switch events
df = ftrace.sched_switch.data_frame

# # Select only interesting columns
# df = df.ix[:,'next_comm':'prev_state']

# # Group sched_switch events by the task switching into the CPU
# df = df.groupby('next_pid').describe(include=['object'])
# df = df.unstack()

# # Sort sched_switch events by the number of times a task switches into the CPU
# df = df['next_comm'].sort_values(by=['count'], ascending=False)

df.head()

# # Get topmost task name and PID
# most_switching_pid = df.index[1]
# most_switching_task = df.values[1][2]
# task_name = "{}:{}".format(most_switching_pid, most_switching_task)

# # Print result
# logging.info("The most switching task is: [%s]", task_name)

"""
Explanation: Examples of Data analysis
Which task is the most active switcher? 
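The counting idea behind this analysis can be sketched with plain Python on made-up events before touching a real trace (the tuples below are invented; on a real run they would come from ftrace.sched_switch.data_frame):

```python
from collections import Counter

# Invented (next_pid, next_comm) pairs, one per context switch; a real
# trace would provide these as rows of the sched_switch event table.
switches = [
    (1201, "task_lrh"), (0, "swapper"), (1201, "task_lrh"),
    (735, "kworker/0:1"), (1201, "task_lrh"), (0, "swapper"),
]

# Count how many times each task switched into a CPU and pick the top one.
counts = Counter(comm for _pid, comm in switches)
top_comm, top_count = counts.most_common(1)[0]
print("most switching task: %s (%d switches)" % (top_comm, top_count))
```

The pandas pipeline in the next cell performs the same aggregation on the full event table.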
End of explanation """ # Focus on cpu_frequency events for CPU0 df = ftrace.cpu_frequency.data_frame df = df[df.cpu == 0] # # Compute the residency on each OPP before switching to the next one # df.loc[:,'start'] = df.index # df.loc[:,'delta'] = (df['start'] - df['start'].shift()).fillna(0).shift(-1) # # Group by frequency and sum-up the deltas # freq_residencies = df.groupby('frequency')['delta'].sum() # logging.info("Residency time per OPP:") # df = pd.DataFrame(freq_residencies) df.head() # # Compute the relative residency time # tot = sum(freq_residencies) # #df = df.apply(lambda delta : 100*delta/tot) # for f in freq_residencies.index: # logging.info("Freq %10dHz : %5.1f%%", f, 100*freq_residencies[f]/tot) # Plot residency time import matplotlib.pyplot as plt # Enable generation of Notebook emebedded plots %matplotlib inline fig, axes = plt.subplots(1, 1, figsize=(16, 5)); df.plot(kind='bar', ax=axes); """ Explanation: What are the relative residency on different OPPs? End of explanation """ from perf_analysis import PerfAnalysis # Full analysis function def analysis(t_min=None, t_max=None): test_dir = te.res_dir platform_json = '{}/platform.json'.format(test_dir) trace_file = '{}/trace.dat'.format(test_dir) # Load platform description data with open(platform_json, 'r') as fh: platform = json.load(fh) # Load RTApp Performance data pa = PerfAnalysis(test_dir) logging.info("Loaded performance data for tasks: %s", pa.tasks()) # Load Trace data #events = my_tests_conf['ftrace']['events'] events = [ "sched_switch", "sched_contrib_scale_f", "sched_load_avg_cpu", "sched_load_avg_task", "cpu_frequency", "cpu_capacity", ] trace = Trace(platform, test_dir, events) # Define time ranges for all the temporal plots trace.setXTimeRange(t_min, t_max) # Tasks performances plots for task in pa.tasks(): pa.plotPerf(task) # Tasks plots trace.analysis.tasks.plotTasks(pa.tasks()) # Cluster and CPUs plots trace.analysis.frequency.plotClusterFrequencies() analysis() """ Explanation: 
Example of Custom Plotting End of explanation """
newsapps/public-notebooks
Download recent crimes from the data portal.ipynb
mit
import json import requests CRIME_SOCRATA_VIEW_ID = 'ijzp-q8t2' def get_data_portal_url(view_id): return 'http://data.cityofchicago.org/api/views/{view_id}'.format( view_id=view_id) def get_dataset_columns(view_id): """ Get dataset field names from the Socrata API Returns: A dictionary that acts as a lookup table from column ID to column name """ url = get_data_portal_url(view_id) meta_response = requests.get(url) if not meta_response.ok: meta_response.raise_for_status() meta = meta_response.json() return {c['id']: c['name'] for c in meta['columns']} columns = get_dataset_columns(CRIME_SOCRATA_VIEW_ID) for column_id, name in columns.items(): print("{}: {}".format(column_id, name)) """ Explanation: We're going to download recent crimes from the City of Chicago's data portal. Ultimately, we're going to construct a query using the REST API of Socrata, the service/software used to host the city's data. This data set has changed in the past, so before we build our query, let's get a list of the column names so we can figure out which column to filter on to find recent crimes. End of explanation """ date_column_id, date_column_name = next((i, n) for i, n in columns.items() if n.lower() == "date") print("Date column ID: {}".format(date_column_id)) """ Explanation: It looks like the column named "Date" with an ID of "154418879" is the one we want. End of explanation """ def slugify(s, replacement='_'): return s.replace(' ', replacement).lower() def get_clean_column_lookup(column_lookup): return {str(i): slugify(n) for i, n in column_lookup.items()} human_columns = get_clean_column_lookup(columns) import pprint pprint.pprint(human_columns) def humanize_columns(row, column_lookup): humanized = {} for column_id, value in row.items(): try: humanized[column_lookup[column_id]] = value except KeyError: humanized[column_id] = value return humanized """ Explanation: The response data contains column IDs rather than column names. 
Let's build a lookup table to convert them later, and a helper function to "fix" our rows that we get from the API. End of explanation """ from datetime import date, timedelta def build_query(since_date, date_column_id, view_id): """ Get a Socrata API query for all records updated after the last update Args: since_date (datetine.date): date object. All crimes since this date will be retrieved. date_column_id (str): String containing the column ID for the dates we'll filter on view_id (str): Socrata view ID for this dataset Returns: Dictionary that can be serialized into a JSON sring used as the POST body to the Socrata API """ query = { 'originalViewId': view_id, 'name': 'inline filter', 'query' : { 'filterCondition': { 'type': 'operator', 'value': 'AND', 'children' : [{ 'type' : 'operator', 'value' : 'GREATER_THAN', 'children': [{ 'columnId' : date_column_id, 'type' : 'column', }, { 'type' : 'literal', 'value' : since_date.strftime('%Y-%m-%d'), }], }], }, } } return query # Months are different lenghts. Let's just find the date 30 days ago today = date.today() date_30_days_ago = today - timedelta(days=30) query = build_query(date_30_days_ago, date_column_id, CRIME_SOCRATA_VIEW_ID) import pprint print("The query looks like this: ") pprint.pprint(query) """ Explanation: Let's build a query for the Socrata API. 
End of explanation """ import json import requests def get_rows_url(start, count): url_tpl = "https://data.cityofchicago.org/api/views/INLINE/rows.json?method=getRows&start={start}&length={length}" return url_tpl.format( start=start, length=count ) def get_rows(query, start=0, count=1000): url = get_rows_url(start, count) headers = { 'content-type' : 'application/json' } response = requests.post(url, data=json.dumps(query), headers=headers, verify=False) return response.json() def transform_row(row, transforms): transformed_row = row for transform in transforms: transformed_row = transform(transformed_row) return transformed_row def get_all_rows(query, transforms=[]): continue_fetching = True page_size = 1000 start = 0 while continue_fetching: rows = get_rows(query, start, page_size) if len(rows) < page_size: continue_fetching = False start += page_size for row in rows: yield(transform_row(row, transforms)) crimes = list(get_all_rows(query, transforms=[lambda r: humanize_columns(r, human_columns)])) import pprint print("There are {} crimes since {}".format(len(crimes), date_30_days_ago.strftime("%Y-%m-%d"))) print("The first one looks like: ") pprint.pprint(crimes[0]) """ Explanation: Now let's request the data from the API, using our query End of explanation """
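The paging logic in get_all_rows above generalizes to any offset-based API: keep requesting fixed-size pages until a short page signals the end. A self-contained sketch of that pattern, with a fake in-memory fetcher standing in for the Socrata call:

```python
def fetch_page(start, count):
    # Fake stand-in for the paginated rows.json request.
    data = list(range(25))
    return data[start:start + count]

def fetch_all(page_size=10):
    # Generator that walks pages until a short (or empty) page is returned.
    start = 0
    while True:
        page = fetch_page(start, page_size)
        for row in page:
            yield row
        if len(page) < page_size:  # short page => no more data
            return
        start += page_size

rows = list(fetch_all())
print(len(rows))
```

Yielding row by row keeps memory use flat no matter how many pages the API returns.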
MikeLing/shogun
doc/ipython-notebooks/intro/Introduction.ipynb
gpl-3.0
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
#To import all Shogun classes
from shogun import *
"""
Explanation: Machine Learning with Shogun
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
In this notebook we will see how machine learning problems are generally represented and solved in Shogun. As a primer to Shogun's many capabilities, we will see how various types of data and their attributes are handled, and also how prediction is done.
Introduction
Using datasets
Feature representations
Labels
Preprocessing data
Supervised Learning with Shogun's CMachine interface
Evaluating performance and Model selection
Example: Regression
Introduction
Machine learning concerns the construction and study of systems that can learn from data by exploiting certain types of structure within these. The uncovered patterns are then used to predict future data, or to perform other kinds of decision making. Two main classes (among others) of machine learning algorithms are: predictive or supervised learning, and descriptive or unsupervised learning. Shogun provides functionality to address those (and more) problem classes.
End of explanation
"""
#Load the file
data_file=LibSVMFile(os.path.join(SHOGUN_DATA_DIR, 'uci/diabetes/diabetes_scale.svm'))
"""
Explanation: In a general problem setting for the supervised learning approach, the goal is to learn a mapping from inputs $x_i\in\mathcal{X}$ to outputs $y_i \in \mathcal{Y}$, given a labeled set of input-output pairs $\mathcal{D} = \{(x_i,y_i)\}_{i=1}^{N} \subseteq \mathcal{X} \times \mathcal{Y}$.
Here $\mathcal{D}$ is called the training set, and $N$ is the number of training examples. In the simplest setting, each training input $x_i$ is a $D$-dimensional vector of numbers, representing, say, the height and weight of a person. These are called $\textbf{features}$, attributes or covariates. In general, however, $x_i$ could be a complex structured object, such as an image.<ul><li>When the response variable $y_i$ is categorical and discrete, $y_i \in \{1,\ldots,C\}$ (say male or female), it is a classification problem.</li><li>When it is continuous (say the prices of houses), it is a regression problem.</li></ul>
For the unsupervised learning approach we are only given inputs, $\mathcal{D} = \{x_i\}_{i=1}^{N}$, and the goal is to find "interesting patterns" in the data.
Using datasets
Let us consider an example: we have a dataset about various attributes of individuals, and we know whether or not they are diabetic. The data reveals certain configurations of attributes that correspond to diabetic patients and others that correspond to non-diabetic patients. When given a set of attributes for a new patient, the goal is to predict whether the patient is diabetic or not. This type of learning problem falls under supervised learning, in particular, classification.
Shogun provides the capability to load datasets of different formats using CFile.<br/>
A real-world dataset, the Pima Indians Diabetes data set, is used now. We load the LibSVM format file using Shogun's LibSVMFile class. The LibSVM format is:
$$\text{label} \quad \text{attribute1:value1} \quad \text{attribute2:value2} \quad \ldots$$
LibSVM uses the so-called "sparse" format, where zero values do not need to be stored.
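The format is straightforward to parse by hand; a minimal plain-Python sketch, independent of Shogun, with a made-up input line:

```python
def parse_libsvm_line(line):
    # "label index:value index:value ..." with zero entries omitted.
    parts = line.strip().split()
    label = float(parts[0])
    values = {}
    for item in parts[1:]:
        idx, val = item.split(":")
        values[int(idx)] = float(val)
    return label, values

label, values = parse_libsvm_line("-1 1:0.58 5:-0.33 7:1.0")
print(label, values)
```

Any attribute index missing from the dictionary is implicitly zero, which is what makes the format "sparse".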
End of explanation
"""
f=SparseRealFeatures()
trainlab=f.load_with_labels(data_file)
mat=f.get_full_feature_matrix()

#extract 2 attributes
glucose_conc=mat[1]
BMI=mat[5]

#generate a numpy array
feats=array(glucose_conc)
feats=vstack((feats, array(BMI)))
print feats, feats.shape
"""
Explanation: This results in a LibSVMFile object which we will later use to access the data.
Feature representations
To get off the mark, let us see how Shogun handles the attributes of the data using the CFeatures class. Shogun supports a wide range of feature representations. We believe it is a good idea to have different forms of data, rather than converting them all into matrices. Among these are: $\hspace {20mm}$<ul><li>String features: Implements a list of strings. Not limited to character strings, but could also be sequences of floating point numbers etc. Have varying dimensions.</li> <li>Dense features: Implements dense feature matrices.</li> <li>Sparse features: Implements sparse matrices.</li><li>Streaming features: For algorithms working on data streams (which are too large to fit into memory)</li></ul>
SparseRealFeatures (sparse features handling 64 bit float type data) are used to get the data from the file. Since LibSVM format files have labels included in the file, the load_with_labels method of SparseRealFeatures is used. In this case it is interesting to play with two attributes, Plasma glucose concentration and Body Mass Index (BMI), and try to learn something about their relationship with the disease. We get hold of the feature matrix using get_full_feature_matrix, and row vectors 1 and 5 are extracted. These are the attributes we are interested in.
End of explanation
"""
#convert to shogun format
feats_train=RealFeatures(feats)
"""
Explanation: In numpy, this is a matrix of 2 row-vectors of dimension 768. However, in Shogun, this will be a matrix of 768 column vectors of dimension 2. 
This is because each data sample is stored in a column-major fashion, meaning each column here corresponds to an individual sample and each row in it to an attribute like BMI, glucose concentration etc. To convert the extracted matrix into Shogun format, RealFeatures are used, which are nothing but the above-mentioned Dense features of 64bit Float type. To do this, call RealFeatures with the matrix (this should be a 64bit 2D numpy array) as the argument.
End of explanation
"""
#Get number of features(attributes of data) and num of vectors(samples)
feat_matrix=feats_train.get_feature_matrix()
num_f=feats_train.get_num_features()
num_s=feats_train.get_num_vectors()

print('Number of attributes: %s and number of samples: %s' %(num_f, num_s))
print('Number of rows of feature matrix: %s and number of columns: %s' %(feat_matrix.shape[0], feat_matrix.shape[1]))
print('First column of feature matrix (Data for first individual):')
print feats_train.get_feature_vector(0)
"""
Explanation: Some of the general methods you might find useful are:
get_feature_matrix(): The feature matrix can be accessed using this.
get_num_features(): The total number of attributes can be accessed using this.
get_num_vectors(): To get the total number of samples in the data.
get_feature_vector(): To get all the attribute values (A.K.A. feature vector) for a particular sample, by passing the index of the sample as argument.
End of explanation
"""
#convert to shogun format labels
labels=BinaryLabels(trainlab)
"""
Explanation: Assigning labels
In supervised learning problems, training data is labelled. Shogun provides various types of labels to do this through CLabels. Some of these are:<ul><li>Binary labels: Binary Labels for binary classification, which can have values +1 or -1.</li><li>Multiclass labels: Multiclass Labels for multi-class classification, which can have values from 0 to (num. 
of classes-1).</li><li>Regression labels: Real-valued labels used for regression problems and are returned as output of classifiers.</li><li>Structured labels: Class of the labels used in Structured Output (SO) problems</li></ul></br> In this particular problem, our data can be of two types: diabetic or non-diabetic, so we need binary labels. This makes it a Binary Classification problem, where the data has to be classified in two groups. End of explanation """ n=labels.get_num_labels() print 'Number of labels:', n """ Explanation: The labels can be accessed using get_labels and the confidence vector using get_values. The total number of labels is available using get_num_labels. End of explanation """ preproc=PruneVarSubMean(True) preproc.init(feats_train) feats_train.add_preprocessor(preproc) feats_train.apply_preprocessor() # Store preprocessed feature matrix. preproc_data=feats_train.get_feature_matrix() # Plot the raw training data. figure(figsize=(13,6)) pl1=subplot(121) gray() _=scatter(feats[0, :], feats[1,:], c=labels, s=50) vlines(0, -1, 1, linestyle='solid', linewidths=2) hlines(0, -1, 1, linestyle='solid', linewidths=2) title("Raw Training Data") _=xlabel('Plasma glucose concentration') _=ylabel('Body mass index') p1 = Rectangle((0, 0), 1, 1, fc="w") p2 = Rectangle((0, 0), 1, 1, fc="k") pl1.legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2) #Plot preprocessed data. pl2=subplot(122) _=scatter(preproc_data[0, :], preproc_data[1,:], c=labels, s=50) vlines(0, -5, 5, linestyle='solid', linewidths=2) hlines(0, -5, 5, linestyle='solid', linewidths=2) title("Training data after preprocessing") _=xlabel('Plasma glucose concentration') _=ylabel('Body mass index') p1 = Rectangle((0, 0), 1, 1, fc="w") p2 = Rectangle((0, 0), 1, 1, fc="k") pl2.legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2) gray() """ Explanation: Preprocessing data It is usually better to preprocess data to a standard form rather than handling it in raw form. 
The reasons are having a well behaved-scaling, many algorithms assume centered data, and that sometimes one wants to de-noise data (with say PCA). Preprocessors do not change the domain of the input features. It is possible to do various type of preprocessing using methods provided by CPreprocessor class. Some of these are:<ul><li>Norm one: Normalize vector to have norm 1.</li><li>PruneVarSubMean: Substract the mean and remove features that have zero variance. </li><li>Dimension Reduction: Lower the dimensionality of given simple features.<ul><li>PCA: Principal component analysis.</li><li>Kernel PCA: PCA using kernel methods.</li></ul></li></ul> The training data will now be preprocessed using CPruneVarSubMean. This will basically remove data with zero variance and subtract the mean. Passing a True to the constructor makes the class normalise the varaince of the variables. It basically dividies every dimension through its standard-deviation. This is the reason behind removing dimensions with constant values. It is required to initialize the preprocessor by passing the feature object to init before doing anything else. The raw and processed data is now plotted. End of explanation """ #prameters to svm C=0.9 svm=LibLinear(C, feats_train, labels) svm.set_liblinear_solver_type(L2R_L2LOSS_SVC) #train svm.train() size=100 """ Explanation: Horizontal and vertical lines passing through zero are included to make the processing of data clear. Note that the now processed data has zero mean. <a id='supervised'>Supervised Learning with Shogun's <a href='http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html'>CMachine</a> interface</a> CMachine is Shogun's interface for general learning machines. Basically one has to train() the machine on some training data to be able to learn from it. Then we apply() it to test data to get predictions. 
Some of these are: <ul><li>Kernel machine: Kernel based learning tools.</li><li>Linear machine: Interface for all kinds of linear machines like classifiers.</li><li>Distance machine: A distance machine is based on a a-priori choosen distance.</li><li>Gaussian process machine: A base class for Gaussian Processes. </li><li>And many more</li></ul> Moving on to the prediction part, Liblinear, a linear SVM is used to do the classification (more on SVMs in this notebook). A linear SVM will find a linear separation with the largest possible margin. Here C is a penalty parameter on the loss function. End of explanation """ x1=linspace(-5.0, 5.0, size) x2=linspace(-5.0, 5.0, size) x, y=meshgrid(x1, x2) #Generate X-Y grid test data grid=RealFeatures(array((ravel(x), ravel(y)))) #apply on test grid predictions = svm.apply(grid) #get output labels z=predictions.get_values().reshape((size, size)) #plot jet() figure(figsize=(9,6)) title("Classification") c=pcolor(x, y, z) _=contour(x, y, z, linewidths=1, colors='black', hold=True) _=colorbar(c) _=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50) _=xlabel('Plasma glucose concentration') _=ylabel('Body mass index') p1 = Rectangle((0, 0), 1, 1, fc="w") p2 = Rectangle((0, 0), 1, 1, fc="k") legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2) gray() """ Explanation: We will now apply on test features to get predictions. For visualising the classification boundary, the whole XY is used as test data, i.e. we predict the class on every point in the grid. 
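At prediction time, a linear SVM only needs the sign of the decision function w·x + b for each grid point; a pure-Python sketch of that rule, with made-up weights and bias:

```python
def predict(w, b, x):
    # Decision rule of a trained linear classifier: sign(w . x + b).
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w, b = [0.8, 0.6], -0.5   # made-up weights and bias, for illustration only
print(predict(w, b, [1.0, 1.0]))    # one side of the hyperplane
print(predict(w, b, [-1.0, -1.0]))  # the other side
```

Evaluating this rule on every point of the XY grid is what produces the coloured decision regions in the plot below.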
End of explanation """ w=svm.get_w() b=svm.get_bias() x1=linspace(-2.0, 3.0, 100) #solve for w.x+b=0 def solve (x1): return -( ( (w[0])*x1 + b )/w[1] ) x2=map(solve, x1) #plot figure(figsize=(7,6)) plot(x1,x2, linewidth=2) title("Decision boundary using w and bias") _=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50) _=xlabel('Plasma glucose concentration') _=ylabel('Body mass index') p1 = Rectangle((0, 0), 1, 1, fc="w") p2 = Rectangle((0, 0), 1, 1, fc="k") legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2) print 'w :', w print 'b :', b """ Explanation: Let us have a look at the weight vector of the separating hyperplane. It should tell us about the linear relationship between the features. The decision boundary is now plotted by solving for $\bf{w}\cdot\bf{x}$ + $\text{b}=0$. Here $\text b$ is a bias term which allows the linear function to be offset from the origin of the used coordinate system. Methods get_w() and get_bias() are used to get the necessary values. End of explanation """ #split features for training and evaluation num_train=700 feats=array(glucose_conc) feats_t=feats[:num_train] feats_e=feats[num_train:] feats=array(BMI) feats_t1=feats[:num_train] feats_e1=feats[num_train:] feats_t=vstack((feats_t, feats_t1)) feats_e=vstack((feats_e, feats_e1)) feats_train=RealFeatures(feats_t) feats_evaluate=RealFeatures(feats_e) """ Explanation: For this problem, a linear classifier does a reasonable job in distinguishing labelled data. An interpretation could be that individuals below a certain level of BMI and glucose are likely to have no Diabetes. For problems where the data cannot be separated linearly, there are more advanced classification methods, as for example all of Shogun's kernel machines, but more on this later. To play with this interactively have a look at this: web demo Evaluating performance and Model selection How do you assess the quality of a prediction? Shogun provides various ways to do this using CEvaluation. 
The performance is evaluated by comparing the predicted output and the expected output. Some of the base classes for performance measures are:
Binary class evaluation: used to evaluate binary classification labels.
Clustering evaluation: used to evaluate clustering.
Mean absolute error: used to compute the error of a regression model.
Multiclass accuracy: used to compute the accuracy of multiclass classification.
Evaluating on training data should be avoided, since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data.
The dataset will now be split into two: we train on one part and evaluate performance on the other, using CAccuracyMeasure.
End of explanation
"""
#split features for training and evaluation
num_train=700
feats=array(glucose_conc)
feats_t=feats[:num_train]
feats_e=feats[num_train:]
feats=array(BMI)
feats_t1=feats[:num_train]
feats_e1=feats[num_train:]
feats_t=vstack((feats_t, feats_t1))
feats_e=vstack((feats_e, feats_e1))
feats_train=RealFeatures(feats_t)
feats_evaluate=RealFeatures(feats_e)
"""
Explanation: Let's see the accuracy by applying on test features.
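What an accuracy measure reports boils down to the fraction of predictions that match the true labels, scaled to a percentage; a plain-Python sketch on toy labels (not the diabetes data):

```python
def accuracy(predicted, true_labels):
    # Fraction of positions where prediction and truth agree, in percent.
    matches = sum(1 for p, t in zip(predicted, true_labels) if p == t)
    return 100.0 * matches / len(true_labels)

print(accuracy([1, -1, 1, 1, -1], [1, -1, -1, 1, -1]))
```

The evaluation in the next cell computes the same quantity on the held-out split.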
End of explanation """ temp_feats=RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat'))) labels=RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat'))) #rescale to 0...1 preproc=RescaleFeatures() preproc.init(temp_feats) temp_feats.add_preprocessor(preproc) temp_feats.apply_preprocessor(True) mat = temp_feats.get_feature_matrix() dist_centres=mat[7] lower_pop=mat[12] feats=array(dist_centres) feats=vstack((feats, array(lower_pop))) print feats, feats.shape #convert to shogun format features feats_train=RealFeatures(feats) """ Explanation: To evaluate more efficiently cross-validation is used. As you might have wondered how are the parameters of the classifier selected? Shogun has a model selection framework to select the best parameters. More description of these things in this notebook. More predictions: Regression This section will demonstrate another type of machine learning problem on real world data.</br> The task is to estimate prices of houses in Boston using the Boston Housing Dataset provided by StatLib library. The attributes are: Weighted distances to employment centres and percentage lower status of the population. Let us see if we can predict a good relationship between the pricing of houses and the attributes. This type of problems are solved using Regression analysis. The data set is now loaded using LibSVMFile as in the previous sections and the attributes required (7th and 12th vector ) are converted to Shogun format features. 
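Before loading the data, it may help to see what the "ridge" part means in the simplest possible setting: one-dimensional ridge regression without an intercept has a closed form, and the penalty tau shrinks the fitted slope towards zero (a pure-Python sketch on toy numbers, not the housing data; the kernelised version used below follows the same idea in a transformed space):

```python
def ridge_1d(xs, ys, tau):
    # Minimises sum_i (y_i - w*x_i)^2 + tau * w^2; setting the derivative
    # to zero gives the closed form w = sum(x*y) / (sum(x^2) + tau).
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + tau
    return num / den

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(ridge_1d(xs, ys, tau=0.0))   # exact fit: slope 2.0
print(ridge_1d(xs, ys, tau=14.0))  # penalised: slope shrinks towards 0
```

With tau = 0 this is plain least squares; increasing tau trades fit for stability, which is exactly the role the tau parameter plays in KernelRidgeRegression below.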
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D

size=100
x1=linspace(0, 1.0, size)
x2=linspace(0, 1.0, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid=RealFeatures(array((ravel(x), ravel(y))))

#Train on data (both attributes) and predict
width=1.0
tau=0.5
kernel=GaussianKernel(feats_train, feats_train, width)
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train)

kernel.init(feats_train, grid)
out = krr.apply().get_labels()
"""
Explanation: The tool we will use here to perform regression is kernel ridge regression. Kernel ridge regression is a non-parametric version of ridge regression where the kernel trick is used to solve a related linear ridge regression problem in a higher-dimensional space, whose results correspond to non-linear regression in the data-space. Again we train on the data and apply on the X-Y grid to get predictions.
End of explanation
"""
#create feature objects for individual attributes.
feats_test=RealFeatures(x1.reshape(1,len(x1)))

feats_t0=array(dist_centres)
feats_train0=RealFeatures(feats_t0.reshape(1,len(feats_t0)))

feats_t1=array(lower_pop)
feats_train1=RealFeatures(feats_t1.reshape(1,len(feats_t1)))

#Regression with first attribute
kernel=GaussianKernel(feats_train0, feats_train0, width)
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train0)

kernel.init(feats_train0, feats_test)
out0 = krr.apply().get_labels()

#Regression with second attribute
kernel=GaussianKernel(feats_train1, feats_train1, width)
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train1)

kernel.init(feats_train1, feats_test)
out1 = krr.apply().get_labels()

#Visualization of regression
fig=figure(figsize(20,6))
#first plot with only one attribute
fig.add_subplot(131)
title("Regression with 1st attribute")
_=scatter(feats[0, :], labels.get_labels(), cmap=gray(), s=20)
_=xlabel('Weighted distances to employment centres ')
_=ylabel('Median value of homes')
_=plot(x1,out0, linewidth=3)

#second plot
# with only one attribute
fig.add_subplot(132)
title("Regression with 2nd attribute")
_=scatter(feats[1, :], labels.get_labels(), cmap=gray(), s=20)
_=xlabel('% lower status of the population')
_=ylabel('Median value of homes')
_=plot(x1,out1, linewidth=3)

#Both attributes and regression output
ax=fig.add_subplot(133, projection='3d')
z=out.reshape((size, size))
gray()
title("Regression")
ax.plot_wireframe(y, x, z, linewidths=2, alpha=0.4)
ax.set_xlabel('% lower status of the population')
ax.set_ylabel('Distances to employment centres ')
ax.set_zlabel('Median value of homes')
ax.view_init(25, 40)
"""
Explanation: The out variable now contains a relationship between the attributes. Below is an attempt to establish such a relationship between the attributes individually. Separate feature instances are created for each attribute. You could skip the code and have a look at the plots directly if you just want the essence.
End of explanation
"""
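Under the hood, kernel ridge regression has a closed-form solution: alpha = (K + tau*I)^-1 y, with predictions K_test @ alpha. A NumPy sketch of that idea (using one common Gaussian-kernel convention — Shogun's width parameterization may differ):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # pairwise k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, tau=1e-6, sigma=1.0):
    # closed-form dual coefficients: (K + tau*I)^-1 y
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + tau * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# toy 1-D problem: with a tiny tau, predictions at the training
# points reproduce the targets almost exactly
X = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y = np.sin(X).ravel()
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X)
```

Increasing tau (as the notebook's tau=0.5 does) trades this interpolation behaviour for smoother, better-regularized predictions.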
zakandrewking/cobrapy
documentation_builder/getting_started.ipynb
lgpl-2.1
from __future__ import print_function import cobra import cobra.test # "ecoli" and "salmonella" are also valid arguments model = cobra.test.create_test_model("textbook") """ Explanation: Getting Started Loading a model and inspecting it To begin with, cobrapy comes with bundled models for Salmonella and E. coli, as well as a "textbook" model of E. coli core metabolism. To load a test model, type End of explanation """ print(len(model.reactions)) print(len(model.metabolites)) print(len(model.genes)) """ Explanation: The reactions, metabolites, and genes attributes of the cobrapy model are a special type of list called a cobra.DictList, and each one is made up of cobra.Reaction, cobra.Metabolite and cobra.Gene objects respectively. End of explanation """ model """ Explanation: When using Jupyter notebook this type of information is rendered as a table. End of explanation """ model.reactions[29] """ Explanation: Just like a regular list, objects in the DictList can be retrieved by index. For example, to get the 30th reaction in the model (at index 29 because of 0-indexing): End of explanation """ model.metabolites.get_by_id("atp_c") """ Explanation: Additionally, items can be retrieved by their id using the DictList.get_by_id() function. For example, to get the cytosolic atp metabolite object (the id is "atp_c"), we can do the following: End of explanation """ model.reactions.EX_glc__D_e.bounds """ Explanation: As an added bonus, users with an interactive shell such as IPython will be able to tab-complete to list elements inside a list. While this is not recommended behavior for most code because of the possibility for characters like "-" inside ids, this is very useful while in an interactive prompt: End of explanation """ pgi = model.reactions.get_by_id("PGI") pgi """ Explanation: Reactions We will consider the reaction glucose 6-phosphate isomerase, which interconverts glucose 6-phosphate and fructose 6-phosphate. 
The reaction id for this reaction in our test model is PGI. End of explanation """ print(pgi.name) print(pgi.reaction) """ Explanation: We can view the full name and reaction catalyzed as strings End of explanation """ print(pgi.lower_bound, "< pgi <", pgi.upper_bound) print(pgi.reversibility) """ Explanation: We can also view reaction upper and lower bounds. Because the pgi.lower_bound < 0, and pgi.upper_bound > 0, pgi is reversible. End of explanation """ pgi.check_mass_balance() """ Explanation: We can also ensure the reaction is mass balanced. This function will return elements which violate mass balance. If it comes back empty, then the reaction is mass balanced. End of explanation """ pgi.add_metabolites({model.metabolites.get_by_id("h_c"): -1}) pgi.reaction """ Explanation: In order to add a metabolite, we pass in a dict with the metabolite object and its coefficient End of explanation """ pgi.check_mass_balance() """ Explanation: The reaction is no longer mass balanced End of explanation """ pgi.subtract_metabolites({model.metabolites.get_by_id("h_c"): -1}) print(pgi.reaction) print(pgi.check_mass_balance()) """ Explanation: We can remove the metabolite, and the reaction will be balanced once again. End of explanation """ pgi.reaction = "g6p_c --> f6p_c + h_c + green_eggs + ham" pgi.reaction pgi.reaction = "g6p_c <=> f6p_c" pgi.reaction """ Explanation: It is also possible to build the reaction from a string. However, care must be taken when doing this to ensure reaction id's match those in the model. The direction of the arrow is also used to update the upper and lower bounds. End of explanation """ atp = model.metabolites.get_by_id("atp_c") atp """ Explanation: Metabolites We will consider cytosolic atp as our metabolite, which has the id "atp_c" in our test model. End of explanation """ print(atp.name) print(atp.compartment) """ Explanation: We can print out the metabolite name and compartment (cytosol in this case) directly as string. 
End of explanation
"""
atp.charge
"""
Explanation: We can see that ATP is a charged molecule in our model.
End of explanation
"""
print(atp.formula)
"""
Explanation: We can see the chemical formula for the metabolite as well.
End of explanation
"""
len(atp.reactions)
"""
Explanation: The reactions attribute gives a frozenset of all reactions using the given metabolite. We can use this to count the number of reactions which use atp.
End of explanation
"""
model.metabolites.get_by_id("g6p_c").reactions
"""
Explanation: A metabolite like glucose 6-phosphate will participate in fewer reactions.
End of explanation
"""
gpr = pgi.gene_reaction_rule
gpr
"""
Explanation: Genes
The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9):1290-307.
The GPR is stored as the gene_reaction_rule for a Reaction object as a string.
End of explanation
"""
pgi.genes

pgi_gene = model.genes.get_by_id("b4025")
pgi_gene
"""
Explanation: Corresponding gene objects also exist. These objects are tracked by the reactions themselves, as well as by the model.
End of explanation
"""
pgi_gene.reactions
"""
Explanation: Each gene keeps track of the reactions it catalyzes.
End of explanation
"""
pgi.gene_reaction_rule = "(spam or eggs)"
pgi.genes

pgi_gene.reactions
"""
Explanation: Altering the gene_reaction_rule will create new gene objects if necessary and update all relationships.
End of explanation
"""
model.genes.get_by_id("spam")
"""
Explanation: Newly created genes are also added to the model.
End of explanation
"""
cobra.manipulation.delete_model_genes(
    model, ["spam"], cumulative_deletions=True)
print("after 1 KO: %4d < flux_PGI < %4d" % (pgi.lower_bound, pgi.upper_bound))

cobra.manipulation.delete_model_genes(
    model, ["eggs"], cumulative_deletions=True)
print("after 2 KO: %4d < flux_PGI < %4d" % (pgi.lower_bound, pgi.upper_bound))
"""
Explanation: The delete_model_genes function will evaluate the GPR and set the upper and lower bounds to 0 if the reaction is knocked out. This function can preserve existing deletions or reset them using the cumulative_deletions flag.
End of explanation
"""
cobra.manipulation.undelete_model_genes(model)
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
"""
Explanation: The undelete_model_genes function can be used to reset a gene deletion.
End of explanation
"""
model = cobra.test.create_test_model('textbook')
for reaction in model.reactions[:5]:
    with model as model:
        reaction.knock_out()
        model.optimize()
        print('%s blocked (bounds: %s), new growth rate %f' %
              (reaction.id, str(reaction.bounds), model.objective.value))
"""
Explanation: Making changes reversibly using models as contexts
Quite often, one wants to make small changes to a model and evaluate the impacts of these. For example, we may want to knock-out all reactions sequentially, and see what the impact of this is on the objective function. One way of doing this would be to create a new copy of the model before each knock-out with model.copy(). However, even with small models, this is a very slow approach as models are quite complex objects. A better approach would be to do the knock-out, optimize, and then manually reset the reaction bounds before proceeding with the next reaction. Since this is such a common scenario, however, cobrapy allows us to use the model as a context, to have changes reverted automatically.
End of explanation
"""
[reaction.bounds for reaction in model.reactions[:5]]
"""
Explanation: If we look at those knocked-out reactions, we see that their bounds have all been reverted.
End of explanation
"""
print('original objective: ', model.objective.expression)
with model:
    model.objective = 'ATPM'
    print('print objective in first context:', model.objective.expression)
    with model:
        model.objective = 'ACALD'
        print('print objective in second context:', model.objective.expression)
    print('objective after exiting second context:', model.objective.expression)
print('back to original objective:', model.objective.expression)
"""
Explanation: Nested contexts are also supported.
End of explanation
"""
with model as inner:
    inner.reactions.PFK.knock_out()
"""
Explanation: Most methods that modify the model are supported like this, including adding and removing reactions and metabolites and setting the objective. Supported methods and functions mention this in the corresponding documentation.
While it does not have any actual effect, for syntactic convenience it is also possible to refer to the model by a different name than outside the context, as in the example above.
End of explanation
"""
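The revert-on-exit behaviour of `with model:` can be understood as a small context-manager pattern. The following is a simplified stand-in, not cobrapy's actual implementation (which records an undo operation per change rather than snapshotting):

```python
class RevertingContext(object):
    """Snapshot an object's attributes on entry, restore them on exit."""

    def __init__(self, obj):
        self.obj = obj

    def __enter__(self):
        self._snapshot = dict(vars(self.obj))
        return self.obj

    def __exit__(self, exc_type, exc, tb):
        vars(self.obj).clear()
        vars(self.obj).update(self._snapshot)
        return False  # do not swallow exceptions


class Reaction(object):
    def __init__(self, lower_bound, upper_bound):
        self.lower_bound = lower_bound
        self.upper_bound = upper_bound


rxn = Reaction(-1000, 1000)
with RevertingContext(rxn) as inner:
    inner.lower_bound = 0
    inner.upper_bound = 0  # "knocked out" inside the context
bounds_after = (rxn.lower_bound, rxn.upper_bound)  # restored on exit
```

The same idea explains why nested contexts work: each level keeps its own snapshot, and exiting a level restores only the changes made at that level.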
GoogleCloudPlatform/vertex-ai-samples
community-content/pytorch_image_classification_single_gpu_with_vertex_sdk_and_torchserve/vertex_training_with_custom_container.ipynb
apache-2.0
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"

content_name = "pt-img-cls-gpu-cust-cont-torchserve"
"""
Explanation: PyTorch Image Classification Single GPU using Vertex Training with Custom Container
<table align="left"> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/pytorch_image_classification_single_gpu_with_vertex_sdk_and_torchserve/vertex_training_with_custom_container.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table>
Setup
End of explanation
"""
! ls trainer
! cat trainer/requirements.txt
! pip install -r trainer/requirements.txt
! cat trainer/task.py
%run trainer/task.py --epochs 5 --local-mode
! ls ./tmp
! rm -rf ./tmp
"""
Explanation: Local Training
End of explanation
"""
hostname = "gcr.io"
image_name_train = content_name
tag = "latest"

custom_container_image_uri_train = f"{hostname}/{PROJECT_ID}/{image_name_train}:{tag}"
! cd trainer && docker build -t $custom_container_image_uri_train -f Dockerfile .
! docker run --rm $custom_container_image_uri_train --epochs 5 --local-mode
! docker push $custom_container_image_uri_train
! gcloud container images list --repository $hostname/$PROJECT_ID
"""
Explanation: Vertex Training using Vertex SDK and Custom Container
Build Custom Container
End of explanation
"""
! pip install -r requirements.txt
from google.cloud import aiplatform

aiplatform.init(
    project=PROJECT_ID,
    staging_bucket=BUCKET_NAME,
    location=REGION,
)
"""
Explanation: Initialize Vertex SDK
End of explanation
"""
tensorboard = aiplatform.Tensorboard.create(
    display_name=content_name,
)
"""
Explanation: Create a Vertex Tensorboard Instance
End of explanation
"""
display_name = content_name
gcs_output_uri_prefix = f"{BUCKET_NAME}/{display_name}"
machine_type = "n1-standard-4"
accelerator_count = 1
accelerator_type = "NVIDIA_TESLA_K80"

container_args = [
    "--batch-size",
    "256",
    "--epochs",
    "100",
]

custom_container_training_job = aiplatform.CustomContainerTrainingJob(
    display_name=display_name,
    container_uri=custom_container_image_uri_train,
)

custom_container_training_job.run(
    args=container_args,
    base_output_dir=gcs_output_uri_prefix,
    machine_type=machine_type,
    accelerator_type=accelerator_type,
    accelerator_count=accelerator_count,
    tensorboard=tensorboard.resource_name,
    service_account=SERVICE_ACCOUNT,
)

print(f"Custom Training Job Name: {custom_container_training_job.resource_name}")
print(f"GCS Output URI Prefix: {gcs_output_uri_prefix}")
"""
Explanation: Option: Use a Previously Created Vertex Tensorboard Instance
tensorboard_name = "Your Tensorboard Resource Name or Tensorboard ID"
tensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)
Run a Vertex SDK CustomContainerTrainingJob
End of explanation
"""
! gsutil ls $gcs_output_uri_prefix
"""
Explanation: Training Artifact
End of explanation
"""
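The flags passed above (--epochs, --batch-size, --local-mode) are parsed by the trainer's entrypoint. A minimal sketch of what such a task.py argument parser might look like — the actual trainer/task.py in the repository may differ:

```python
import argparse

def parse_args(argv=None):
    # hypothetical CLI mirroring the flags used in the notebook
    parser = argparse.ArgumentParser(description="image classification trainer (sketch)")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--batch-size", type=int, default=256)
    parser.add_argument("--local-mode", action="store_true",
                        help="write outputs locally instead of to the cloud output dir")
    return parser.parse_args(argv)

# equivalent of: %run trainer/task.py --epochs 5 --local-mode
args = parse_args(["--epochs", "5", "--local-mode"])
```

Because argparse converts dashes to underscores, `--local-mode` becomes `args.local_mode`; the same argument list is what `container_args` forwards to the container entrypoint in the Vertex job.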
girving/tensorflow
tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb
apache-2.0
from __future__ import absolute_import, division, print_function

# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf

tf.enable_eager_execution()

import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

import unicodedata
import re
import numpy as np
import os
import time

print(tf.__version__)
"""
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using tf.keras and eager execution. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
End of explanation """ # Download the file path_to_zip = tf.keras.utils.get_file( 'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip', extract=True) path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt" # Converts the unicode file to ascii def unicode_to_ascii(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn') def preprocess_sentence(w): w = unicode_to_ascii(w.lower().strip()) # creating a space between a word and the punctuation following it # eg: "he is a boy." => "he is a boy ." # Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation w = re.sub(r"([?.!,¿])", r" \1 ", w) w = re.sub(r'[" "]+', " ", w) # replacing everything with space except (a-z, A-Z, ".", "?", "!", ",") w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w) w = w.rstrip().strip() # adding a start and an end token to the sentence # so that the model know when to start and stop predicting. w = '<start> ' + w + ' <end>' return w # 1. Remove the accents # 2. Clean the sentences # 3. Return word pairs in the format: [ENGLISH, SPANISH] def create_dataset(path, num_examples): lines = open(path, encoding='UTF-8').read().strip().split('\n') word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]] return word_pairs # This class creates a word -> index mapping (e.g,. 
# "dad" -> 5) and vice-versa
# (e.g., 5 -> "dad") for each language,
class LanguageIndex():
    def __init__(self, lang):
        self.lang = lang
        self.word2idx = {}
        self.idx2word = {}
        self.vocab = set()

        self.create_index()

    def create_index(self):
        for phrase in self.lang:
            self.vocab.update(phrase.split(' '))

        self.vocab = sorted(self.vocab)

        self.word2idx['<pad>'] = 0
        for index, word in enumerate(self.vocab):
            self.word2idx[word] = index + 1

        for word, index in self.word2idx.items():
            self.idx2word[index] = word

def max_length(tensor):
    return max(len(t) for t in tensor)

def load_dataset(path, num_examples):
    # creating cleaned input, output pairs
    pairs = create_dataset(path, num_examples)

    # index language using the class defined above
    inp_lang = LanguageIndex(sp for en, sp in pairs)
    targ_lang = LanguageIndex(en for en, sp in pairs)

    # Vectorize the input and target languages

    # Spanish sentences
    input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]

    # English sentences
    target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]

    # Calculate max_length of input and output tensor
    # Here, we'll set those to the longest sentence in the dataset
    max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)

    # Padding the input and output tensor to the maximum length
    input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
                                                                 maxlen=max_length_inp,
                                                                 padding='post')
    target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
                                                                  maxlen=max_length_tar,
                                                                  padding='post')

    return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
"""
Explanation: Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
There are a variety of languages available, but we'll use the English-Spanish dataset.
For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data: Add a start and end token to each sentence. Clean the sentences by removing special characters. Create a word index and reverse word index (dictionaries mapping from word → id and id → word). Pad each sentence to a maximum length. End of explanation """ # Try experimenting with the size of that dataset num_examples = 30000 input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples) # Creating training and validation sets using an 80-20 split input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2) # Show length len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val) """ Explanation: Limit the size of the dataset to experiment faster (optional) Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data): End of explanation """ BUFFER_SIZE = len(input_tensor_train) BATCH_SIZE = 64 N_BATCH = BUFFER_SIZE//BATCH_SIZE embedding_dim = 256 units = 1024 vocab_inp_size = len(inp_lang.word2idx) vocab_tar_size = len(targ_lang.word2idx) dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) """ Explanation: Create a tf.data dataset End of explanation """ def gru(units): # If you have a GPU, we recommend using CuDNNGRU(provides a 3x speedup than GRU) # the code automatically does that. 
  if tf.test.is_gpu_available():
    return tf.keras.layers.CuDNNGRU(units,
                                    return_sequences=True,
                                    return_state=True,
                                    recurrent_initializer='glorot_uniform')
  else:
    return tf.keras.layers.GRU(units,
                               return_sequences=True,
                               return_state=True,
                               recurrent_activation='sigmoid',
                               recurrent_initializer='glorot_uniform')

class Encoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
        super(Encoder, self).__init__()
        self.batch_sz = batch_sz
        self.enc_units = enc_units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = gru(self.enc_units)

    def call(self, x, hidden):
        x = self.embedding(x)
        output, state = self.gru(x, initial_state = hidden)
        return output, state

    def initialize_hidden_state(self):
        return tf.zeros((self.batch_sz, self.enc_units))

class Decoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
        super(Decoder, self).__init__()
        self.batch_sz = batch_sz
        self.dec_units = dec_units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = gru(self.dec_units)
        self.fc = tf.keras.layers.Dense(vocab_size)

        # used for attention
        self.W1 = tf.keras.layers.Dense(self.dec_units)
        self.W2 = tf.keras.layers.Dense(self.dec_units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, x, hidden, enc_output):
        # enc_output shape == (batch_size, max_length, hidden_size)

        # hidden shape == (batch_size, hidden size)
        # hidden_with_time_axis shape == (batch_size, 1, hidden size)
        # we are doing this to perform addition to calculate the score
        hidden_with_time_axis = tf.expand_dims(hidden, 1)

        # score shape == (batch_size, max_length, hidden_size)
        score = tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis))

        # attention_weights shape == (batch_size, max_length, 1)
        # we get 1 at the last axis because we are applying score to self.V
        attention_weights = tf.nn.softmax(self.V(score), axis=1)

        # context_vector shape after sum == (batch_size, hidden_size)
        context_vector = attention_weights * enc_output
        context_vector = tf.reduce_sum(context_vector, axis=1)

        # x shape after passing through embedding == (batch_size, 1, embedding_dim)
        x = self.embedding(x)

        # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
        x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)

        # passing the concatenated vector to the GRU
        output, state = self.gru(x)

        # output shape == (batch_size * 1, hidden_size)
        output = tf.reshape(output, (-1, output.shape[2]))

        # output shape == (batch_size * 1, vocab)
        x = self.fc(output)

        return x, state, attention_weights

    def initialize_hidden_state(self):
        return tf.zeros((self.batch_sz, self.dec_units))

encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
"""
Explanation: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size).
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
We're using Bahdanau attention.
Let's decide on notation before writing the simplified form:
FC = Fully connected (dense) layer
EO = Encoder output
H = hidden state
X = input to the decoder
And the pseudo-code:
score = FC(tanh(FC(EO) + FC(H)))
attention weights = softmax(score, axis = 1). Softmax by default is applied on the last axis but here we want to apply it on the 1st axis, since the shape of score is (batch_size, max_length, hidden_size). Max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
context vector = sum(attention weights * EO, axis = 1). Same reason as above for choosing axis as 1.
embedding output = The input to the decoder X is passed through an embedding layer.
merged vector = concat(embedding output, context vector)
This merged vector is then given to the GRU.
The shapes of all the vectors at each step have been specified in the comments in the code:
End of explanation
"""
optimizer = tf.train.AdamOptimizer()

def loss_function(real, pred):
  mask = 1 - np.equal(real, 0)
  loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
  return tf.reduce_mean(loss_)
"""
Explanation: Define the optimizer and the loss function
End of explanation
"""
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
                                 encoder=encoder,
                                 decoder=decoder)
"""
Explanation: Checkpoints (Object-based saving)
End of explanation
"""
EPOCHS = 10

for epoch in range(EPOCHS):
    start = time.time()

    hidden = encoder.initialize_hidden_state()
    total_loss = 0

    for (batch, (inp, targ)) in enumerate(dataset):
        loss = 0

        with tf.GradientTape() as tape:
            enc_output, enc_hidden = encoder(inp, hidden)

            dec_hidden = enc_hidden

            dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)

            # Teacher forcing - feeding the target as the next input
            for t in range(1, targ.shape[1]):
                # passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output) loss += loss_function(targ[:, t], predictions) # using teacher forcing dec_input = tf.expand_dims(targ[:, t], 1) batch_loss = (loss / int(targ.shape[1])) total_loss += batch_loss variables = encoder.variables + decoder.variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables)) if batch % 100 == 0: print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss.numpy())) # saving (checkpoint) the model every 2 epochs if (epoch + 1) % 2 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / N_BATCH)) print('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) """ Explanation: Training Pass the input through the encoder which return encoder output and the encoder hidden state. The encoder output, encoder hidden state and the decoder input (which is the start token) is passed to the decoder. The decoder returns the predictions and the decoder hidden state. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss. Use teacher forcing to decide the next input to the decoder. Teacher forcing is the technique where the target word is passed as the next input to the decoder. The final step is to calculate the gradients and apply it to the optimizer and backpropagate. 
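The difference teacher forcing makes can be seen with a toy deterministic "decoder" over integer tokens — a sketch, not the model above:

```python
def toy_step(prev_token):
    # stand-in for one decoder step: deterministic, imperfect next-token rule
    return (2 * prev_token) % 10

target = [1, 3, 7, 5]  # ground-truth token sequence; target[0] is the start token

# teacher forcing: step t is always conditioned on the true token target[t],
# regardless of what the model predicted at step t-1
tf_preds = [toy_step(tok) for tok in target[:-1]]

# free running: each step is conditioned on the model's own previous output,
# so early mistakes compound through the sequence
fr_preds = []
prev = target[0]
for _ in range(len(target) - 1):
    prev = toy_step(prev)
    fr_preds.append(prev)
```

Under teacher forcing every step sees the correct history even when an earlier step was wrong, which stabilizes training; at inference time (as in the evaluate function below, which feeds predictions back in) only the free-running mode is available.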
End of explanation
"""
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
    attention_plot = np.zeros((max_length_targ, max_length_inp))

    sentence = preprocess_sentence(sentence)

    inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
    inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
    inputs = tf.convert_to_tensor(inputs)

    result = ''

    hidden = [tf.zeros((1, units))]
    enc_out, enc_hidden = encoder(inputs, hidden)

    dec_hidden = enc_hidden
    dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)

    for t in range(max_length_targ):
        predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)

        # storing the attention weights to plot later on
        attention_weights = tf.reshape(attention_weights, (-1, ))
        attention_plot[t] = attention_weights.numpy()

        predicted_id = tf.argmax(predictions[0]).numpy()

        result += targ_lang.idx2word[predicted_id] + ' '

        if targ_lang.idx2word[predicted_id] == '<end>':
            return result, sentence, attention_plot

        # the predicted ID is fed back into the model
        dec_input = tf.expand_dims([predicted_id], 0)

    return result, sentence, attention_plot

# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(1, 1, 1)
    ax.matshow(attention, cmap='viridis')

    fontdict = {'fontsize': 14}

    ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
    ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)

    plt.show()

def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
    result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)

    print('Input: {}'.format(sentence))
    print('Predicted translation: {}'.format(result))

    attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
    plot_attention(attention_plot,
sentence.split(' '), result.split(' ')) """ Explanation: Translate The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output. Stop predicting when the model predicts the end token. And store the attention weights for every time step. Note: The encoder output is calculated only once for one input. End of explanation """ # restoring the latest checkpoint in checkpoint_dir checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) translate('hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) translate('esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) translate('¿todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) # wrong translation translate('trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ) """ Explanation: Restore the latest checkpoint and test End of explanation """
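The score/softmax/context computation inside Decoder.call can be replayed shape-for-shape in NumPy — an illustrative re-implementation with random weights, not the trained Keras layers:

```python
import numpy as np

rng = np.random.RandomState(0)
max_length, hidden, units = 6, 8, 8

enc_output = rng.randn(max_length, hidden)  # one example, no batch axis
dec_hidden = rng.randn(hidden)
W1 = rng.randn(hidden, units)
W2 = rng.randn(hidden, units)
v = rng.randn(units)

# score = v . tanh(W1 EO + W2 H): one scalar per encoder position
score = np.tanh(enc_output @ W1 + dec_hidden @ W2) @ v

# numerically stable softmax over the max_length axis
attention_weights = np.exp(score - score.max())
attention_weights /= attention_weights.sum()

# context vector: attention-weighted sum of encoder outputs
context_vector = (attention_weights[:, None] * enc_output).sum(axis=0)
```

The attention_weights row is exactly what plot_attention draws for one output word: a distribution over input positions that sums to 1.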
vinit-n/dataAnalysis
Python Pandas US Census names/Vinit_Nalawade_Project_Pandas.ipynb
apache-2.0
#import required libraries import pandas as pd import numpy as np #for counter operations from collections import Counter #for plotting graphs import matplotlib.pyplot as plt # Make the graphs a bit prettier, and bigger pd.set_option('display.mpl_style', 'default') pd.set_option('display.width', 5000) pd.set_option('display.max_columns', 60) %matplotlib inline """ Explanation: Name: Vinit Nalawade End of explanation """ #informing python that ',' indicates thousands df = pd.read_clipboard(thousands = ',') df #plot male and female births for the years covered in the data plt.plot(df['Year of birth'], df['Male'], c = 'b', label = 'Male') plt.plot(df['Year of birth'], df['Female'],c = 'r', label = 'Female') plt.legend(loc = 'upper left') #plt.axis([1880, 2015, 0, 2500000]) plt.xlabel('Year of birth') plt.ylabel('No. of births') plt.title('Total births by Sex and Year') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: Part One: Go to the <a href = "https://www.ssa.gov/oact/babynames/numberUSbirths.html">Social Security Administration US births website</a> and select the births table there and copy it to your clipboard. Use the pandas read_clipboard function to read the table into Python, and use matplotlib to plot male and female births for the years covered in the data. 
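Since the clipboard step is not reproducible in a script, the same thousands-separator handling can be sketched with read_csv on an inline string. The birth counts below are made-up illustrative values, not the real SSA figures:

```python
import io

import pandas as pd

# stand-in for the pasted SSA table; the fields are quoted because ',' is both
# the field separator and the thousands mark
raw = io.StringIO(
    "Year of birth,Male,Female\n"
    '1880,"118,399","97,606"\n'
    '1881,"108,282","98,855"\n'
)

# thousands=',' tells pandas to strip the separator and parse the columns as
# integers, exactly what pd.read_clipboard(thousands=',') does above
df = pd.read_csv(raw, thousands=",")
print(df["Male"].tolist())   # [118399, 108282]
```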
End of explanation """ years = range(1881,2011) pieces = [] columns = ['name','sex','births'] for year in years: path = 'names/yob{0:d}.txt'.format(year) frame = pd.read_csv(path,names=columns) frame['year'] = year pieces.append(frame) names = pd.concat(pieces, ignore_index=True) names.head() names.tail() """ Explanation: plot xkcd style :) with plt.xkcd(): #plt.plot(df['Year of birth'], df['Male'], c = 'b', label = 'Male') #plt.plot(df['Year of birth'], df['Female'],c = 'r', label = 'Female') #plt.legend(loc = 'upper left') #plt.xlim(xmax = 2015) #plt.xlabel('Year of birth') #plt.ylabel('No. of births') #plt.title('Male and Female births from 1880 to 2015') #plt.show() In the same notebook, use Python to get a list of male and female names from these files. This data is broken down by year of birth. <br>The files contain names data of the years from 1881 to 2010. <br>Aggregating this data in the "names" dataframe below. End of explanation """ female_names = names[names.sex == 'F'] male_names = names[names.sex == 'M'] print "For Female names" print female_names.head() print "\nFor Male names" print male_names.tail() female_list = list(female_names['name']) male_list = list(male_names['name']) """ Explanation: Part Two: Aggregate the data for all years (see the examples in the Pandas notebooks). Use Python Counters to get letter frequencies for male and female names. Use matplotlib to draw a plot that for each letter (x-axis) shows the frequency of that letter (y-axis) as the last letter for both male and female names. The data is already aggregated in the "names" dataframe. <br>Getting separate dataframes for Males and Females. <br>Defining a List for male and female names. End of explanation """ male_letter_freq = Counter() #converting every letter to lowercase for name in map(lambda x:x.lower(),male_names['name']): for i in name: male_letter_freq[i] += 1 male_letter_freq """ Explanation: Calculating the letter frequency for male names. 
End of explanation """ female_letter_freq = Counter() #converting every letter to lowercase for name in map(lambda x:x.lower(),female_names['name']): for i in name: female_letter_freq[i] += 1 female_letter_freq """ Explanation: Calculating the letter frequency for female names. End of explanation """ male_last_letter_freq = Counter() for name in male_names['name']: male_last_letter_freq[name[-1]] += 1 male_last_letter_freq """ Explanation: Calculating the last letter frequency for male names. End of explanation """ female_last_letter_freq = Counter() for name in female_names['name']: female_last_letter_freq[name[-1]] += 1 female_last_letter_freq """ Explanation: Calculating the last letter frequency for female names. End of explanation """ #for ordering items of counter in ascending order from collections import OrderedDict #plot of last letter frequency of male names in ascending order of letters male_last_letter_freq_asc = OrderedDict(sorted(male_last_letter_freq.items())) plt.bar(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.values(), align='center') plt.xticks(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.keys()) plt.xlabel('Letters') plt.ylabel('Frequency') plt.title('Frequency of last letter for Male names') plt.show() #plot of last letter frequency of female names in ascending order of letters female_last_letter_freq_asc = OrderedDict(sorted(female_last_letter_freq.items())) plt.bar(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), align='center') plt.xticks(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.keys()) plt.xlabel('Letters') plt.ylabel('Frequency') plt.title('Frequency of last letter for Female names') plt.show() female_last_letter_freq_asc = OrderedDict(sorted(female_last_letter_freq.items())) plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'Female') plt.plot(range(len(male_last_letter_freq_asc)), 
male_last_letter_freq_asc.values(), c = 'b', label = 'Male') plt.xticks(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.keys()) plt.xlabel('Letters') plt.ylabel('Frequency') plt.legend(loc = 'upper right') plt.title('Frequency of last letter in names by Sex') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: Plot for each letter showing the frequency of that letter as the last letter for both male and female names. <br>I use the OrderedDict function from collections here to arrange the letters present in the counter in ascending order for plotting. End of explanation """ #to get the decade lists #female_1880 = female_names[female_names['year'] < 1890] #female_1890 = female_names[(female_names['year'] >= 1890) & (female_names['year'] < 1900)] #female_1900 = female_names[(female_names['year'] >= 1900) & (female_names['year'] < 1910)] #female_1910 = female_names[(female_names['year'] >= 1910) & (female_names['year'] < 1920)] #female_1920 = female_names[(female_names['year'] >= 1920) & (female_names['year'] < 1930)] #female_1930 = female_names[(female_names['year'] >= 1930) & (female_names['year'] < 1940)] #female_1940 = female_names[(female_names['year'] >= 1940) & (female_names['year'] < 1950)] #female_1950 = female_names[(female_names['year'] >= 1950) & (female_names['year'] < 1960)] #female_1960 = female_names[(female_names['year'] >= 1960) & (female_names['year'] < 1970)] #female_1970 = female_names[(female_names['year'] >= 1970) & (female_names['year'] < 1980)] #female_1980 = female_names[(female_names['year'] >= 1980) & (female_names['year'] < 1990)] #female_1990 = female_names[(female_names['year'] >= 1990) & (female_names['year'] < 2000)] #female_2000 = female_names[(female_names['year'] >= 2000) & (female_names['year'] < 2010)] #female_2010 = female_names[female_names['year'] >= 2010] #another easier way to 
get the decade lists for females female_1880 = female_names[female_names.year.isin(range(1880,1890))] female_1890 = female_names[female_names.year.isin(range(1890,1900))] female_1900 = female_names[female_names.year.isin(range(1900,1910))] female_1910 = female_names[female_names.year.isin(range(1910,1920))] female_1920 = female_names[female_names.year.isin(range(1920,1930))] female_1930 = female_names[female_names.year.isin(range(1930,1940))] female_1940 = female_names[female_names.year.isin(range(1940,1950))] female_1950 = female_names[female_names.year.isin(range(1950,1960))] female_1960 = female_names[female_names.year.isin(range(1960,1970))] female_1970 = female_names[female_names.year.isin(range(1970,1980))] female_1980 = female_names[female_names.year.isin(range(1980,1990))] female_1990 = female_names[female_names.year.isin(range(1990,2000))] female_2000 = female_names[female_names.year.isin(range(2000,2010))] female_2010 = female_names[female_names.year.isin(range(2010,2011))] #just the year 2010 present #to verify sorting of data print female_1880.head() print female_1880.tail() """ Explanation: Part Three: Now do just female names, but aggregate your data in decades (10 year) increments. Produce a plot that contains the 1880s line, the 1940s line, and the 1990s line, as well as the female line for all years aggregated together from Part Two. Evaluate how stable this statistic is. Speculate on why it is stable, if it is, or on what demographic facts might explain any changes, if there are any. Turn in your ipython notebook file, showing the code you used to complete parts One, Two, and Three. End of explanation """
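The fourteen hand-written decade slices above can also be produced in one pass by bucketing each year into its decade with `year - year % 10`. A sketch on plain (name, year) pairs rather than the full DataFrame:

```python
from collections import defaultdict

# toy stand-in rows for the (name, year) columns of female_names
rows = [("Mary", 1881), ("Anna", 1885), ("Linda", 1947), ("Jessica", 1993)]

by_decade = defaultdict(list)
for name, year in rows:
    decade = year - year % 10          # 1885 -> 1880, 1947 -> 1940, ...
    by_decade[decade].append(name)

print(sorted(by_decade))               # [1880, 1940, 1990]
print(by_decade[1880])                 # ['Mary', 'Anna']
```

With the full DataFrame the same bucketing is a single expression, e.g. grouping on `female_names.year // 10 * 10`.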
End of explanation """ female_1940_freq = Counter() for name in female_1940['name']: female_1940_freq[name[-1]] += 1 female_1940_freq """ Explanation: Preparing data for the 1940s. <br>A counter for last letter frequencies. End of explanation """ female_1990_freq = Counter() for name in female_1990['name']: female_1990_freq[name[-1]] += 1 female_1990_freq """ Explanation: Preparing data for the 1990s. <br>A counter for last letter frequencies. End of explanation """ #for 1880s first = pd.DataFrame.from_dict((OrderedDict(sorted(female_1880_freq.items()))), orient = 'index').reset_index() first.columns = ['letter','frequency'] first['decade'] = '1880s' print first.head() #for 1940s second = pd.DataFrame.from_dict((OrderedDict(sorted(female_1940_freq.items()))), orient = 'index').reset_index() second.columns = ['letter','frequency'] second['decade'] = '1940s' print second.head() #for 1990s third = pd.DataFrame.from_dict((OrderedDict(sorted(female_1990_freq.items()))), orient = 'index').reset_index() third.columns = ['letter','frequency'] third['decade'] = '1990s' print third.head() """ Explanation: Converting the frequency data from counter to dataframes after sorting the letters alphabetically. End of explanation """ #Aggregate 1880s, 1940s and 1990s frequencies frames = [first, second, third] columns = ["letter","frequency", "decade"] req_decades = pd.DataFrame(pd.concat(frames)) req_decades.columns = columns print req_decades.head() print req_decades.tail() #Get data into a pivot table for ease in plotting decades_table = pd.pivot_table(req_decades, index=['letter'], values=['frequency'], columns=['decade']) decades_table.head() """ Explanation: Aggregating all required decades (1880s, 1940s, 1990s) into a single dataframe and then into a pivot table for ease in plotting graphs. 
End of explanation """ #plot the decades as bars and the female line for all years as a line c = ['m','g','c'] decades_table['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Frequency of Last letter of Female names by Female Births') #the female line for all years taken from part 2 plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births') plt.xlabel('Letters') plt.ylabel('Frequency') plt.legend(loc = 'best') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: Plot of last letter of females for 1880s , 1940s, 1990s, and for all years (from part 2). End of explanation """ #plot the decades as bars and the female line for all years as a line c = ['m','g','c'] decades_table['frequency'].plot(kind = 'bar', rot = 0, logy = 'True',color = c, title = 'Log(Frequency) of Last letter of Female names by Female Births') #the female line for all years taken from part 2 plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births') plt.xlabel('Letters') plt.ylabel('Log(Frequency)') plt.legend(loc = 'best') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: The graph has extreme variations in highs and lows. <br>Plotting the logarithmic scale of frequencies takes care of this and makes it easier for comparison. 
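pivot_table reshapes the long letter/frequency/decade frame into one column per decade; a toy frame (made-up counts) shows the reshaping:

```python
import pandas as pd

# long format: one row per (letter, decade) pair, mirroring req_decades above
toy = pd.DataFrame({
    "letter":    ["a", "a", "e", "e"],
    "frequency": [100, 400, 200, 300],
    "decade":    ["1880s", "1990s", "1880s", "1990s"],
})

# wide format: letters as the index, one frequency column per decade
table = pd.pivot_table(toy, index=["letter"], values=["frequency"], columns=["decade"])
print(table.loc["a", ("frequency", "1880s")])
```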
End of explanation """ #plot the decades as bars and the female line for all years as a line c = ['m','g','c'] decades_table['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Frequency of Last letter of Female names by Female Births') #the female line for all years taken from part 2 plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births') plt.xlabel('Letters') plt.ylabel('Frequency') plt.legend(loc = 'best') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: Plot of last letter of females for the 1880s, 1940s, 1990s, and for all years (from part 2). End of explanation """ #plot the decades as bars and the female line for all years as a line c = ['m','g','c'] decades_table['frequency'].plot(kind = 'bar', rot = 0, logy = 'True',color = c, title = 'Log(Frequency) of Last letter of Female names by Female Births') #the female line for all years taken from part 2 plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births') plt.xlabel('Letters') plt.ylabel('Log(Frequency)') plt.legend(loc = 'best') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: The graph has extreme variations in highs and lows. <br>Plotting the logarithmic scale of frequencies takes care of this and makes it easier for comparison. End of explanation """ decades_table.sum() #plot the decades as bars and the female line for all years as a line c = ['m','g','c'] decades_table_prop = decades_table/decades_table.sum().astype(float) decades_table_prop['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Normalized Frequency of Last letter of Female names by Female Births') #the female line for all years taken from part 2 #plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births') plt.xlabel('Letters') plt.ylabel('Normalized Frequency') plt.legend(loc = 'best') #double the size of plot for visibility size = 2 params = plt.gcf() plSize = params.get_size_inches() params.set_size_inches((plSize[0]*size, plSize[1]*size)) plt.show() """ Explanation: Evaluate how stable this statistic is. Speculate on why it is stable, if it is, or on what demographic facts might explain any changes, if there are any. We can normalize the table by the total births in each decade to compute a new table containing, for each decade, the proportion of births whose name ends in each letter. End of explanation """
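The normalization step above is just a column-wise division by each decade's total. In miniature, with made-up counts:

```python
# made-up last-letter counts for two decades
counts = {"1880s": {"a": 300, "e": 200, "y": 100},
          "1990s": {"a": 1500, "e": 300, "y": 200}}

proportions = {}
for decade, freq in counts.items():
    total = sum(freq.values())                 # total births in that decade
    proportions[decade] = {letter: n / float(total) for letter, n in freq.items()}

print(proportions["1880s"]["a"])   # 0.5  (300 / 600)
print(proportions["1990s"]["a"])   # 0.75 (1500 / 2000)
```

This is what `decades_table / decades_table.sum()` does for every column at once, and it makes decades of very different sizes directly comparable.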
infilect/ml-course1
week3/seq2seq/language-translation-notebook/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
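For reference, the text_to_ids() TODO above might be filled in along these lines — a toy-vocabulary sketch of the idea (words become ids, each target sentence gets &lt;EOS&gt; appended), not the graded solution:

```python
# toy vocabularies standing in for the real source/target lookups
source_vocab_to_int = {"new": 0, "jersey": 1, "is": 2, "nice": 3}
target_vocab_to_int = {"<EOS>": 0, "new": 1, "jersey": 2, "est": 3, "agreable": 4}

def text_to_ids_sketch(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # one list of ids per newline-separated sentence
    source_id_text = [[source_vocab_to_int[w] for w in line.split()]
                      for line in source_text.split("\n")]
    # target sentences additionally get the <EOS> id appended
    target_id_text = [[target_vocab_to_int[w] for w in line.split()] + [target_vocab_to_int["<EOS>"]]
                      for line in target_text.split("\n")]
    return source_id_text, target_id_text

src, tgt = text_to_ids_sketch("new jersey is nice", "new jersey est agreable",
                              source_vocab_to_int, target_vocab_to_int)
print(src)   # [[0, 1, 2, 3]]
print(tgt)   # [[1, 2, 3, 4, 0]]  <- trailing 0 is <EOS>
```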
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function return None, None, None, None, None, None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. 
Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation """ def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) """ Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch. 
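What this transformation does can be illustrated on plain Python lists; the graded version performs the same slice-and-prepend on a tensor (typically with tf.strided_slice and tf.concat):

```python
GO_ID = 1   # stand-in for target_vocab_to_int['<GO>']

def process_decoder_input_sketch(target_batch, go_id):
    # drop the last word id of each sequence, then prepend <GO>
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[11, 12, 13], [21, 22, 23]]
print(process_decoder_input_sketch(batch, GO_ID))
# [[1, 11, 12], [1, 21, 22]]
```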
End of explanation """ from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create an Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a 
tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation """ def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size 
of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. 
End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. 
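The three calls compose in a fixed order — encode, process the decoder input, decode. The data flow can be sketched with stand-in stubs (no TensorFlow involved; the names just mirror the functions above):

```python
calls = []  # records the order in which the stages run

def encoding_layer_stub(inputs):
    calls.append("encode")
    return "enc_output", "enc_state"

def process_decoder_input_stub(targets):
    calls.append("process")
    return "dec_input"

def decoding_layer_stub(dec_input, enc_state):
    calls.append("decode")
    return "train_output", "infer_output"

def seq2seq_model_sketch(inputs, targets):
    _, enc_state = encoding_layer_stub(inputs)          # 1. encode the source
    dec_input = process_decoder_input_stub(targets)     # 2. add <GO>, drop last id
    return decoding_layer_stub(dec_input, enc_state)    # 3. decode from enc_state

print(seq2seq_model_sketch("src", "tgt"))  # ('train_output', 'infer_output')
print(calls)                               # ['encode', 'process', 'decode']
```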
End of explanation """ # Number of Epochs epochs = None # Batch Size batch_size = None # RNN Size rnn_size = None # Number of Layers num_layers = None # Embedding Size encoding_embedding_size = None decoding_embedding_size = None # Learning Rate learning_rate = None # Dropout Keep Probability keep_probability = None display_step = None """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = 
tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths """ Explanation: Batch and pad the source and target sequences End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ 
Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // 
batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) """ Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() """ Explanation: Checkpoint End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation """ translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
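For reference, the `sentence_to_seq()` stub left as a TODO above can be filled in along these lines — a sketch, assuming `vocab_to_int` contains an `'<UNK>'` entry as created during preprocessing:

```python
def sentence_to_seq(sentence, vocab_to_int):
    """Convert a sentence to a sequence of word ids (sketch implementation)."""
    # Lowercase the sentence, split on whitespace, and map each word to its id,
    # falling back to the <UNK> id for out-of-vocabulary words.
    unk_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]
```

With a toy vocabulary `{'<UNK>': 2, 'he': 3, 'saw': 4}`, `sentence_to_seq('He saw trucks', vocab)` yields `[3, 4, 2]`.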
feststelltaste/software-analytics
demos/20190425_JUGH_Kassel/Code-Hotspots.ipynb
gpl-3.0
from ozapfdis import git log = git.log_numstat_existing("../../../dropover/") log.head() """ Explanation: Code Hotspots Which files are changed how often? Input Read in the Git version control data. End of explanation """ java_prod = log[log['file'].str.contains("backend/src/main/java/")].copy() java_prod = java_prod[~java_prod['file'].str.contains("package-info.java")] java_prod.head() """ Explanation: Cleaning Evaluate production code only. End of explanation """ hotspots = java_prod['file'].value_counts() hotspots.head() """ Explanation: Aggregation Determine the hotspots. End of explanation """ hotspots.head(10).plot.barh(); """ Explanation: Visualization Show the top 10 hotspots. End of explanation """
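The ozapfdis helper above wraps a `git log --numstat` call; without it, the same hotspot count can be sketched with plain pandas on raw numstat output (the sample data here is an illustrative assumption):

```python
import io

import pandas as pd

# Sample of what `git log --no-renames --numstat --pretty=format:` emits per
# changed file: additions, deletions, and path, tab-separated (illustrative data).
raw = "10\t2\tbackend/src/main/java/Foo.java\n" \
      "3\t1\tbackend/src/main/java/Foo.java\n" \
      "1\t1\tbackend/src/main/java/Bar.java\n"

log = pd.read_csv(io.StringIO(raw), sep="\t",
                  names=["additions", "deletions", "file"])

# Each row is one change of one file in one commit, so counting rows per file
# gives the change frequency, i.e. the hotspot ranking.
hotspots = log["file"].value_counts()
print(hotspots.head())
```

`value_counts()` already sorts descending, so the head of the series is the hotspot ranking, just as in the notebook.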
inageorgescu/OpenStreeMap
P3_Open_street_map_20170416.ipynb
mit
from IPython.display import Image Image("Malaga_map.jpg") """ Explanation: OpenStreetMap Project Udacity Data Analyst Nanodegree Project 3: Data Wrangling with MongoDB Florina Georgescu - Airbus Operations SL Github: https://github.com/inageorgescu Map Area: Málaga, Spain https://www.openstreetmap.org/relation/340746 http://www.malaga.eu/ I will analyze Málaga and 3 nearby cities (Rincon de la Victoria, Alhaurin de la Torre and Torremolinos) that are located in the Province of Málaga. This is the place where I got married and usually visit in summer. I will use data wrangling techniques and query the database with MongoDB. The data has been downloaded with Mapzen. End of explanation """ Image("workflow.jpg") #Import all necessary modules as follows: #import flexible container object, designed to store hierarchical data structures in memory import xml.etree.cElementTree as ET #import function to supply missing values from collections import defaultdict #import Regular expression operations import re #import “pretty-print” arbitrary Python data structures in a form which can be used as input to the interpreter import pprint #import Codec registry and base classes import codecs #import JSON encoder and decoder import json import pymongo import os #Create a sample file in order to perform the tests and be able to visualize easily OSM_FILE = "Malaga.osm" # OSM file for Malaga SAMPLE_FILE = "sample.osm" k = 10 # Parameter: take every k-th top level element def get_element(osm_file, tags=('node', 'way', 'relation')): """Yield element if it is the right type of tag Reference: http://stackoverflow.com/questions/3095434/inserting-newlines-in-xml-file-generated-via-xml-etree-elementtree-in-python """ context = iter(ET.iterparse(osm_file, events=('start', 'end'))) _, root = next(context) for event, elem in context: if event == 'end' and elem.tag in tags: yield elem root.clear() with open(SAMPLE_FILE, 'wb') as output: output.write('<?xml version="1.0" 
encoding="UTF-8"?>\n') output.write('<osm>\n ') # Write every kth top level element for i, element in enumerate(get_element(OSM_FILE)): if i % k == 0: output.write(ET.tostring(element, encoding='utf-8')) output.write('</osm>') """ Explanation: In order to complete the project, the steps shown in the diagram below must be followed. End of explanation """ #Parse file and count number of unique element types def count_tags(filename): tags = {} for event, elem in ET.iterparse(filename): if elem.tag in tags: tags[elem.tag] += 1 else: tags[elem.tag] = 1 return tags view_tags = count_tags(OSM_FILE) # pprint.pprint(view_tags) from IPython.display import display from audit_atributes import audit, update_names, audit_pc, update_postal_codes OSM_FILE ='./Malaga.osm' CREATED = [ "version", "changeset", "timestamp", "user", "uid"] street_types = audit(OSM_FILE) # display(street_types) #Update street type updates = update_names(OSM_FILE) # display(updates) updates_pc = update_postal_codes(OSM_FILE) # display(updates_pc) """ Explanation: 1. Audit data The data that I will audit is an OSM file for a given map, in this case Málaga. OSM XML is a list of instances of data (nodes, ways, and relations) that represent points on the map. Ways contain node references to form either a polyline or a polygon on the map. Nodes and ways both contain child tag elements that represent key-value pairs of descriptive information about a given node or way. Problems Encountered in the Map After using a sample file of the Malaga area and running it against a provisional audit_atributes.py file, I noticed three main problems with the data, which I will tackle: 1.1. Street type In Spain the street type is written before the name of the street and should follow a standard naming. It is common to abbreviate it, so the same street type can appear in different ways, e.g. "Avenida" can be found as "AV", "AVDA", etc. 
This problem was programmatically solved by the function update_names() before creating the database. 1.2. Street without a street type There are streets without a street type. This problem can be solved only if you have a list of street names together with their types. That is not our case, so I will only list (with the function audit()) the streets that meet this condition, in order to identify that there is a problem in the map. 1.3 Postal codes There are some postal codes that might not have been correctly introduced, because they do not belong to the valid postal codes of the area or they are wrongly introduced. This problem was programmatically solved by the function update_postal_codes() before creating the database. End of explanation """ import os print "The downloaded OSM file is {} MB".format(os.path.getsize('Malaga.osm')/1.0e6) # convert from bytes to megabytes from pymongo import MongoClient db_name = 'openstreetmap1' # Connect to Mongo DB client = MongoClient('localhost:27017') # Database 'openstreetmap' will be created db = client[db_name] collection = db['Malaga'] """ Explanation: 2. 
Data Overview with MongoDB End of explanation """ # Number of documents documents = collection.find().count() display(documents) #Number of unique users users=len(collection.distinct('created.user')) display(users) #number of nodes & ways nodes=collection.find({'type':'node'}).count() ways=collection.find({'type':'way'}).count() display(nodes) display(ways) #display types and number of nodes & ways node_way = collection.aggregate([ {"$group" : {"_id" : "$type", "count" : {"$sum" : 1}}}]) pprint.pprint(list(node_way)) #top3 contributors to the map top3 = collection.aggregate([{ '$group' : {'_id' : '$created.user', 'count' : { '$sum' : 1}}}, { '$sort' : {'count' : -1}}, { '$limit' : 3 }]) display(list(top3)) #number of documents with street addresses addresses=collection.find({'address.street': {'$exists': 1}}).count() print(addresses) #top3 postal codes top10_pc = collection.aggregate([{ '$group' : {'_id' : '$address.postcode', 'count' : { '$sum' : 1}}}, { '$sort' : {'count' : -1}}, { '$limit' : 3 }]) display(list(top10_pc)) #list all postal codes postal_codes = collection.aggregate([ {"$match" : {"address.postcode" : {"$exists" : 1}}}, \ {"$group" : {"_id" : "$address.postcode", "count" : {"$sum" : 1}}}, \ {"$sort" : {"count" : -1}}]) # pprint.pprint(list(postal_codes)) #list top 10 street names in the database streets = collection.aggregate([ {"$match" : {"address.street" : {"$exists" : 1}}}, \ {"$group" : {"_id" : "$address.street", "count" : {"$sum" : 1}}}, \ {"$sort" : {"count" : -1}}, {"$limit":10}]) pprint.pprint(list(streets)) #Top 10 amenities top10_amenities = collection.aggregate([{"$match":{"amenity":{"$exists":1}}}, {"$group":{"_id":"$amenity","count":{"$sum":1}}}, {"$sort":{"count":-1}}, {"$limit":10}]) display(list(top10_amenities)) """ Explanation: There is code for running mongoimport directly from Python (see https://github.com/bestkao/data-wrangling-with-openstreetmap-and-mongodb), but as I do not have administrator rights on my 
computer, I had to enter the command manually. In order to create the database for the OpenStreetMap data, execute the following command in the console (cmd): mongoimport -h 127.0.0.1:27017 --db openstreetmap1 --collection Malaga --file C:\Users\c41237\Desktop\Udacity\P3\P3_OpenStreetMap\Malaga.osm.json End of explanation """ #Top 5 building types type_buildings = collection.aggregate([ {'$match': {'building': {'$exists': 1}}}, {'$group': { '_id': '$building','count': {'$sum': 1}}}, {'$sort': {'count': -1}}, {'$limit': 5} ]) pprint.pprint(list(type_buildings)) #Top 3 religious buildings religion_buildings = collection.aggregate([ {"$match" : {"amenity" : "place_of_worship"}}, \ {"$group" : {"_id" : {"religion" : "$religion", "denomination" : "$denomination"}, "count" : {"$sum" : 1}}}, \ {"$sort" : {"count" : -1}}, {'$limit': 3}]) pprint.pprint(list(religion_buildings)) #Top 10 leisures leisures = collection.aggregate([{"$match" : {"leisure" : {"$exists" : 1}}}, \ {"$group" : {"_id" : "$leisure", "count" : {"$sum" : 1}}}, \ {"$sort" : {"count" : -1}}, \ {"$limit" : 10}]) pprint.pprint(list(leisures)) #Top 10 universities universities = collection.aggregate([ {"$match" : {"amenity" : "university"}}, \ {"$group" : {"_id" : {"name" : "$name"}, "count" : {"$sum" : 1}}}, \ {"$sort" : {"count" : -1}}, {"$limit":10} ]) # pprint.pprint(list(universities)) #Top 10 cuisines restaurant = collection.aggregate([ {"$match":{"cuisine":{"$exists":1},"amenity":"restaurant"}}, {"$group":{"_id":"$cuisine","count":{"$sum":1}}}, {"$sort":{"count":-1}}, {"$limit":10} ]) # pprint.pprint(list(restaurant)) """ Explanation: 3. Additional data exploration using MongoDB queries End of explanation """ noaddresses=collection.find({'address.street': {'$exists': 0}}).count() display(noaddresses) """ Explanation: 4. Ideas for additional improvements The database needs to be updated with the missing street types, as there are a large number of streets without one. 
End of explanation """ Image("improvement_openstreetmap.jpg") """ Explanation: This can be done either by having the information first hand, entering the data one by one, or by using other map data such as https://www.google.es/maps or https://www.viamichelin.es/. If you go to one of these websites and type in the name (e.g. Gustavo Pittaluga), you will identify that it is a street of type "Calle". Another option would be to use the official governmental database of cadastral references, which can be found at this webpage: https://www1.sedecatastro.gob.es/OVCFrames.aspx?TIPO=CONSULTA and we have the help page available at hand: http://www.catastro.meh.es/ayuda/ayuda_rc.htm The expected benefits are: •the data comes from an official source (government), therefore reliable and up to date. Any modification can be identified right away. •the other two websites belong to the most used maps (Google and ViaMichelin) and you can also get a visual of their location on the map, street view, etc. As can be observed, the maps have to be cleaned of any input done by hand and not chosen from a list, so we might encounter some problems with this idea. The anticipated problems are: •high volume of information, therefore prone to typing errors •a street name can be the same in two or more cities, and the author should pay attention to choosing the right city when looking for the street. Anyhow, the task can be completed in this way with reliable information, and in case the information for the street type is not introduced correctly, we can always run the audit routine above to clean the data. End of explanation """
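As a rough illustration of the kind of normalization that update_names() in audit_atributes.py performs, here is a minimal sketch — the mapping entries and regex are assumptions for illustration, not the actual contents of that file:

```python
import re

# Illustrative abbreviation -> canonical street-type mapping (assumed entries)
STREET_TYPE_MAPPING = {
    "AV": "Avenida",
    "AVDA": "Avenida",
    "CTRA": "Carretera",
}

# Matches the leading token of a street name, e.g. "AVDA." in "AVDA. de Andalucia"
street_type_re = re.compile(r'^\S+')

def update_name(name, mapping=STREET_TYPE_MAPPING):
    """Replace an abbreviated leading street type with its full form."""
    match = street_type_re.search(name)
    if match:
        abbrev = match.group().rstrip('.').upper()
        if abbrev in mapping:
            return mapping[abbrev] + name[match.end():]
    return name

print(update_name("AVDA. de Andalucia"))  # Avenida de Andalucia
```

Names whose leading token is not in the mapping (e.g. "Calle Larios") pass through unchanged, which is the behavior the audit step relies on.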
ES-DOC/esdoc-jupyterhub
notebooks/nerc/cmip6/models/ukesm1-0-ll/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-ll', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: NERC Source ID: UKESM1-0-LL Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:26 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. 
Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. 
Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. 
Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt conservation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum conservation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1.
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Tropospheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. 
Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. 
citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
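The template cells above all follow the same fill-in pattern: a `DOC.set_id(...)` call selecting a CMIP6 property, then one or more `DOC.set_value(...)` calls replacing the `"value"` placeholder (for ENUM properties, with one of the listed Valid Choices). A minimal stand-alone sketch of that pattern, using a hypothetical `DocStub` class rather than the real pyesdoc `DOC` object (whose API is richer), might look like:

```python
# Hypothetical stand-in for the notebook's DOC object, illustrating the
# set_id / set_value fill-in pattern. The real pyesdoc DOC object differs.

class DocStub:
    """Collects property-id -> values pairs, optionally validating ENUM choices."""

    def __init__(self):
        self._current_id = None
        self.properties = {}

    def set_id(self, property_id):
        # Mirrors the "# PROPERTY ID - DO NOT EDIT !" cell header.
        self._current_id = property_id
        self.properties.setdefault(property_id, [])

    def set_value(self, value, valid_choices=None):
        # ENUM properties restrict values to the listed Valid Choices.
        if valid_choices is not None and value not in valid_choices:
            raise ValueError(f"{value!r} not in {valid_choices}")
        self.properties[self._current_id].append(value)


DOC = DocStub()
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# Cardinality 1.N: several provision codes may be recorded for one property.
DOC.set_value("C", valid_choices=["N/A", "M", "Y", "E", "ES", "C"])
```

The choice codes themselves ("M", "Y", "E", "ES", "C", ...) come from the CMIP6 specialization and are simply stored as strings.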
jerkos/cobrapy
documentation_builder/simulating.ipynb
lgpl-2.1
import pandas pandas.options.display.max_rows = 100 import cobra.test model = cobra.test.create_test_model("textbook") """ Explanation: Simulating with FBA Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions. End of explanation """ model.optimize() """ Explanation: Running FBA End of explanation """ model.solution.status model.solution.f """ Explanation: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes: f: the objective value status: the status from the linear programming solver x_dict: a dictionary of {reaction_id: flux_value} (also called "primal") x: a list for x_dict y_dict: a dictionary of {metabolite_id: dual_value}. y: a list for y_dict For example, after the last call to model.optimize(), the status should be 'optimal' if the solver returned no errors, and f should be the objective value. End of explanation """ model.objective """ Explanation: Changing the Objectives The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Currently in the model, there is only one objective reaction, with an objective coefficient of 1. End of explanation """ # change the objective to ATPM # the upper bound should be 1000 so we get the actual optimal value model.reactions.get_by_id("ATPM").upper_bound = 1000. model.objective = "ATPM" model.objective model.optimize() """ Explanation: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just its name), or a dict of {Reaction: objective_coefficient}. End of explanation """ model.reactions.get_by_id("ATPM").objective_coefficient = 0. model.reactions.get_by_id("Biomass_Ecoli_core").objective_coefficient = 1.
model.objective """ Explanation: The objective function can also be changed by setting Reaction.objective_coefficient directly. End of explanation """ fva_result = cobra.flux_analysis.flux_variability_analysis(model, model.reactions[:20]) pandas.DataFrame.from_dict(fva_result).T """ Explanation: Running FVA FBA will not always give a unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum. End of explanation """ fva_result = cobra.flux_analysis.flux_variability_analysis(model, model.reactions[:20], fraction_of_optimum=0.9) pandas.DataFrame.from_dict(fva_result).T """ Explanation: Setting parameter fraction_of_optimum=0.90 would give the flux ranges for reactions at 90% optimality. End of explanation """
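The motivation for FVA stated above — that many different flux assignments can reach the same objective value — can be illustrated without cobrapy on a hypothetical toy network: two parallel reactions `v1` and `v2` split an input flux, and any combination with `v1 + v2` equal to the optimum scores the same objective. A brute-force sketch over integer flux states:

```python
# Toy illustration (independent of cobrapy) of why FVA is needed: any
# integer pair (v1, v2) with v1 + v2 == optimum achieves the same
# objective, so each individual flux spans a whole range at the optimum.
# Hypothetical two-branch network, not the E. coli textbook model.

def toy_fva(optimum=10, lower=0, upper=10):
    """Enumerate integer flux states at the optimum; return per-reaction (min, max)."""
    states = [(v1, optimum - v1)
              for v1 in range(lower, upper + 1)
              if lower <= optimum - v1 <= upper]
    v1s = [s[0] for s in states]
    v2s = [s[1] for s in states]
    return {"v1": (min(v1s), max(v1s)), "v2": (min(v2s), max(v2s))}

ranges = toy_fva()
print(ranges)  # each branch can carry anywhere from 0 to 10 at the optimum
```

Real FVA solves two LPs (minimize and maximize) per reaction subject to the optimality constraint, rather than enumerating states, but the output has the same shape: a (minimum, maximum) pair per flux.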
lorisercole/thermocepstrum
examples/example_cepstrum_singlecomp_silica.ipynb
gpl-3.0
import numpy as np import scipy as sp import matplotlib.pyplot as plt try: import sportran as st except ImportError: from sys import path path.append('..') import sportran as st c = plt.rcParams['axes.prop_cycle'].by_key()['color'] %matplotlib notebook """ Explanation: Example 1: Cepstral Analysis of solid amorphous Silica This example shows the basic usage of sportran to compute the thermal conductivity of a classical MD simulation of a-SiO$_2$. End of explanation """ jfile = st.i_o.TableFile('./data/Silica.dat', group_vectors=True) jfile.read_datalines(start_step=0, NSTEPS=0, select_ckeys=['flux1']) """ Explanation: 1. Load trajectory Read the heat current from a simple column-formatted file. The desired columns are selected based on their header (e.g. with LAMMPS format). For other input formats see corresponding the example. End of explanation """ DT_FS = 1.0 # time step [fs] TEMPERATURE = 1065.705630 # temperature [K] VOLUME = 3130.431110818 # volume [A^3] j = st.HeatCurrent(jfile.data['flux1'], UNITS= 'metal', DT_FS=DT_FS, TEMPERATURE=TEMPERATURE, VOLUME=VOLUME) # trajectory f = plt.figure() ax = plt.plot(j.timeseries()/1000., j.traj); plt.xlim([0, 1.0]) plt.xlabel(r'$t$ [ps]') plt.ylabel(r'$J$ [eV A/ps]'); """ Explanation: 2. Heat Current Define a HeatCurrent from the trajectory, with the correct parameters. End of explanation """ # Periodogram with given filtering window width ax = j.plot_periodogram(PSD_FILTER_W=0.5, kappa_units=True) print(j.Nyquist_f_THz) plt.xlim([0, 50]) ax[0].set_ylim([0, 150]); ax[1].set_ylim([12, 18]); """ Explanation: Compute the Power Spectral Density and filter it for visualization. End of explanation """ FSTAR_THZ = 28.0 jf,ax = j.resample(fstar_THz=FSTAR_THZ, plot=True, freq_units='thz') plt.xlim([0, 80]) ax[1].set_ylim([12,18]); ax = jf.plot_periodogram(PSD_FILTER_W=0.1) ax[1].set_ylim([12, 18]); """ Explanation: 3. Resampling If the Nyquist frequency is very high (i.e. 
the sampling time is small), such that the log-spectrum goes to low values, you may want to resample your time series to obtain a maximum frequency $f^*$. Before performing that operation, the time series is automatically filtered to reduce the amount of aliasing introduced. Ideally you do not want to go too low in $f^*$. In an intermediate region the results should not change. To perform resampling you can choose the resampling frequency $f^*$ or the resampling step (TSKIP). If you choose $f^*$, the code will try to choose the closest value allowed. The resulting PSD is visualized to ensure that the low-frequency region is not affected. End of explanation """ jf.cepstral_analysis() # Cepstral Coefficients print('c_k = ', jf.dct.logpsdK) ax = jf.plot_ck() ax.set_xlim([0, 50]) ax.set_ylim([-0.5, 0.5]) ax.grid(); # AIC function f = plt.figure() plt.plot(jf.dct.aic, '.-', c=c[0]) plt.xlim([0, 200]) plt.ylim([2800, 3000]); print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin)) print('AIC_min = {:f}'.format(jf.dct.aic_min)) """ Explanation: 4. Cepstral Analysis Perform Cepstral Analysis. The code will do the following: 1. the parameters describing the theoretical distribution of the PSD are computed 2. the Cepstral coefficients are computed by Fourier transforming the log(PSD) 3. the Akaike Information Criterion is applied 4. 
the resulting $\kappa$ is returned End of explanation """ # L_0 as a function of cutoff K ax = jf.plot_L0_Pstar() ax.set_xlim([0, 200]) ax.set_ylim([12.5, 14.5]); print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin)) print('AIC_min = {:f}'.format(jf.dct.aic_min)) # kappa as a function of cutoff K ax = jf.plot_kappa_Pstar() ax.set_xlim([0,200]) ax.set_ylim([0, 5.0]); print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin)) print('AIC_min = {:f}'.format(jf.dct.aic_min)) """ Explanation: Plot the thermal conductivity $\kappa$ as a function of the cutoff $P^*$ End of explanation """ results = jf.cepstral_log print(results) """ Explanation: Print the results :) End of explanation """ # filtered log-PSD ax = j.plot_periodogram(0.5, kappa_units=True) ax = jf.plot_periodogram(0.5, axes=ax, kappa_units=True) ax = jf.plot_cepstral_spectrum(axes=ax, kappa_units=True) ax[0].axvline(x = jf.Nyquist_f_THz, ls='--', c='r') ax[1].axvline(x = jf.Nyquist_f_THz, ls='--', c='r') plt.xlim([0., 50.]) ax[1].set_ylim([12,18]) ax[0].legend(['original', 'resampled', 'cepstrum-filtered']) ax[1].legend(['original', 'resampled', 'cepstrum-filtered']); """ Explanation: You can now visualize the filtered PSD... End of explanation """
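Under the hood, the cepstral coefficients $c_k$ used above are simply the inverse Fourier transform of the log-periodogram. A minimal numpy sketch on a synthetic white-noise series (independent of sportran; all names here are illustrative): for white noise the log-PSD is flat, so essentially all the information concentrates in $c_0$, whose expectation is biased by the Euler-Mascheroni constant ($\approx -0.577$ for a single-component log-periodogram).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                    # synthetic white-noise "flux"
psd = np.abs(np.fft.rfft(x))**2 / len(x)     # raw periodogram
logpsd = np.log(psd[1:-1])                   # drop the DC and Nyquist bins
c = np.fft.irfft(logpsd)                     # cepstral coefficients c_k

print(float(c[0]))                           # close to -0.577 for unit-variance noise
print(np.mean(np.abs(c[1:20])) < abs(c[0]))  # higher coefficients are just noise
```

This is why the AIC cutoff works: past a small $P^*$ the remaining coefficients carry only statistical noise.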
Bihaqo/t3f
docs/tutorials/tensor_completion.ipynb
mit
import numpy as np import matplotlib.pyplot as plt # Import TF 2. %tensorflow_version 2.x import tensorflow as tf # Fix seed so that the results are reproducible. tf.random.set_seed(0) np.random.seed(0) try: import t3f except ImportError: # Install T3F if it's not already installed. !git clone https://github.com/Bihaqo/t3f.git !cd t3f; pip install . import t3f """ Explanation: Tensor completion (example of minimizing a loss w.r.t. TT-tensor) Open this page in an interactive mode via Google Colaboratory. In this example we will see how we can do tensor completion with t3f, i.e. observe a fraction of values in a tensor and recover the rest by assuming that the original tensor has low TT-rank. Mathematically it means that we have a binary mask $P$ and a ground truth tensor $A$, but we observe only a noisy and sparsified version of $A$: $P \odot (\hat{A})$, where $\odot$ is the elementwise product (applying the binary mask) and $\hat{A} = A + \text{noise}$. In this case our task reduces to the following optimization problem: $$ \begin{aligned} & \underset{X}{\text{minimize}} & & \|P \odot (X - \hat{A})\|_F^2 \\ & \text{subject to} & & \text{tt\_rank}(X) \leq r_0 \end{aligned} $$ End of explanation """ shape = (3, 4, 4, 5, 7, 5) # Generate ground truth tensor A. To make sure that it has low TT-rank, # let's generate a random tt-rank 5 tensor and apply t3f.full to it to convert to actual tensor. ground_truth = t3f.full(t3f.random_tensor(shape, tt_rank=5)) # Make a (non trainable) variable out of ground truth. Otherwise, it will be randomly regenerated on each sess.run. ground_truth = tf.Variable(ground_truth, trainable=False) noise = 1e-2 * tf.Variable(tf.random.normal(shape), trainable=False) noisy_ground_truth = ground_truth + noise # Observe 25% of the tensor values. 
sparsity_mask = tf.cast(tf.random.uniform(shape) <= 0.25, tf.float32) sparsity_mask = tf.Variable(sparsity_mask, trainable=False) sparse_observation = noisy_ground_truth * sparsity_mask """ Explanation: Generating problem instance Let's generate a random tensor $A$, noise, and mask $P$. End of explanation """ observed_total = tf.reduce_sum(sparsity_mask) total = np.prod(shape) initialization = t3f.random_tensor(shape, tt_rank=5) estimated = t3f.get_variable('estimated', initializer=initialization) """ Explanation: Initialize the variable and compute the loss End of explanation """ optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) def step(): with tf.GradientTape() as tape: # Loss is MSE between the estimated and ground-truth tensor as computed in the observed cells. loss = 1.0 / observed_total * tf.reduce_sum((sparsity_mask * t3f.full(estimated) - sparse_observation)**2) gradients = tape.gradient(loss, estimated.tt_cores) optimizer.apply_gradients(zip(gradients, estimated.tt_cores)) # Test loss is MSE between the estimated tensor and full (and not noisy) ground-truth tensor A. test_loss = 1.0 / total * tf.reduce_sum((t3f.full(estimated) - ground_truth)**2) return loss, test_loss train_loss_hist = [] test_loss_hist = [] for i in range(5000): tr_loss_v, test_loss_v = step() tr_loss_v, test_loss_v = tr_loss_v.numpy(), test_loss_v.numpy() train_loss_hist.append(tr_loss_v) test_loss_hist.append(test_loss_v) if i % 1000 == 0: print(i, tr_loss_v, test_loss_v) plt.loglog(train_loss_hist, label='train') plt.loglog(test_loss_hist, label='test') plt.xlabel('Iteration') plt.ylabel('MSE Loss value') plt.title('SGD completion') plt.legend() """ Explanation: SGD optimization The simplest way to solve the optimization problem is Stochastic Gradient Descent: let TensorFlow differentiate the loss w.r.t. the factors (cores) of the TensorTrain decomposition of the estimated tensor and minimize the loss with your favourite SGD variation. 
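For reference, the gradient that SGD follows here has a closed form: with loss $L(X) = \|P \odot (X - \hat{A})\|_F^2 / N_{obs}$ we get $\nabla L = 2\, P \odot (X - \hat{A}) / N_{obs}$, so unobserved entries receive zero gradient (the TT cores then receive it through the chain rule). A small numpy sanity check with no TT structure, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
P = (rng.random((4, 5)) < 0.5).astype(float)   # binary mask of observed cells
A_hat = rng.normal(size=(4, 5))                # noisy observed tensor
X = rng.normal(size=(4, 5))                    # current dense estimate

n_obs = P.sum()
loss = ((P * (X - A_hat))**2).sum() / n_obs
grad = 2.0 * P * (X - A_hat) / n_obs           # analytic gradient of the masked MSE

X_next = X - 0.1 * grad                        # one plain gradient step
loss_next = ((P * (X_next - A_hat))**2).sum() / n_obs
print(loss_next < loss)                        # the step decreases the loss
```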
End of explanation """ shape = (10, 10, 10, 10, 10, 10, 10) total_observed = np.prod(shape) # Since now the tensor is too large to work with explicitly, # we don't want to generate a binary mask, # but we would rather generate indices of observed cells. ratio = 0.001 # Let us simply randomly pick some indices (it may happen # that we will get duplicates but the probability of that # is 10^(-14) so let's not bother for now). num_observed = int(ratio * total_observed) observation_idx = np.random.randint(0, 10, size=(num_observed, len(shape))) # and let us generate some values of the tensor to be approximated observations = np.random.randn(num_observed) # Our strategy is to feed the observation_idx # into the tensor in the Tensor Train format and compute MSE between # the obtained values and the desired values initialization = t3f.random_tensor(shape, tt_rank=16) estimated = t3f.get_variable('estimated', initializer=initialization) # To collect the values of a TT tensor (without forming the full tensor) # we use the function t3f.gather_nd def loss(): estimated_vals = t3f.gather_nd(estimated, observation_idx) return tf.reduce_mean((estimated_vals - observations) ** 2) optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) def step(): with tf.GradientTape() as tape: loss_value = loss() gradients = tape.gradient(loss_value, estimated.tt_cores) optimizer.apply_gradients(zip(gradients, estimated.tt_cores)) return loss_value """ Explanation: Speeding it up The simple solution we have so far assumes that the loss is computed by materializing the full estimated tensor and then zeroing out unobserved elements. If the tensors are really large and the fraction of observed values is small (e.g. less than 1%), it may be much more efficient to directly work only with the observed elements. End of explanation """ # In TF eager mode you're supposed to first implement and debug # a function, and then compile it to make it faster. 
faster_step = tf.function(step) loss_hist = [] for i in range(2000): loss_v = faster_step().numpy() loss_hist.append(loss_v) if i % 100 == 0: print(i, loss_v) plt.loglog(loss_hist) plt.xlabel('Iteration') plt.ylabel('MSE Loss value') plt.title('smarter SGD completion') plt.legend() print(t3f.gather_nd(estimated, observation_idx)) print(observations) """ Explanation: Compiling the function to additionally speed things up End of explanation """
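Conceptually, the reason gathering beats materialization is that one entry of a TT tensor is a product of small matrices taken from the cores, $O(d \cdot r^2)$ work per entry, instead of touching all $n^d$ values. A numpy sketch of that contraction (an illustration of the idea behind t3f.gather_nd, not t3f's actual implementation):

```python
import numpy as np

# Minimal TT format: a list of cores G_k with shape (r_{k-1}, n_k, r_k).
rng = np.random.default_rng(0)
shape, r = (10, 10, 10, 10), 3
ranks = [1, r, r, r, 1]
cores = [rng.normal(size=(ranks[k], shape[k], ranks[k + 1]))
         for k in range(len(shape))]

def full_tensor(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([out.ndim - 1], [0]))
    return out.reshape([G.shape[1] for G in cores])

def gather_nd(cores, indices):
    # evaluate the TT tensor only at the requested multi-indices
    vals = []
    for idx in indices:
        v = cores[0][:, idx[0], :]                 # shape (1, r)
        for k in range(1, len(cores)):
            v = v @ cores[k][:, idx[k], :]         # chain of small matrix products
        vals.append(v[0, 0])
    return np.array(vals)

indices = [(0, 1, 2, 3), (9, 9, 9, 9)]
dense = full_tensor(cores)
print(np.allclose(gather_nd(cores, indices), [dense[i] for i in indices]))  # True
```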
arturops/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. 
End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 17 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalized data """ # TODO: Implement Function np_x = np.array(x) norm_x = (np_x)/255 # current normalization only divides by max value # ANOTHER METHOD with range from -1 to 1 # (x - mean)/std = in an image mean=128 (expected) #and std=128 (expected) since we want to center our distribution in 0. If 255 colors, approx -128..0..128 return norm_x """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. 
The return object should be the same shape as x. End of explanation """ from sklearn import preprocessing def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function encode = preprocessing.LabelBinarizer() encode.fit([0,1,2,3,4,5,6,7,8,9]) #possible values of labels to be encoded in vectors of 0's and 1's #show the encoding that corresponds to each label #print (encode.classes_) #print (encode.transform([9,8,7,6,5,4,3,2,1,0])) labels_one_hot_encode = encode.transform(x) # encodes the labels with 0,1 values based on labelID [0,9] return labels_one_hot_encode """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode) """ Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """
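LabelBinarizer does the job, but for the fixed label set 0-9 the same encoding is a couple of lines in plain numpy. A hypothetical equivalent helper (not part of the project code) that should produce the identical 0/1 matrix:

```python
import numpy as np

def one_hot_encode_np(labels, n_classes=10):
    # row i gets a 1 in the column given by labels[i], zeros elsewhere
    labels = np.asarray(labels)
    encoded = np.zeros((labels.shape[0], n_classes))
    encoded[np.arange(labels.shape[0]), labels] = 1
    return encoded

print(one_hot_encode_np([0, 9, 3]))
```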
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper import numpy as np # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function batch_size = None return tf.placeholder(tf.float32, shape=([batch_size, image_shape[0], image_shape[1], image_shape[2]]), name="x") def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function batch_size = None return tf.placeholder(tf.float32, shape=([batch_size, n_classes]), name="y") def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. 
""" # TODO: Implement Function return tf.placeholder(tf.float32, shape=None, name="keep_prob") #------------------Added the Batch Normalization option to the network-------------------------------- def neural_net_batch_norm_mode_input(use_batch_norm, batch_norm_mode): """ Return a Tensor for batch normalization Tensor 'use_batch_norm': Batch Normalization on/off (True = on, False = off) Tensor 'batch_norm_mode': Batch Normalization mode (True = net is training, False = net in test mode) : return: Tensor for batch normalization mode """ return tf.Variable(use_batch_norm, name ="use_batch_norm"), tf.placeholder(batch_norm_mode, name="batch_norm_mode") #------------------------------------------------------------------------------------------------------ """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) """ Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. 
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allows for a dynamic size. End of explanation """ def batch_norm_wrapper(inputs, is_training, is_conv_layer=True, decay=0.999): """ Function that implements batch normalization. Stores the population mean and variance as tf.Variables, and decides whether to use the batch statistics or the population statistics for normalization. inputs: the dataset to learn(train)/predict(test) * weights of layer that uses this batch normalization, also used to get num_outputs is_training: set to True to learn the population mean and variance during training. 
False to use in test dataset decay: is a moving average decay rate to estimate the population mean and variance during training Batch_Norm = Gamma * X + Beta <=> BN(x*weights + bias) References: https://gist.github.com/tomokishii/0ce3bdac1588b5cca9fa5fbdf6e1c412 http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow https://r2rt.com/implementing-batch-normalization-in-tensorflow.html """ epsilon = 1e-3 scale = tf.Variable(tf.ones([inputs.get_shape()[-1]])) #gamma beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]])) pop_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable = False) #False means we will train it rather than optimizer pop_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable = False) #False means we will train it rather than optimizer if is_training: # update/compute the population mean and variance of our total training dataset split into batches # do this to know the value to use for the test and predictions if is_conv_layer: batch_mean, batch_var = tf.nn.moments(inputs,[0,1,2]) #conv layer needs 3 planes, dimensions -> [height,depth,colors] else: batch_mean, batch_var = tf.nn.moments(inputs,[0]) #fully connected layer only one plane (flat layer) train_mean = tf.assign(pop_mean, pop_mean*decay + batch_mean*(1 - decay)) train_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay)) with tf.control_dependencies([train_mean, train_var]): return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, scale, epsilon) else: # when in test mode we need to use the population mean and var computed/learned from training return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, epsilon) """ Explanation: Batch Normalization (Added) Implemented a batch normalization wrapper that during training accumulates and computes the batches population mean and variance. Thus, in test it only uses the final computed mean and variance for the predictions in the neural net. 
Parameter is_training comes from a tf.placeholder in function neural_net_batch_norm_mode_input(). This parameter was added to the conv2d_maxpool() and fully_conn() functions to be able to define if the network is training or predicting (test), so that batch normalization performs accordingly [remember that batch norm works differently for training than for predicting (test)]. Additional inputs were added to the feed_dict of train_neural_net() for conv_net so that the batch normalization mode can be turned on/off or set to training/test mode. End of explanation """ #def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): #original function definition def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, is_training=True, batch_norm_on=False): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernel size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernel size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function #weights filter_height = conv_ksize[0] filter_width = conv_ksize[1] color_channels = x_tensor.get_shape().as_list()[-1] # to read last value of list that contains the number of color channels # truncated normal std dev initialization of weights weights = tf.Variable(tf.truncated_normal([filter_height,filter_width,color_channels,conv_num_outputs], mean=0.0, stddev=0.1)) # Xavier Initialization - needs different names for vars #weights = tf.get_variable("w_conv",shape=[filter_height,filter_width,color_channels,conv_num_outputs],initializer=tf.contrib.layers.xavier_initializer()) #bias bias = tf.Variable(tf.zeros(conv_num_outputs)) #Convolution #batch and channel are commonly 
set to 1 conv_batch_size = 1 conv_channel_size = 1 conv_strides4D = [conv_batch_size, conv_strides[0], conv_strides[1], conv_channel_size] conv_layer = tf.nn.conv2d(x_tensor, weights, conv_strides4D, padding='SAME') #Add Bias conv_layer = tf.nn.bias_add(conv_layer, bias) #Non-linear activation conv_layer = tf.nn.relu(conv_layer) #----------------------- Added the Batch Normalization after ReLU -------------------- if batch_norm_on: conv_layer = batch_norm_wrapper(conv_layer, is_training, True) #true means this is a conv_layer that uses Batch norm #conv_layer = tf.cond(is_training, lambda: batch_norm_wrapper(conv_layer, 1), lambda: conv_layer) # Apparently doing batch norm after ReLU works well, but you might want to do ReLU again and then pooling conv_layer = tf.nn.relu(conv_layer) #------------------------------------------------------------------------------------- #Max pooling # batch and channel are commonly set to 1 pool_batch_size = 1 pool_channel_size = 1 pool_ksize4D = [pool_batch_size, pool_ksize[0], pool_ksize[1], pool_channel_size] pool_strides4D = [1, pool_strides[0], pool_strides[1], 1] conv_layer = tf.nn.max_pool(conv_layer, pool_ksize4D, pool_strides4D, padding='SAME') return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. 
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation """ def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function # First way to do this # Need to convert the get_shape() result to int() since at this point is a class Dimension object flat_image = np.prod(x_tensor.get_shape()[1:]) x_tensor_flatten = tf.reshape(x_tensor,[-1, int(flat_image)]) # Second way to do it # No Need to convert the get_shape().as_list() result since it is already an int #flat_image2 = np.prod(x_tensor.get_shape().as_list()[1:]) #x_tensor_flatten2 = tf.reshape(x_tensor,[-1, flat_image]) return x_tensor_flatten """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten) """ Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ #def fully_conn(x_tensor, num_outputs): #original function definition def fully_conn(x_tensor, num_outputs, is_training=True, batch_norm_on=False): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. 
""" # TODO: Implement Function batch_size, num_inputs = x_tensor.get_shape().as_list() # truncated normal std dev initialization of weights weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], mean=0.0, stddev=0.1)) # Xavier Initialization #weights = tf.get_variable("w_fc", shape=[filter_height,filter_width,color_channels,conv_num_outputs],initializer=tf.contrib.layers.xavier_initializer()) bias = tf.Variable(tf.zeros(num_outputs)) #------------------------------------------------------------------- #Batch normalization - Attempted to do it, but better not have dropout that is a unit test here, so not using it # moreover the implementation requires an extended class that creates the model to receive a flag indicating # if the model is training or in test (batch normalization takes a different behavior in each) #epsilon = 1e-3 # epsilon for Batch Normalization - avoids div with 0 #z_BN = tf.matmul(x_tensor,weights) #batch_mean, batch_var = tf.nn.moments(z_BN,[0]) #scale = tf.Variable(tf.ones(num_outputs)) #beta = tf.Variable(tf.zeros(num_outputs)) #fc_BN = tf.nn.batch_normalization(z_BN, batch_mean, batch_var, beta, scale, epsilon) #fc = tf.nn.relu(fc_BN) #------------------------------------------------------------------- # Batch Norm wrapper if batch_norm_on: z_BN = tf.matmul(x_tensor,weights) fc_BN = batch_norm_wrapper(z_BN, is_training, is_conv_layer=False) fc = tf.nn.relu(fc_BN) else: #------------------------------------------------------------------- #Normal FC - no BatchNormalization fc = tf.matmul(x_tensor, weights) + bias fc = tf.nn.relu(fc) return fc """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. 
For more of a challenge, only use other TensorFlow packages. End of explanation """ def output(x_tensor, num_outputs): """ Apply an output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of outputs that the new tensor should have. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function batch_size, num_inputs = x_tensor.get_shape().as_list() # truncated normal std dev initialization of weights weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], mean=0.0, stddev=0.1)) # Xavier Initialization #weights = tf.get_variable("w_out", shape=[filter_height,filter_width,color_channels,conv_num_outputs],initializer=tf.contrib.layers.xavier_initializer()) bias = tf.Variable(tf.zeros(num_outputs)) # Normal Linear prediction - no BN linear_prediction = tf.matmul(x_tensor, weights) + bias #linear activation #Batch normalization #epsilon = 1e-3 # epsilon for Batch Normalization - avoids div with 0 #z_BN = tf.matmul(x_tensor,weights) #batch_mean, batch_var = tf.nn.moments(z_BN,[0]) #scale = tf.Variable(tf.ones(num_outputs)) #beta = tf.Variable(tf.zeros(num_outputs)) #linear_prediction = tf.nn.batch_normalization(z_BN, batch_mean, batch_var, beta, scale, epsilon) return linear_prediction """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. 
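The reason the output layer stays linear is that the loss is expected to fold softmax and cross entropy into one numerically stable operation on the raw logits, as tf.nn.softmax_cross_entropy_with_logits does. A numpy sketch of that fused computation (illustrative only, not the TF kernel):

```python
import numpy as np

def softmax_cross_entropy_with_logits(logits, onehot):
    # subtracting the row-wise max avoids overflow in exp() for large logits
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(onehot * log_softmax).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
onehot = np.array([[1.0, 0.0, 0.0]])
print(softmax_cross_entropy_with_logits(logits, onehot))  # about [0.417]
```

Applying a softmax inside the output layer and then taking log in the loss would compute the same value, but with a needless risk of overflow and loss of precision.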
End of explanation """ #---------------- Added to try to implement Batch Norm ----------------------------- # issue with tensors inside the conv2d_maxpool and fully_conn being apparently # in different graphs than the feeded x tensor def run_conv_layers(x, is_training, batch_norm): conv_num_outputs = [16,40,60] #[36,70,100] conv_ksize = [[5,5],[5,5],[5,5]] #[[3,3],[3,3],[1,1]] conv_strides = [1,1] pool_ksize = [[2,2],[2,2],[2,2]] pool_strides = [2,2] x_conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides, pool_ksize[0], pool_strides, is_training, batch_norm) x_conv = conv2d_maxpool(x_conv, conv_num_outputs[1], conv_ksize[1], conv_strides, pool_ksize[1], pool_strides, is_training, batch_norm) x_conv = conv2d_maxpool(x_conv, conv_num_outputs[2], conv_ksize[2], conv_strides, pool_ksize[2], pool_strides, is_training, batch_norm) return x_conv def run_fc_layer(x_flat, keep_prob, is_training, batch_norm): x_fc = tf.nn.dropout(fully_conn(x_flat, 1300, is_training, batch_norm), keep_prob) #1320 #320,#120 x_fc = tf.nn.dropout(fully_conn(x_fc, 685, is_training, batch_norm), keep_prob) #685 #185,#85 x_fc = tf.nn.dropout(fully_conn(x_fc, 255, is_training, batch_norm), keep_prob) #255 #55,#25 return x_fc #------------------------------------------------------------------------------------- #def conv_net(x, keep_prob): #original function definition #tf.constant only added to pass the unit test cases, this should be tf.Variable def conv_net(x, keep_prob, is_training=tf.constant(True,tf.bool), batch_norm=tf.constant(False,tf.bool)): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) # (x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_num_outputs = [16,40,60] #[36,70,100] conv_ksize = [[5,5],[5,5],[5,5]] #[[3,3],[3,3],[1,1]] conv_strides = [1,1] pool_ksize = [[2,2],[2,2],[2,2]] pool_strides = [2,2] #function before batch norm #x_conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides, pool_ksize[0], pool_strides) #x_conv = conv2d_maxpool(x_conv, conv_num_outputs[1], conv_ksize[1], conv_strides, pool_ksize[1], pool_strides) #x_conv = conv2d_maxpool(x_conv, conv_num_outputs[2], conv_ksize[2], conv_strides, pool_ksize[2], pool_strides) #---------------------------- Added later to try to implement Batch Norm --------------------------------------- #Hardcoded variables that drive the batch Norm - UNFORTUNATELY cannot change values for the Test cases where is_training should be false batch_norm_bool = False is_training_bool = True # Unsuccessful tf.cond to use the run_conv_layer since TF complains that the weights Tensor inside # conv2d_maxpool must be from the same graph/group as the tensor passed x/x_tensor, same issue in fc layer #x_conv = tf.cond(is_training, lambda:run_conv_layers(x,True,batch_norm_bool), lambda:run_conv_layers(x,False,batch_norm_bool)) x_conv = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides, pool_ksize[0], pool_strides, is_training_bool, batch_norm_bool) x_conv = conv2d_maxpool(x_conv, conv_num_outputs[1], conv_ksize[1], conv_strides, pool_ksize[1], pool_strides, is_training_bool, batch_norm_bool) x_conv = conv2d_maxpool(x_conv, conv_num_outputs[2], conv_ksize[2], conv_strides, pool_ksize[2], pool_strides, is_training_bool, batch_norm_bool) 
#---------------------------------------------------------------------------------------------------------------- # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) x_flat = flatten(x_conv) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) # before batch norm #x_fc = tf.nn.dropout(fully_conn(x_flat, 1300), keep_prob) #1320 #320,#120 #x_fc = tf.nn.dropout(fully_conn(x_fc, 685), keep_prob) #685 #185,#85 #x_fc = tf.nn.dropout(fully_conn(x_fc, 255), keep_prob) #255 #55,#25 #---------------------------- Added to try to implement Batch Norm --------------------------------------- # Unsuccessful tf.cond to use the run_fc_layer since TF complains that the weights Tensor inside # fully_conn must be from the same graph/group as the tensor passed x_flat/x_fc #x_fc = tf.cond(is_training, lambda: run_fc_layer(x_flat, keep_prob, True,batch_norm_bool), lambda: run_fc_layer(x_flat, keep_prob,False,batch_norm_bool) ) x_fc = tf.nn.dropout(fully_conn(x_flat, 120, is_training_bool, batch_norm_bool), keep_prob) #1320 #320,#120 x_fc = tf.nn.dropout(fully_conn(x_fc, 85, is_training_bool, batch_norm_bool), keep_prob) #685 #185,#85 x_fc = tf.nn.dropout(fully_conn(x_fc, 25, is_training_bool, batch_norm_bool), keep_prob) #255 #55,#25 #---------------------------------------------------------------------------------------------------------------- # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) num_outputs_pred = 10 x_predict =tf.nn.dropout(output(x_fc, num_outputs_pred), keep_prob) # TODO: return output return x_predict """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

#----------------- Added the Batch Normalization parameters ------------
# currently not used; missing connection between the tf.Variable and the core of the network
batch_norm_on, batch_norm_mode = neural_net_batch_norm_mode_input(True, True)
#---------------------------------------------------------------------

# Model
#logits = conv_net(x, keep_prob)  # original call to conv_net
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
logits = conv_net(x, keep_prob, batch_norm_mode, batch_norm_on)
#---------------------------------------------------------------------------------------------------------

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
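As a quick sanity check on the shapes this architecture produces (a hypothetical helper, not part of the project code): each 2x2, stride-2 max pool halves the spatial size, so three pool layers take the 32x32 input down to 4x4 before flattening:

```python
# Spatial side length after repeated stride-2 pooling (integer division, 'SAME' padding).
def pooled_size(side, num_pools, pool_stride=2):
    for _ in range(num_pools):
        side = side // pool_stride
    return side

side = pooled_size(32, num_pools=3)   # 32 -> 16 -> 8 -> 4
flat_len = side * side * 60           # 60 filters in the last conv layer above
print(side, flat_len)                 # 4 960
```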
End of explanation """ #def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): #original function declaration def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch, is_training=True, use_batch_norm=False): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function # train_feed_dict ={x: feature_batch, y: label_batch, keep_prob: keep_probability} #original train dict #---------------------------- Added to try to implement Batch Norm --------------------------------------- train_feed_dict ={x: feature_batch, y: label_batch, keep_prob: keep_probability, batch_norm_mode: is_training, batch_norm_on: use_batch_norm} #--------------------------------------------------------------------------------------------------------- session.run(optimizer, feed_dict=train_feed_dict) #pass """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. 
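Stripped of the TensorFlow session machinery, a "single optimization" is one gradient update on one batch. A toy NumPy version of that idea (all names and values here are hypothetical):

```python
import numpy as np

# One gradient-descent step on mean-squared error for a linear model y ~ x @ w.
def train_step(w, x_batch, y_batch, lr=0.1):
    pred = x_batch @ w
    grad = 2 * x_batch.T @ (pred - y_batch) / len(x_batch)
    return w - lr * grad

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))              # one batch of 32 samples
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w
w = np.zeros(3)
loss_before = np.mean((x @ w - y) ** 2)
for _ in range(100):                      # repeated single steps, as the training loop does
    w = train_step(w, x, y)
loss_after = np.mean((x @ w - y) ** 2)
print(loss_after < loss_before)           # True
```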
End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function #pass # train_feed_dict = {x: feature_batch, y: label_batch, keep_prob: 0.75} # original train dict # val_feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0} # original val dict #---------------------------- Added to try to implement Batch Norm --------------------------------------- train_feed_dict = {x: feature_batch, y: label_batch, keep_prob: 0.75, batch_norm_mode: True, batch_norm_on: False} val_feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0, batch_norm_mode: False, batch_norm_on: False} #--------------------------------------------------------------------------------------------------------- validation_cost = session.run(cost, feed_dict=val_feed_dict) validation_accuracy = session.run(accuracy, feed_dict=val_feed_dict) train_accuracy = session.run(accuracy, feed_dict=train_feed_dict) print('Train_acc: {:8.14f} | Val_acc: {:8.14f} | loss: {:8.14f}'.format(train_accuracy, validation_accuracy, validation_cost)) """ Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. 
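The accuracy that print_stats reports reduces to comparing the argmax of the logits with the argmax of the one-hot labels; a small NumPy sketch with made-up numbers:

```python
import numpy as np

# Fraction of samples whose predicted class matches the one-hot label.
def batch_accuracy(logits, one_hot_labels):
    pred = np.argmax(logits, axis=1)
    truth = np.argmax(one_hot_labels, axis=1)
    return np.mean(pred == truth)

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([[0, 1], [1, 0], [1, 0], [1, 0]])
print(batch_accuracy(logits, labels))  # 0.75
```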
End of explanation
"""
# TODO: Tune Parameters
epochs = 25
batch_size = 64
keep_probability = 0.75  # test with 0.75 seemed better
#---------------------------- Added to try to implement Batch Norm ---------------------------------------
# ACTUAL CONTROL FOR BATCH NORM IS INSIDE conv_net() -> batch_norm_bool, is_training_bool variables
# Tried to add the Batch Normalization parameters, but couldn't make the connection from the inside of
# conv_net to the rest of the functions; everything is laid out to work except for the step in which
# a tf.bool Tensor needs to decide (tried to use tf.cond) whether to use batch norm or not
batch_norm_is_training = True
use_batch_norm = False
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
 * 64
 * 128
 * 256
 * ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
""" DON'T MODIFY ANYTHING IN THIS CELL """
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            # ORIGINAL call to train_neural_network
            #train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            #---------------------------- Added to try to implement Batch Norm ---------------------------------------
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels,
                                 batch_norm_is_training, use_batch_norm)
            #---------------------------------------------------------------------------------------------------------
        print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
        print_stats(sess,
batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): # ORIGINAL call to train_neural_network #train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) #---------------------------- Added to try to implement Batch Norm --------------------------------------- train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels, batch_norm_is_training, use_batch_norm) #--------------------------------------------------------------------------------------------------------- print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
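Looping over the dataset in fixed-size batches, as load_preprocess_training_batch does for each CIFAR-10 batch file, follows the standard slicing pattern; a self-contained sketch with dummy arrays:

```python
import numpy as np

# Yield (features, labels) slices of at most batch_size items each.
def batch_iter(features, labels, batch_size):
    for start in range(0, len(features), batch_size):
        yield features[start:start + batch_size], labels[start:start + batch_size]

x = np.zeros((100, 32, 32, 3))   # dummy images
y = np.zeros((100, 10))          # dummy one-hot labels
batches = list(batch_iter(x, y, batch_size=64))
print(len(batches), batches[0][0].shape[0], batches[1][0].shape[0])  # 2 64 36
```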
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. 
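The tf.nn.top_k(tf.nn.softmax(...)) call inside test_model picks the top_n_predictions most probable classes per sample; the same computation in NumPy, for reference:

```python
import numpy as np

# Numerically stable softmax followed by top-k (probabilities and class indices).
def softmax_top_k(logits, k):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    idx = np.argsort(probs, axis=1)[:, ::-1][:, :k]
    return np.take_along_axis(probs, idx, axis=1), idx

logits = np.array([[2.0, 1.0, 0.1, -1.0]])
values, indices = softmax_top_k(logits, k=3)
print(indices[0])  # [0 1 2]
```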
Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
planetlabs/notebooks
jupyter-notebooks/forest-monitoring/drc_roads_mosaic.ipynb
apache-2.0
from functools import reduce
import os
import subprocess
import tempfile

import numpy as np
from planet import api
from planet.api import downloader, filters
import rasterio
from skimage import feature, filters  # note: shadows planet.api.filters imported above
from sklearn.ensemble import RandomForestClassifier

# load local modules
from utils import Timer
import visual

# uncomment if visual is in development
# import importlib
# importlib.reload(visual)

# Import functionality from local notebooks
from ipynb.fs.defs.drc_roads_classification import get_label_mask, get_unmasked_count, \
    load_4band, get_feature_bands, combine_masks, num_valid, perc_masked, bands_to_X, \
    make_same_size_samples, classify_forest, y_to_band, classified_band_to_rgb
"""
Explanation: DRC Change Detection Using Mosaics
This notebook performs forest change detection (in the form of detecting creation of new roads) using Planet mosaics as the data source. It follows the workflow, uses the labeled data, and pulls code from the following notebooks, which perform forest change detection using PSOrthoTiles as the data source:
* DRC Roads Classification
* DRC Roads Temporal Analysis
NOTE: This notebook uses the gdal PLMosaic driver to access and download Planet mosaics. Use of the gdal PLMosaic driver requires specification of the Planet API key. This can either be specified in the command-line options, or gdal will try to pull it from the environment variable PL_API_KEY. This notebook assumes that the environment variable PL_API_KEY is set. See the gdal PLMosaic driver documentation for more information.
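Because gdal silently falls back to the PL_API_KEY environment variable, a small up-front guard (a suggested addition, not from the original notebook) makes a missing key fail loudly:

```python
import os

# Fail fast if the PLMosaic driver would have no Planet API key to use.
def require_planet_api_key():
    key = os.environ.get('PL_API_KEY')
    if not key:
        raise EnvironmentError('PL_API_KEY is not set; the gdal PLMosaic driver needs it.')
    return key

os.environ.setdefault('PL_API_KEY', 'dummy-key-for-demo')  # demo value only
require_planet_api_key()
```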
End of explanation """ # uncomment to see what mosaics are available and to make sure the PLMosaic driver is working # !gdalinfo "PLMosaic:" # get mosaic names for July 2017 to March 2018 mosaic_dates = [('2017', '{0:02d}'.format(m)) for m in range(7, 13)] + \ [('2018', '{0:02d}'.format(m)) for m in range(1, 4)] mosaic_names = ['global_monthly_{}_{}_mosaic'.format(yr, mo) for (yr, mo) in mosaic_dates] def get_mosaic_filename(mosaic_name): return os.path.join('data', mosaic_name + '.tif') for name in mosaic_names: print('{} -> {}'.format(name, get_mosaic_filename(name))) aoi_filename = 'pre-data/aoi.geojson' def _gdalwarp(input_filename, output_filename, options): commands = ['gdalwarp'] + options + \ ['-overwrite', input_filename, output_filename] print(' '.join(commands)) subprocess.check_call(commands) # lossless compression of an image def _compress(input_filename, output_filename): commands = ['gdal_translate', '-co', 'compress=LZW', '-co', 'predictor=2', input_filename, output_filename] print(' '.join(commands)) subprocess.check_call(commands) def download_mosaic(mosaic_name, output_filename, crop_filename, overwrite=False, compress=True): # typically gdalwarp would require `-oo API_KEY={PL_API_KEY}` # but if the environmental variable PL_API_KEY is set, gdal will use that options = ['-cutline', crop_filename, '-crop_to_cutline', '-oo', 'use_tiles=YES'] # use PLMosaic driver input_name = 'PLMosaic:mosaic={}'.format(mosaic_name) # check to see if output file exists, if it does, do not warp if os.path.isfile(output_filename) and not overwrite: print('{} already exists. 
Aborting download of {}.'.format(output_filename, mosaic_name)) elif compress: with tempfile.NamedTemporaryFile(suffix='.vrt') as vrt_file: options += ['-of', 'vrt'] _gdalwarp(input_name, vrt_file.name, options) _compress(vrt_file.name, output_filename) else: _gdalwarp(input_name, output_filename, options) for name in mosaic_names: download_mosaic(name, get_mosaic_filename(name), aoi_filename) """ Explanation: Download Mosaics End of explanation """ forest_img = os.path.join('pre-data', 'forestroad_forest.tif') road_img = os.path.join('pre-data', 'forestroad_road.tif') forest_mask = get_label_mask(forest_img) print(get_unmasked_count(forest_mask)) road_mask = get_label_mask(road_img) print(get_unmasked_count(road_mask)) forest_mask.shape """ Explanation: Classify Mosaics into Forest and Non-Forest To classify the mosaics into forest and non-forest, we use the Random Forests classifier. This is a supervised classification technique, so we need to create a training dataset. The training dataset will be created from one mosaic image and then the trained classifier will classify all mosaic images. Although we have already performed classification of a 4-band Orthotile into forest and non-forest in drc_roads_classification, the format of the data is different in mosaics, so we need to re-create our training dataset. However, we will use the same label images that were created as a part of that notebook. Additionally, we will pull a lot of code from that notebook. 
Create Label Masks End of explanation """ # specify the training dataset mosaic image file image_file = get_mosaic_filename(mosaic_names[0]) image_file # this is the georeferenced image that was used to create the forest and non-forest label images label_image = 'pre-data/roads.tif' # get label image crs, bounds, and pixel dimensions with rasterio.open(label_image, 'r') as ref: dst_crs = ref.crs['init'] (xmin, ymin, xmax, ymax) = ref.bounds width = ref.width height = ref.height print(dst_crs) print((xmin, ymin, xmax, ymax)) print((width, height)) # this is the warped training mosaic image we will create with gdal training_file = os.path.join('data', 'mosaic_training.tif') # use gdalwarp to warp mosaic image to match label image !gdalwarp -t_srs $dst_crs \ -te $xmin $ymin $xmax $ymax \ -ts $width $height \ -overwrite $image_file $training_file """ Explanation: Warp Mosaic to Match Label Masks The label images used to create the label masks were created from the PSOrthoTiles. Therefore, they are in a different projection, have a different transform, and have a different pixel size than the mosaic images. To create the training dataset, we must first match the label and mosaic images so that the pixel dimensions and locations line up. To do this, we warp the mosaic image to match the label image coordinate reference system, bounds, and pixel dimensions. The forest/non-forest labeled images were created in GIMP, which doesn't save georeference information. Therefore, we will pull georeference information from the source image used to create the labeled images, roads.tif. 
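The shell invocation above can equally be composed in Python, mirroring the notebook's _gdalwarp helper; the command below is only built and printed, not executed, and the CRS/bounds values are hypothetical:

```python
# Build a gdalwarp command that matches a target CRS, bounds, and pixel grid.
def warp_command(src, dst, dst_crs, bounds, size):
    xmin, ymin, xmax, ymax = bounds
    width, height = size
    return ['gdalwarp',
            '-t_srs', dst_crs,
            '-te', str(xmin), str(ymin), str(xmax), str(ymax),
            '-ts', str(width), str(height),
            '-overwrite', src, dst]

cmd = warp_command('data/global_monthly_2017_07_mosaic.tif', 'data/mosaic_training.tif',
                   'EPSG:32634', (500000.0, 9000000.0, 503000.0, 9003000.0), (1000, 1000))
print(' '.join(cmd))
```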
End of explanation """ feature_bands = get_feature_bands(training_file) print(feature_bands[0].shape) total_mask = combine_masks(feature_bands) print(total_mask.shape) # combine the label masks with the valid data mask and then create X dataset for each label total_forest_mask = np.logical_or(total_mask, forest_mask) print('{} valid pixels ({}% masked)'.format(num_valid(total_forest_mask), round(perc_masked(total_forest_mask), 2))) X_forest = bands_to_X(feature_bands, total_forest_mask) total_road_mask = np.logical_or(total_mask, road_mask) print('{} valid pixels ({}% masked)'.format(num_valid(total_road_mask), round(perc_masked(total_road_mask), 2))) X_road = bands_to_X(feature_bands, total_road_mask) [X_forest_sample, X_road_sample] = \ make_same_size_samples([X_forest, X_road], size_percent=100) print(X_forest_sample.shape) print(X_road_sample.shape) forest_label_value = 0 road_label_value = 1 X_training = np.concatenate((X_forest_sample, X_road_sample), axis=0) y_training = np.array(X_forest_sample.shape[0] * [forest_label_value] + \ X_road_sample.shape[0] * [road_label_value]) print(X_training.shape) print(y_training.shape) """ Explanation: Create Training Datasets Now that the images match, we create the training datasets from the labels and the training mosaic image. End of explanation """ with Timer(): y_band_rf = classify_forest(image_file, X_training, y_training) visual.plot_image(classified_band_to_rgb(y_band_rf), title='Classified Training Image (Random Forests)', figsize=(15, 15)) """ Explanation: Classify Training Image Now we will train the classifier to detect forest/non-forest classes from the training data and will run this on the original training mosaic image to see how well it works. 
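make_same_size_samples balances the classes before training; a hypothetical NumPy version of that idea, randomly downsampling each class to the smallest class size:

```python
import numpy as np

# Randomly subsample each (pixels x features) array down to the smallest row count.
def same_size_samples(arrays, seed=0):
    rng = np.random.default_rng(seed)
    n = min(a.shape[0] for a in arrays)
    return [a[rng.choice(a.shape[0], size=n, replace=False)] for a in arrays]

forest = np.zeros((1000, 9))   # many forest pixels, 9 hypothetical features each
road = np.ones((120, 9))       # far fewer road pixels
forest_s, road_s = same_size_samples([forest, road])
print(forest_s.shape, road_s.shape)  # (120, 9) (120, 9)
```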
End of explanation """ classified_bands_file = os.path.join('data', 'classified_mosaic_bands.npz') def save_to_cache(classified_bands, mosaic_names): save_bands = dict((s, classified_bands[s]) for s in mosaic_names) # masked arrays are saved as just arrays, so save mask for later save_bands.update(dict((s+'_msk', classified_bands[s].mask) for s in mosaic_names)) np.savez_compressed(classified_bands_file, **save_bands) def load_from_cache(): classified_bands = np.load(classified_bands_file) scene_ids = [k for k in classified_bands.keys() if not k.endswith('_msk')] # reform masked array from saved array and saved mask classified_bands = dict((s, np.ma.array(classified_bands[s], mask=classified_bands[s+'_msk'])) for s in scene_ids) return classified_bands use_cache = True if use_cache and os.path.isfile(classified_bands_file): print('using cached classified bands') classified_bands = load_from_cache() else: with Timer(): def classify(mosaic_name): img = get_mosaic_filename(mosaic_name) # we only have two values, 0 and 1. Convert to uint8 for memory band = (classify_forest(img, X_training, y_training)).astype(np.uint8) return band classified_bands = dict((s, classify(s)) for s in mosaic_names) # save to cache save_to_cache(classified_bands, mosaic_names) # Decimate classified arrays for memory conservation def decimate(arry, num=8): return arry[::num, ::num].copy() do_visualize = True # set to True to view images if do_visualize: for mosaic_name, classified_band in classified_bands.items(): visual.plot_image(classified_band_to_rgb(decimate(classified_band)), title='Classified Image ({})'.format(mosaic_name), figsize=(8, 8)) """ Explanation: Classification on All Mosaic Images Now that the classifier is trained, run it on all of the mosaic images. This process takes a while so in this section, if classification has already been run and the classification results have been saved, we will load the cached results instead of rerunning classification. 
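The cache helpers above store each mask alongside its array because savez keeps only plain arrays; the round trip, reduced to one band with made-up values:

```python
import numpy as np
import os
import tempfile

band = np.ma.array([1, 2, 3, 4], mask=[False, True, False, False])

path = os.path.join(tempfile.mkdtemp(), 'bands.npz')
np.savez_compressed(path, band=band.data, band_msk=band.mask)  # data and mask saved separately

loaded = np.load(path)
restored = np.ma.array(loaded['band'], mask=loaded['band_msk'])
print(restored.count())  # 3 unmasked values
```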
This behavior can be altered by setting use_cache to False. End of explanation """ # labeled change images, not georeferenced change_img_orig = os.path.join('pre-data', 'difference_change.tif') nochange_img_orig = os.path.join('pre-data', 'difference_nochange.tif') # georeferenced source image src_img = os.path.join('pre-data', 'difference.tif') # destination georeferened label images change_img_geo = os.path.join('data', 'difference_change.tif') nochange_img_geo = os.path.join('data', 'difference_nochange.tif') # get crs and transform from the georeferenced source image with rasterio.open(src_img, 'r') as src: src_crs = src.crs src_transform = src.transform # create the georeferenced label images for (label_img, geo_img) in ((change_img_orig, change_img_geo), (nochange_img_orig, nochange_img_geo)): with rasterio.open(label_img, 'r') as src: profile = { 'width': src.width, 'height': src.height, 'driver': 'GTiff', 'count': src.count, 'compress': 'lzw', 'dtype': rasterio.uint8, 'crs': src_crs, 'transform': src_transform } with rasterio.open(geo_img, 'w', **profile) as dst: dst.write(src.read()) """ Explanation: These classified mosaics look a lot better than the classified PSOrthoTile strips. This bodes well for the quality of our change detection results! Identify Change In this section, we use Random Forest classification once again to detect change in the forest in the form of new roads being built. Once again we need to train the classifier, this time to detect change/no-change. And once again we use hand-labeled images created in the Temporal Analysis Notebook to train the classifier. Create Label Masks The change/no-change label images were created in the Temporal Analysis Notebook. The images were created from an image that was created for use with the PSOrthoTiles. Therefore, they are in a different projection, have a different affine transformation, and have different resolution than the classified mosaic bands. 
Further, the images were created with GIMP, which does not save the georeference information. We already have the classified mosaic bands in memory and so, instead of saving them out and warping each one, we will warp the label images to match the mosaic bands (this is the opposite of what we did in forest/non-forest classification). Therefore, the label images need to be georeferenced and then warped to match the classified mosaic images. Once this is done, the change/no-change label masks can be created. Georeference Labeled Images The labeled images are prepared in GIMP, so georeference information has not been preserved. First, we will restore georeference information to the labeled images using rasterio. End of explanation """ # get dest crs, bounds, and shape from mosaic image image_file = get_mosaic_filename(mosaic_names[0]) with rasterio.open(image_file, 'r') as ref: dst_crs = ref.crs['init'] (xmin, ymin, xmax, ymax) = ref.bounds width = ref.width height = ref.height print(dst_crs) print((xmin, ymin, xmax, ymax)) print((width, height)) # destination matched images change_img = os.path.join('data', 'mosaic_difference_change.tif') nochange_img = os.path.join('data', 'mosaic_difference_nochange.tif') # resample and resize to match mosaic !gdalwarp -t_srs $dst_crs \ -te $xmin $ymin $xmax $ymax \ -ts $width $height \ -overwrite $change_img_geo $change_img !gdalwarp -t_srs $dst_crs \ -te $xmin $ymin $xmax $ymax \ -ts $width $height \ -overwrite $nochange_img_geo $nochange_img """ Explanation: Match Georeferenced Label Images to Mosaic Images Now that the label images are georeferenced, we warp them to match the mosaic images. End of explanation """ change_mask = get_label_mask(change_img) print(get_unmasked_count(change_mask)) nochange_mask = get_label_mask(nochange_img) print(get_unmasked_count(nochange_mask)) """ Explanation: Load Label Masks Now that the label images match the mosaic images, we can load the label masks. 
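Underneath rasterio, "georeferenced" just means the raster carries a CRS plus an affine transform mapping pixel (col, row) to map coordinates; a sketch with hypothetical coefficients in rasterio's (a, b, c, d, e, f) ordering:

```python
# x = a*col + b*row + c ; y = d*col + e*row + f  (rasterio Affine ordering)
def pixel_to_map(transform, col, row):
    a, b, c, d, e, f = transform
    return a * col + b * row + c, d * col + e * row + f

# hypothetical 3 m pixels, origin at (500000, 9000000), north-up (negative y step)
transform = (3.0, 0.0, 500000.0, 0.0, -3.0, 9000000.0)
print(pixel_to_map(transform, 10, 20))  # (500030.0, 8999940.0)
```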
End of explanation """ # combine the label masks with the valid data mask and then create X dataset for each label classified_bands_arrays = classified_bands.values() total_mask = combine_masks(classified_bands_arrays) total_change_mask = np.logical_or(total_mask, change_mask) print('Change: {} valid pixels ({}% masked)'.format(num_valid(total_change_mask), round(perc_masked(total_change_mask), 2))) X_change = bands_to_X(classified_bands_arrays, total_change_mask) total_nochange_mask = np.logical_or(total_mask, nochange_mask) print('No Change: {} valid pixels ({}% masked)'.format(num_valid(total_nochange_mask), round(perc_masked(total_nochange_mask), 2))) X_nochange = bands_to_X(classified_bands_arrays, total_nochange_mask) # create a training sample set that is equal in size for all categories # and uses 10% of the labeled change pixels [X_change_sample, X_nochange_sample] = \ make_same_size_samples([X_change, X_nochange], size_percent=10) print(X_change_sample.shape) print(X_nochange_sample.shape) change_label_value = 0 nochange_label_value = 1 X_rf = np.concatenate((X_change_sample, X_nochange_sample), axis=0) y_rf = np.array(X_change_sample.shape[0] * [change_label_value] + \ X_nochange_sample.shape[0] * [nochange_label_value]) print(X_rf.shape) print(y_rf.shape) """ Explanation: Get Features from Labels Create our training dataset from the label masks and the classified mosaic bands. 
End of explanation """ # NOTE: This relative import isn't working so the following code is directly # copied from the temporal analysis notebook # from ipynb.fs.defs.drc_roads_temporal_analysis import classify_change def classify_change(classified_bands, mask, X_training, y_training): clf = RandomForestClassifier() with Timer(): clf.fit(X_training, y_training) X = bands_to_X(classified_bands, total_mask) with Timer(): y_pred = clf.predict(X) y_band = y_to_band(y_pred, total_mask) return y_band with Timer(): y_band_rf = classify_change(classified_bands_arrays, total_mask, X_rf, y_rf) visual.plot_image(classified_band_to_rgb(y_band_rf), title='RF Classified Image', figsize=(25, 25)) """ Explanation: Classify Change End of explanation """
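bands_to_X turns the stack of per-month classified bands into a pixels-by-months feature matrix, keeping only pixels that are valid in every band; a hypothetical NumPy equivalent:

```python
import numpy as np

# Stack the unmasked pixels of each band into an (n_valid_pixels, n_bands) matrix.
def bands_to_feature_matrix(bands, mask):
    return np.stack([band[~mask] for band in bands], axis=1)

mask = np.array([[False, True], [False, False]])        # one invalid pixel out of four
bands = [np.full((2, 2), month) for month in range(9)]  # nine monthly bands
X = bands_to_feature_matrix(bands, mask)
print(X.shape)  # (3, 9)
```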
roatienza/Deep-Learning-Experiments
versions/2022/supervised/python/mnist_demo.ipynb
mit
%pip install pytorch-lightning --upgrade
%pip install torchmetrics --upgrade

import torch
import torchvision
import wandb

from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torchmetrics.functional import accuracy
"""
Explanation: Image Recognition on MNIST using PyTorch Lightning
Demonstrating the elements of machine learning:
1) Experience (Datasets and Dataloaders)<br>
2) Task (Classifier Model)<br>
3) Performance (Accuracy)<br>
Experience: <br>
We use the MNIST dataset for this demo. MNIST is made of 28x28 images of handwritten digits, 0 to 9. The train split has 60,000 images and the test split has 10,000 images. Images are all gray-scale.
Task:<br>
Our task is to classify the images into 10 classes. We use the ResNet18 model from torchvision.models. The ResNet18 first convolutional layer (conv1) is modified to accept a single channel input. The number of classes is set to 10.
Performance:<br>
We use the accuracy metric to evaluate the performance of our model on the test split. torchmetrics.functional.accuracy calculates the accuracy.
PyTorch Lightning:<br>
Our demo uses PyTorch Lightning to simplify the process of training and testing. The PyTorch Lightning Trainer trains and evaluates our model. The default configurations are for a GPU-enabled system with 48 CPU cores. Please change the configurations if you have a different system.
Weights and Biases:<br>
wandb is used by the PyTorch Lightning Module to log training and evaluation results. Use --no-wandb to disable wandb.
Let us install pytorch-lightning and torchmetrics.
End of explanation
"""
class LitMNISTModel(LightningModule):
    def __init__(self, num_classes=10, lr=0.001, batch_size=32):
        super().__init__()
        self.save_hyperparameters()
        self.model = torchvision.models.resnet18(num_classes=num_classes)
        self.model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x):
        return self.model(x)

    # this is called during fit()
    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.forward(x)
        loss = self.loss(y_hat, y)
        return {"loss": loss}

    # calls to self.log() are recorded in wandb
    def training_epoch_end(self, outputs):
        avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
        self.log("train_loss", avg_loss, on_epoch=True)

    # this is called for every test batch
    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.forward(x)
        loss = self.loss(y_hat, y)
        acc = accuracy(y_hat, y) * 100.
        # we use y_hat to display predictions during callback
        return {"y_hat": y_hat, "test_loss": loss, "test_acc": acc}

    # this is called at the end of the test epoch
    def test_epoch_end(self, outputs):
        avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
        avg_acc = torch.stack([x["test_acc"] for x in outputs]).mean()
        self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
        self.log("test_acc", avg_acc, on_epoch=True, prog_bar=True)

    # validation is the same as test
    def validation_step(self, batch, batch_idx):
        return self.test_step(batch, batch_idx)

    def validation_epoch_end(self, outputs):
        return self.test_epoch_end(outputs)

    # we use the Adam optimizer
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)

    # this is called after model instantiation to initialize the datasets and dataloaders
    def setup(self, stage=None):
        self.train_dataloader()
        self.test_dataloader()

    # build train and test dataloaders using MNIST dataset
    # we use simple ToTensor transform
    def train_dataloader(self):
        return
torch.utils.data.DataLoader(
            torchvision.datasets.MNIST(
                "./data", train=True, download=True,
                transform=torchvision.transforms.ToTensor()
            ),
            batch_size=self.hparams.batch_size,
            shuffle=True,
            num_workers=48,
            pin_memory=True,
        )

    def test_dataloader(self):
        return torch.utils.data.DataLoader(
            torchvision.datasets.MNIST(
                "./data", train=False, download=True,
                transform=torchvision.transforms.ToTensor()
            ),
            batch_size=self.hparams.batch_size,
            shuffle=False,
            num_workers=48,
            pin_memory=True,
        )

    def val_dataloader(self):
        return self.test_dataloader()
"""
Explanation: PyTorch Lightning Module
The PyTorch Lightning Module has a PyTorch ResNet18 model and is a subclass of LightningModule. The model part is subclassed to support a single-channel input: we replaced the input convolutional layer accordingly. The Lightning Module is also a container for the model, the optimizer, the loss function, the metrics, and the data loaders. The ResNet class can be found here.
By using PyTorch Lightning, we simplify the training and testing processes since we do not need to write boilerplate code blocks. These include automatic transfer to the chosen device (i.e. gpu or cpu), model eval and train modes, and backpropagation routines.
End of explanation
"""
class WandbCallback(Callback):

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        # process first 10 images of the first batch
        if batch_idx == 0:
            n = 10
            x, y = batch
            outputs = outputs["y_hat"]
            outputs = torch.argmax(outputs, dim=1)
            # log image, ground truth and prediction on wandb table
            columns = ['image', 'ground truth', 'prediction']
            data = [[wandb.Image(x_i), y_i, y_pred] for x_i, y_i, y_pred in list(
                zip(x[:n], y[:n], outputs[:n]))]
            wandb_logger.log_table(
                key='ResNet18 on MNIST Predictions',
                columns=columns,
                data=data)
"""
Explanation: PyTorch Lightning Callback
We can instantiate a callback object to perform certain tasks during training.
In this case, we log sample images, ground truth labels, and predicted labels from the test dataset. We can also use the ModelCheckpoint callback to save the model after each epoch.
End of explanation
"""
def get_args():
    parser = ArgumentParser(description="PyTorch Lightning MNIST Example")
    parser.add_argument("--max-epochs", type=int, default=5, help="num epochs")
    parser.add_argument("--batch-size", type=int, default=32, help="batch size")
    parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
    parser.add_argument("--num-classes", type=int, default=10, help="num classes")
    parser.add_argument("--devices", default=1)
    parser.add_argument("--accelerator", default='gpu')
    parser.add_argument("--num-workers", type=int, default=48, help="num workers")
    parser.add_argument("--no-wandb", default=False, action='store_true')
    args = parser.parse_args("")
    return args
"""
Explanation: Program Arguments
When running on the command line, we can pass arguments to the program. For the Jupyter notebook, we can pass arguments using the %run magic command.
End of explanation
"""
if __name__ == "__main__":
    args = get_args()
    model = LitMNISTModel(num_classes=args.num_classes,
                          lr=args.lr, batch_size=args.batch_size)
    model.setup()
    # printing the model is useful for debugging
    print(model)

    # wandb is a great way to debug and visualize this model
    wandb_logger = WandbLogger(project="pl-mnist")

    trainer = Trainer(accelerator=args.accelerator,
                      devices=args.devices,
                      max_epochs=args.max_epochs,
                      logger=wandb_logger if not args.no_wandb else None,
                      callbacks=[WandbCallback()] if not args.no_wandb else None)
    trainer.fit(model)
    trainer.test(model)

    wandb.finish()
"""
Explanation: Training and Evaluation using Trainer
Get command line arguments. Instantiate a PyTorch Lightning Model. Train the model. Evaluate the model.
End of explanation
"""
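For intuition, the accuracy computed in `test_step` reduces to comparing argmax predictions against the true labels. A dependency-free sketch of that computation (not torchmetrics' actual implementation, which also covers other tasks and averaging modes):

```python
def multiclass_accuracy(logits, labels):
    # logits: per-sample lists of class scores; labels: true class indices
    correct = 0
    for scores, label in zip(logits, labels):
        pred = max(range(len(scores)), key=scores.__getitem__)  # argmax
        correct += (pred == label)
    return correct / len(labels)

logits = [[0.1, 2.0, 0.3],   # predicts class 1
          [1.5, 0.2, 0.1],   # predicts class 0
          [0.0, 0.1, 3.0]]   # predicts class 2
print(multiclass_accuracy(logits, [1, 0, 1]))  # 2 of 3 correct -> 0.666...
```

In the notebook this value is multiplied by 100 to log a percentage.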
jmschrei/pomegranate
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import numpy

from pomegranate import *

numpy.random.seed(0)
numpy.set_printoptions(suppress=True)

%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
"""
Explanation: General Mixture Models
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
It is frequently the case that the data you have is not explained by a single underlying distribution. Typically this is because there are multiple phenomena occurring in the data set, each with their own underlying distribution. If we want to try to recover the underlying distributions, we need to have a model which has multiple components. An example could be sensor readings where the majority of the time a sensor shows no signal, but sometimes it detects some phenomena. Modeling both phenomena as a single distribution would be silly because the readings would come from two distinct phenomena.
A solution to the problem of having more than one single underlying distribution is to use a mixture of distributions instead of a single distribution, commonly called a mixture model. This type of compositional model builds a more complex probability distribution from a set of simpler ones. A common type, called a Gaussian Mixture Model, is composed of Gaussian distributions, but mathematically there is no need for these distributions to all be Gaussian. In fact, there is no need for these distributions to be simple probability distributions.
In this tutorial we'll explore how to do mixture modeling in pomegranate, compare against scikit-learn's implementation of Gaussian mixture models, and explore more complex types of mixture modeling that one can do with probabilistic modeling.
End of explanation """ X = numpy.concatenate([numpy.random.normal((7, 2), 1, size=(100, 2)), numpy.random.normal((2, 3), 1, size=(150, 2)), numpy.random.normal((7, 7), 1, size=(100, 2))]) plt.figure(figsize=(8, 6)) plt.scatter(X[:,0], X[:,1]) plt.show() """ Explanation: A simple example: Gaussian mixture models Let's start off with a simple example. Perhaps we have a data set like the one below, which is made up of not a single blob, but many blobs. It doesn't seem like any of the simple distributions that pomegranate has implemented can fully capture what's going on in the data. End of explanation """ model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 3, X) """ Explanation: It seems clear to us that this data is composed of three blobs. Accordingly, rather than trying to find some complex single distribution that can describe this data, we can describe it as a mixture of three Gaussian distributions. In the same way that we could initialize a basic distribution using the from_samples method, we can initialize a mixture model using it, additionally passing in the type(s) of distribution(s) to use and the number of components. End of explanation """ x = numpy.arange(-1, 10.1, .1) y = numpy.arange(-1, 10.1, .1) xx, yy = numpy.meshgrid(x, y) x_ = numpy.array(list(zip(xx.flatten(), yy.flatten()))) p1 = MultivariateGaussianDistribution.from_samples(X).probability(x_).reshape(len(x), len(y)) p2 = model.probability(x_).reshape(len(x), len(y)) plt.figure(figsize=(14, 6)) plt.subplot(121) plt.contourf(xx, yy, p1, cmap='Blues', alpha=0.8) plt.scatter(X[:,0], X[:,1]) plt.subplot(122) plt.contourf(xx, yy, p2, cmap='Blues', alpha=0.8) plt.scatter(X[:,0], X[:,1]) plt.show() """ Explanation: Now we can look at the probability densities if we had used a single Gaussian distribution versus using this mixture of three Gaussian models. 
End of explanation """ mu = numpy.random.normal(7, 1, size=250) std = numpy.random.lognormal(-2.0, 0.4, size=250) std[::2] += 0.3 dur = numpy.random.exponential(250, size=250) dur[::2] -= 140 dur = numpy.abs(dur) data = numpy.concatenate([numpy.random.normal(mu_, std_, int(t)) for mu_, std_, t in zip(mu, std, dur)]) plt.figure(figsize=(14, 4)) plt.title("Randomly Generated Signal", fontsize=16) plt.plot(data) plt.xlabel("Time", fontsize=14) plt.ylabel("Signal", fontsize=14) plt.xlim(0, 10000) plt.show() """ Explanation: It looks like, unsurprisingly, the mixture model is able to better capture the structure of the data. The single model is so bad that the region of highest density (the darkest blue ellipse) has very few real points in it. In contrast, the darkest regions in the mixture densities correspond to where there are the most points in the clusters. A more complex example: independent component mixture models Most data is more complex than the Gaussian case above. A common case in the domain of signal processing is analyzing segments of steady signal based on the mean, noise, and duration of each segment. Let's say that we have a signal like the following which is made up of long low-noise signals and shorter high-noise signals. End of explanation """ plt.figure(figsize=(14, 4)) plt.subplot(131) plt.title("Segment Means", fontsize=14) plt.hist(mu[::2], bins=20, alpha=0.7) plt.hist(mu[1::2], bins=20, alpha=0.7) plt.subplot(132) plt.title("Segment STDs", fontsize=14) plt.hist(std[::2], bins=20, alpha=0.7) plt.hist(std[1::2], bins=20, alpha=0.7) plt.subplot(133) plt.title("Segment Durations", fontsize=14) plt.hist(dur[::2], bins=numpy.arange(0, 1000, 25), alpha=0.7) plt.hist(dur[1::2], bins=numpy.arange(0, 1000, 25), alpha=0.7) plt.show() """ Explanation: We can show the distribution of segment properties to highlight the differences. 
End of explanation """ X = numpy.array([mu, std, dur]).T.copy() model = GeneralMixtureModel.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], 2, X) model """ Explanation: In this situation, it looks like the segment noise is going to be the feature with the most difference between the learned distributions. Let's use pomegranate to learn this mixture, modeling each feature with an appropriate distribution. End of explanation """ model = GeneralMixtureModel([NormalDistribution(4, 1), NormalDistribution(7, 1)]) """ Explanation: It looks like the model is easily able to fit the differences that we observed in the data. The durations of the two components are similar, the means are almost identical to each other, but it's the noise that's the most different between the two. The API Initialization The API for general mixture models is similar to that of the rest of pomegranate. The model can be initialized either through passing in some pre-initialized distributions or by calling from_samples on a data set, specifying the types of distributions you'd like. You can simply pass in some distributions to initialize the mixture. End of explanation """ model2 = GeneralMixtureModel([NormalDistribution(5, 1), ExponentialDistribution(0.3)]) """ Explanation: There is no reason these distributions have to be the same type. You can create a non-homogenous mixture by passing in different types of distributions. 
End of explanation
"""
x = numpy.arange(0, 10.01, 0.05)

plt.figure(figsize=(14, 3))
plt.subplot(121)
plt.title("~Norm(4, 1) + ~Norm(7, 1)", fontsize=14)
plt.ylabel("Probability Density", fontsize=14)
plt.fill_between(x, 0, model.probability(x))
plt.ylim(0, 0.25)

plt.subplot(122)
plt.title("~Norm(5, 1) + ~Exp(0.3)", fontsize=14)
plt.ylabel("Probability Density", fontsize=14)
plt.fill_between(x, 0, model2.probability(x))
plt.ylim(0, 0.25)
plt.show()
"""
Explanation: They will produce very different probability distributions, but both work as a mixture.
End of explanation
"""
X = numpy.random.normal(3, 1, size=(200, 1))
X[::2] += 3

model = GeneralMixtureModel.from_samples(NormalDistribution, 2, X)
"""
Explanation: pomegranate offers a lot of flexibility when it comes to making mixtures directly from data. The normal option is just to specify the type of distribution, the number of components, and then pass in the data. Make sure that if you're using 1-dimensional data, you're still passing in a matrix whose second dimension is set to 1.
End of explanation
"""
plt.figure(figsize=(6, 3))
plt.title("Learned Model", fontsize=14)
plt.ylabel("Probability Density", fontsize=14)
plt.fill_between(x, 0, model.probability(x))
plt.ylim(0, 0.25)
plt.show()
"""
Explanation: However, we can also fit a non-homogeneous mixture by passing in a list of univariate distributions to be fit to univariate data.
Let's try fitting to data where some points are normally distributed and some are exponentially distributed.
End of explanation
"""
X = numpy.concatenate([numpy.random.normal(5, 1, size=(200, 1)),
                       numpy.random.exponential(1, size=(50, 1))])

model = GeneralMixtureModel.from_samples([NormalDistribution, ExponentialDistribution], 2, X)

x = numpy.arange(0, 12.01, 0.01)

plt.figure(figsize=(6, 3))
plt.title("Learned Model", fontsize=14)
plt.ylabel("Probability Density", fontsize=14)
plt.fill_between(x, 0, model.probability(x))
plt.ylim(0, 0.5)
plt.show()
"""
Explanation: It looks like the mixture is capturing the distribution fairly well.
Next, if we want to make a mixture model that describes each feature using a different distribution, we can pass in a list of distributions---one for each feature---and it'll use that distribution to model the corresponding feature. For example, in the example below, the first feature will be modeled using a normal distribution, the second feature will be modeled using a log normal distribution, and the last feature will be modeled using an exponential distribution.
End of explanation
"""
X = numpy.array([mu, std, dur]).T.copy()

model = GeneralMixtureModel.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], 2, X)
"""
Explanation: This is the same command we used in the second example. It creates two components, each of which models the three features respectively with those three distributions. The difference between this and the previous example is that the data here is multivariate, and so the univariate distributions are assumed to model the different features.
Prediction
Now that we have a mixture model, we can make predictions as to which component each point most likely falls under. This is similar to the situation in which you predict which cluster a point falls under in k-means clustering.
However, one of the benefits of using probabilistic models is that we can get a softer assignment of points based on probability, rather than a hard assignment to clusters for each point. End of explanation """ X = numpy.arange(4, 9, 0.5).reshape(10, 1) model.predict_proba(X) """ Explanation: The prediction task is identifying, for each point, whether it falls under the orange distribution or the green distribution. This is done using Bayes' rule, where the probability of each sample under each component is divided by the probability of the sample under all components of the mixture. The posterior probability of a sample belonging to component $i$ out of $k$ components is \begin{equation} P(M_{i}|D) = \frac{P(D|M_{i})P(M_{i})}{\sum\limits_{i=1}^{k} P(D|M_{i})P(M_{i})} \end{equation} where $D$ is the data, $M_{i}$ is the component of the model (such as the green or orange distributions), $P(D|M_{i}$ is the likelihood calculated by the probability function, $P(M_{i})$ is the prior probability of each component, stored under the weights attribute of the model, and $P(M_{i}|D)$ is the posterior probability that sums to 1 over all components. We can calculate these posterior probabilites using the predict_proba method. End of explanation """
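The Bayes'-rule computation that `predict_proba` performs above is easy to verify by hand. A small numpy sketch for a two-component Gaussian mixture (the means, standard deviations, and priors here are illustrative values, not the fitted model's parameters):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # likelihood P(D|M_i) under a univariate normal component
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posteriors(x, mus, sigmas, priors):
    # numerator P(D|M_i)P(M_i) for each component, then normalize over components
    lik = np.array([p * normal_pdf(x, m, s) for m, s, p in zip(mus, sigmas, priors)])
    return (lik / lik.sum(axis=0)).T  # one row per sample; rows sum to 1

x = np.array([4.0, 5.5, 7.0])
post = posteriors(x, mus=[4.0, 7.0], sigmas=[1.0, 1.0], priors=[0.5, 0.5])
```

A point at 4.0 is assigned almost entirely to the first component, a point at 7.0 to the second, and the midpoint 5.5 splits exactly 50/50 — the soft assignment described above.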
ageron/tensorflow-safari-course
05_autodiff_ex5.ipynb
apache-2.0
from __future__ import absolute_import, division, print_function, unicode_literals import tensorflow as tf tf.__version__ """ Explanation: Try not to peek at the solutions when you go through the exercises. ;-) First let's make sure this notebook works well in both Python 2 and Python 3: End of explanation """ import numpy as np data = np.loadtxt("data/life_satisfaction.csv", dtype=np.float32, delimiter=",", skiprows=1, usecols=[1, 2]) X_train = data[:, 0:1] / 10000 # feature scaling y_train = data[:, 1:2] learning_rate = 0.01 %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 def plot_life_satisfaction(X_train, y_train): plt.plot(X_train * 10000, y_train, "bo") plt.axis([0, 60000, 0, 10]) plt.xlabel("GDP per capita ($)") plt.ylabel("Life Satisfaction") plt.grid() def plot_life_satisfaction_with_linear_model(X_train, y_train, w, b): plot_life_satisfaction(X_train, y_train) plt.plot([0, 60000], [b, w[0][0] * (60000 / 10000) + b]) """ Explanation: From notebook 4 linear regression End of explanation """ graph = tf.Graph() with graph.as_default(): X = tf.constant(X_train, dtype=tf.float32, name="X") y = tf.constant(y_train, dtype=tf.float32, name="y") b = tf.Variable(0.0, name="b") w = tf.Variable(tf.zeros([1, 1]), name="w") y_pred = tf.add(tf.matmul(X, w), b, name="y_pred") # X @ w + b mse = tf.reduce_mean(tf.square(y_pred - y), name="mse") gradients_w, gradients_b = tf.gradients(mse, [w, b]) # <= IT'S AUTODIFF MAGIC! 
tweak_w_op = tf.assign(w, w - learning_rate * gradients_w) tweak_b_op = tf.assign(b, b - learning_rate * gradients_b) training_op = tf.group(tweak_w_op, tweak_b_op) init = tf.global_variables_initializer() n_iterations = 2000 with tf.Session(graph=graph) as sess: init.run() for iteration in range(n_iterations): if iteration % 100 == 0: print("Iteration {:5}, MSE: {:.4f}".format(iteration, mse.eval())) training_op.run() w_val, b_val = sess.run([w, b]) plt.figure(figsize=(10, 5)) plot_life_satisfaction_with_linear_model(X_train, y_train, w_val, b_val) plt.show() """ Explanation: Using autodiff Instead End of explanation """ graph = tf.Graph() with graph.as_default(): X = tf.constant(X_train, dtype=tf.float32, name="X") y = tf.constant(y_train, dtype=tf.float32, name="y") b = tf.Variable(0.0, name="b") w = tf.Variable(tf.zeros([1, 1]), name="w") y_pred = tf.add(tf.matmul(X, w), b, name="y_pred") # X @ w + b mse = tf.reduce_mean(tf.square(y_pred - y), name="mse") optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) training_op = optimizer.minimize(mse) # <= MOAR AUTODIFF MAGIC! 
init = tf.global_variables_initializer() n_iterations = 2000 with tf.Session(graph=graph) as sess: init.run() for iteration in range(n_iterations): if iteration % 100 == 0: print("Iteration {:5}, MSE: {:.4f}".format(iteration, mse.eval())) training_op.run() w_val, b_val = sess.run([w, b]) plt.figure(figsize=(10, 5)) plot_life_satisfaction_with_linear_model(X_train, y_train, w_val, b_val) plt.show() """ Explanation: Using Optimizers End of explanation """ learning_rate = 0.01 momentum = 0.8 graph = tf.Graph() with graph.as_default(): X = tf.constant(X_train, dtype=tf.float32, name="X") y = tf.constant(y_train, dtype=tf.float32, name="y") b = tf.Variable(0.0, name="b") w = tf.Variable(tf.zeros([1, 1]), name="w") y_pred = tf.add(tf.matmul(X, w), b, name="y_pred") # X @ w + b mse = tf.reduce_mean(tf.square(y_pred - y), name="mse") optimizer = tf.train.MomentumOptimizer(learning_rate, momentum) training_op = optimizer.minimize(mse) init = tf.global_variables_initializer() n_iterations = 500 with tf.Session(graph=graph) as sess: init.run() for iteration in range(n_iterations): if iteration % 100 == 0: print("Iteration {:5}, MSE: {:.4f}".format(iteration, mse.eval())) training_op.run() w_val, b_val = sess.run([w, b]) plt.figure(figsize=(10, 5)) plot_life_satisfaction_with_linear_model(X_train, y_train, w_val, b_val) plt.show() """ Explanation: Faster Optimizers End of explanation """ coll = graph.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) [var.op.name for var in coll] """ Explanation: How does the optimizer know which variables to tweak? Answer: the TRAINABLE_VARIABLES collection. 
End of explanation
"""
cyprus_gdp_per_capita = 22000
cyprus_life_satisfaction = w_val[0][0] * cyprus_gdp_per_capita / 10000 + b_val
cyprus_life_satisfaction
"""
Explanation: Making Predictions Outside of TensorFlow
End of explanation
"""
graph = tf.Graph()
with graph.as_default():
    X = tf.placeholder(tf.float32, shape=[None, 1], name="X") # <= None allows for any
    y = tf.placeholder(tf.float32, shape=[None, 1], name="y") #    training batch size
    b = tf.Variable(0.0, name="b")
    w = tf.Variable(tf.zeros([1, 1]), name="w")
    y_pred = tf.add(tf.matmul(X, w), b, name="y_pred")  # X @ w + b
    mse = tf.reduce_mean(tf.square(y_pred - y), name="mse")
    optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)
    training_op = optimizer.minimize(mse)
    init = tf.global_variables_initializer()

n_iterations = 500

X_test = np.array([[22000]], dtype=np.float32) / 10000

with tf.Session(graph=graph) as sess:
    init.run()
    for iteration in range(n_iterations):
        feed_dict = {X: X_train, y: y_train}
        if iteration % 100 == 0:
            print("Iteration {:5}, MSE: {:.4f}".format(
                iteration, mse.eval(feed_dict))) # <= FEED TRAINING DATA
        training_op.run(feed_dict)               # <= FEED TRAINING DATA
    # make the prediction:
    y_pred_val = y_pred.eval(feed_dict={X: X_test}) # <= FEED TEST DATA

y_pred_val
"""
Explanation: Using placeholders
End of explanation
"""
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[], name="x")
    f = tf.square(x) - 3 * x + 1

with tf.Session(graph=graph):
    print(f.eval(feed_dict={x: 5.0}))
"""
Explanation: Exercise 5
5.1) Create a simple graph that computes the function $f(x) = x^2 - 3x + 1$. Define $x$ as a placeholder for a simple scalar value of type float32 (i.e., shape=[], dtype=tf.float32). Create a session and evaluate $f(5)$. You should find 11.0.
5.2) Add an operation that computes the derivative of $f(x)$ with respect to $x$, noted $f'(x)$. Create a session and evaluate $f'(5)$. You should find 7.0. Hint: use tf.gradients().
5.3) Using a MomentumOptimizer, find the value of $x$ that minimizes $f(x)$. You should find $\hat{x}=1.5$. Hint: you need to change x into a Variable. Moreover, the MomentumOptimizer has its own variables that need to be initialized, so don't forget to create an init operation using a tf.global_variables_initializer(), and call it at the start of the session. Try not to peek at the solution below before you have done the exercise! :) Exercise 5 - Solution 5.1) End of explanation """ with graph.as_default(): [fp] = tf.gradients(f, [x]) with tf.Session(graph=graph): print(fp.eval(feed_dict={x: 5.0})) """ Explanation: 5.2) End of explanation """ learning_rate = 0.01 momentum = 0.8 graph = tf.Graph() with graph.as_default(): x = tf.Variable(0.0, name="x") f = tf.square(x) - 3 * x + 1 optimizer = tf.train.MomentumOptimizer(learning_rate, momentum) training_op = optimizer.minimize(f) init = tf.global_variables_initializer() n_iterations = 70 with tf.Session(graph=graph): init.run() for iteration in range(n_iterations): training_op.run() if iteration % 10 == 0: print("x={:.2f}, f(x)={:.2f}".format(x.eval(), f.eval())) """ Explanation: 5.3) End of explanation """ with tf.Session(graph=graph): init.run() print(x.eval()) # x == 0.0 print(f.eval()) # f(0) == 1.0 print(f.eval(feed_dict={x: 5.0})) # use 5.0 instead of the value of x, to compute f(5) print(x.eval()) # x is still 0.0 print(f.eval()) # f(0) is still 1.0 """ Explanation: Note that it's possible to replace the output value of any operation, not just placeholders. So, for example, even though x is now a Variable, you can use a feed_dict to use any value you want, for example to compute f(5.0). Important: this does not affect the variable! 
End of explanation """ graph = tf.Graph() with graph.as_default(): X = tf.placeholder(tf.float32, shape=[None, 1], name="X") y = tf.placeholder(tf.float32, shape=[None, 1], name="y") b = tf.Variable(0.0, name="b") w = tf.Variable(tf.zeros([1, 1]), name="w") y_pred = tf.add(tf.matmul(X, w), b, name="y_pred") # X @ w + b mse = tf.reduce_mean(tf.square(y_pred - y), name="mse") optimizer = tf.train.MomentumOptimizer(learning_rate, momentum) training_op = optimizer.minimize(mse) init = tf.global_variables_initializer() saver = tf.train.Saver() # <= At the very end of the construction phase n_iterations = 500 with tf.Session(graph=graph) as sess: init.run() for iteration in range(n_iterations): if iteration % 100 == 0: print("Iteration {:5}, MSE: {:.4f}".format( iteration, mse.eval(feed_dict={X: X_train, y: y_train}))) training_op.run(feed_dict={X: X_train, y: y_train}) # <= FEED THE DICT saver.save(sess, "./my_life_satisfaction_model") with tf.Session(graph=graph) as sess: saver.restore(sess, "./my_life_satisfaction_model") # make the prediction: y_pred_val = y_pred.eval(feed_dict={X: X_test}) y_pred_val """ Explanation: Saving and Restoring a Model End of explanation """ model_path = "./my_life_satisfaction_model" graph = tf.Graph() with tf.Session(graph=graph) as sess: # restore the graph saver = tf.train.import_meta_graph(model_path + ".meta") saver.restore(sess, model_path) # get references to the tensors we need X = graph.get_tensor_by_name("X:0") y_pred = graph.get_tensor_by_name("y_pred:0") # make the prediction: y_pred_val = y_pred.eval(feed_dict={X: X_test}) y_pred_val """ Explanation: Restoring a Graph End of explanation """
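The MomentumOptimizer used throughout this notebook keeps a velocity accumulator per variable; its documented update (with `use_nesterov` left at the default) is `accumulation = momentum * accumulation + gradient` followed by `variable -= learning_rate * accumulation`. The same rule can be sketched without TensorFlow on the exercise function $f(x) = x^2 - 3x + 1$, recovering the minimizer $\hat{x} = 1.5$:

```python
def f(x):
    return x ** 2 - 3 * x + 1

def fprime(x):
    return 2 * x - 3  # analytic derivative, matching what tf.gradients computes

x, velocity = 0.0, 0.0
learning_rate, momentum = 0.01, 0.8
for _ in range(200):
    velocity = momentum * velocity + fprime(x)  # accumulate the gradient
    x -= learning_rate * velocity               # step against the velocity
print(round(x, 4))  # -> 1.5, the analytic minimizer
```

This is the whole point of autodiff plus optimizers: TensorFlow derives `fprime` for you and applies exactly this kind of update to every trainable variable.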
Alexoner/skynet
notebooks/linear/decisonBoundary.ipynb
mit
xx, yy = np.mgrid[-5:5:.01, -5:5:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
probs = clf.predict_proba(grid)[:, 1].reshape(xx.shape)
"""
Explanation: Next, make a continuous grid of values and evaluate the probability of each (x, y) point in the grid:
End of explanation
"""
f, ax = plt.subplots(figsize=(8, 6))
contour = ax.contourf(xx, yy, probs, 25, cmap="RdBu",
                      vmin=0, vmax=1)
ax_c = f.colorbar(contour)
ax_c.set_label("$P(y = 1)$")
ax_c.set_ticks([0, .25, .5, .75, 1])

ax.scatter(X[100:,0], X[100:, 1], c=y[100:], s=50,
           cmap="RdBu", vmin=-.2, vmax=1.2,
           edgecolor="white", linewidth=1)

ax.set(aspect="equal",
       xlim=(-5, 5), ylim=(-5, 5),
       xlabel="$X_1$", ylabel="$X_2$")
"""
Explanation: Now, plot the probability grid as a contour map and additionally show the test set samples on top of it:
End of explanation
"""
f, ax = plt.subplots(figsize=(8, 6))
ax.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.6)

ax.scatter(X[100:,0], X[100:, 1], c=y[100:], s=50,
           cmap="RdBu", vmin=-.2, vmax=1.2,
           edgecolor="white", linewidth=1)

ax.set(aspect="equal",
       xlim=(-5, 5), ylim=(-5, 5),
       xlabel="$X_1$", ylabel="$X_2$")
"""
Explanation: Logistic regression lets you classify new samples based on any threshold you want, so it doesn't inherently have one "decision boundary." But, of course, a common decision rule to use is p = .5. We can also just draw that contour level using the above code:
End of explanation
"""
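The `mgrid`/`ravel`/`c_` pattern used above is worth sanity-checking: a step of .01 over [-5, 5) yields 1000 samples per axis, so the stacked grid holds a million (x, y) rows, one per pixel of the contour plot:

```python
import numpy as np

xx, yy = np.mgrid[-5:5:.01, -5:5:.01]   # two 2-D coordinate arrays
grid = np.c_[xx.ravel(), yy.ravel()]    # flatten and pair them up column-wise

print(xx.shape)    # (1000, 1000)
print(grid.shape)  # (1000000, 2)
print(grid[0])     # the bottom-left corner, [-5., -5.]
```

`predict_proba` then returns one probability per row, which `.reshape(xx.shape)` folds back into the 2-D layout that `contourf` expects.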
humberto-ortiz/bioinf2017
directed-graphs.ipynb
gpl-3.0
graph = {"forward" : {}, "reverse" : {}}
"""
Explanation: Directed graphs
Humberto Ortiz-Zuazaga
A directed graph $G = (V, E)$, also called a digraph, is a set $V$ of vertices and a set $E$ of directed edges, or edges that proceed from a source vertex to a sink vertex.
Here's a crude diagram of a directed graph:
(1) ---> (2) ---> (3) <--
 \______________________/
Where node 1 has an edge to node 2 and node 3, and node 2 has an edge to node 3.
Directed graphs in python
We can represent these graphs in python by keeping track of forward and reverse edges, so we can find neighbors for any vertex in either direction. Here we'll build an empty graph, using dicts to keep track of the forward and reverse relationships.
End of explanation
"""
graph["forward"][1] = [2]
graph["reverse"][2] = [1]
graph
"""
Explanation: Adding the edge from 1 to 2 requires two steps: adding the forward relationship, and adding the reverse.
End of explanation
"""
graph["forward"][1].append(3)
graph["reverse"][3] = [1]
graph
"""
Explanation: To add the edge between 1 and 3, we have to be careful: we need to append 3 to the neighbor list of 1.
End of explanation
"""
graph["forward"][2] = [3]
graph["reverse"][3].append(2)
graph
"""
Explanation: When can we assign to the neighbor list and when do we have to append? Let's see what happens when we add the edge from 2 to 3.
End of explanation
"""
def make_digraph():
    "Make an empty directed graph"
    pass

def add_edge(digraph, source, sink):
    "Add a directed edge from the source vertex to the sink vertex to the digraph."
    pass
"""
Explanation: Exercises
Write definitions for these two functions.
End of explanation
"""
graph2 = make_digraph()
add_edge(graph2, 1, 2)
add_edge(graph2, 1, 3)
add_edge(graph2, 2, 3)
graph2
"""
Explanation: Tests
After completing the exercise, these commands should produce a graph like the example.
End of explanation
"""
def check_vertex(digraph, vertex):
    "Check if a vertex in a digraph is even or odd."
pass """ Explanation: Even and odd vertices In class we saw that in order to construct Eulerian paths in directed graphs, we need to change the concept of an even or odd vertex. In particular, in a directed graph, a node is even if it has the same number of forward edges leaving it as edges inbound to it. A vertex is odd otherwise. We can write a function to return the number of inbound edges minus the number of outbound edges. End of explanation """ def find_path(digraph): "Find an Eulerian path through digraph." path = [] # Check if G has an Eulerian path or exit # make a copy of G (from now on work on the copy) # Choose a starting vertex # Push starting vertex onto stack # while items remain on the stack: # set current vertex to top of stack # if current vertex has forward (outbound) edges # put neighbor on stack # remove edge from current vertex to neighbor # else # append current vertex to path # pop current vertex from stack return path """ Explanation: To have an Eulerian path, all vertices except for two (zero for an Eulerian cycle) must be even. The odd vertices (if any) must have one more outbound edge than inbound (for the start vertex) and one more inbound than outbound (for the end vertex). Eulerian path in directed graph The algorithm we described for Eulerian paths previously will work for directed graphs with a few small modifications. End of explanation """ def remove_edge(digraph, source, sink): pass def has_path(digraph): "Test if this digraph has an Eulerian path" pass def pick_start(digraph): "Find a suitable starting vertex in digraph" pass """ Explanation: You will also need to implement remove_edge, and functions to check if a path exists, and choose a starting node. End of explanation """ graph3 = make_digraph() add_edge(graph3, 1, 2) add_edge(graph3, 2, 3) add_edge(graph3, 3, 1) add_edge(graph3, 1, 4) find_path(graph3) """ Explanation: Assignment: reimplement find_path Modify your find_path function so it works with directed graphs.
When completed, you should be able to evaluate the following test and get back an Eulerian path such as [1, 2, 3, 1, 4] (the graph has four edges, so any Eulerian walk visits five vertices). End of explanation """
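For readers who want to check their work after attempting the exercises, here is one possible solution sketch for the stubbed-out functions. It assumes the `{"forward": ..., "reverse": ...}` dict representation built above; it is not the only valid answer, and the `has_path` existence check is omitted for brevity.

```python
from copy import deepcopy

def make_digraph():
    "Make an empty directed graph."
    return {"forward": {}, "reverse": {}}

def add_edge(digraph, source, sink):
    "Add a directed edge from the source vertex to the sink vertex."
    digraph["forward"].setdefault(source, []).append(sink)
    digraph["reverse"].setdefault(sink, []).append(source)

def remove_edge(digraph, source, sink):
    "Remove the directed edge from source to sink."
    digraph["forward"][source].remove(sink)
    digraph["reverse"][sink].remove(source)

def check_vertex(digraph, vertex):
    "Return inbound minus outbound edge count; 0 means the vertex is even."
    inbound = len(digraph["reverse"].get(vertex, []))
    outbound = len(digraph["forward"].get(vertex, []))
    return inbound - outbound

def pick_start(digraph):
    "Prefer the odd vertex with one extra outbound edge; else any vertex with outbound edges."
    vertices = set(digraph["forward"]) | set(digraph["reverse"])
    for v in vertices:
        if check_vertex(digraph, v) == -1:
            return v
    for v in vertices:
        if digraph["forward"].get(v):
            return v

def find_path(digraph):
    "Find an Eulerian path through digraph (a stack-based Hierholzer walk)."
    g = deepcopy(digraph)           # work on a copy; the caller's graph survives
    stack = [pick_start(g)]
    path = []
    while stack:
        current = stack[-1]
        if g["forward"].get(current):      # unused outbound edges remain
            neighbor = g["forward"][current][0]
            stack.append(neighbor)
            remove_edge(g, current, neighbor)
        else:                              # dead end: retire this vertex
            path.append(stack.pop())
    return path[::-1]                      # vertices were retired in reverse order

graph3 = make_digraph()
add_edge(graph3, 1, 2)
add_edge(graph3, 2, 3)
add_edge(graph3, 3, 1)
add_edge(graph3, 1, 4)
print(find_path(graph3))   # [1, 2, 3, 1, 4]
```

One design note: `find_path` appends each vertex only once its outbound edges are exhausted, so the list comes out reversed and is flipped at the end; working on a deep copy keeps the caller's graph intact.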
quantumlib/OpenFermion
docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The OpenFermion Developers End of explanation """ try: import fqe except ImportError: !pip install fqe --quiet Print = True from openfermion import FermionOperator, MolecularData from openfermion.utils import hermitian_conjugated import numpy import fqe numpy.set_printoptions(floatmode='fixed', precision=6, linewidth=80, suppress=True) numpy.random.seed(seed=409) !curl -O https://raw.githubusercontent.com/quantumlib/OpenFermion-FQE/master/tests/unittest_data/build_lih_data.py import build_lih_data h1e, h2e, wfn = build_lih_data.build_lih_data('energy') lih_hamiltonian = fqe.get_restricted_hamiltonian(([h1e, h2e])) lihwfn = fqe.Wavefunction([[4, 0, 6]]) lihwfn.set_wfn(strategy='from_data', raw_data={(4, 0): wfn}) if Print: lihwfn.print_wfn() """ Explanation: Hamiltonian Time Evolution and Expectation Value Computation <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/openfermion/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google 
Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> This tutorial describes the FQE's capabilities for Hamiltonian time-evolution and expectation value estimation. Where possible, LiH will be used as an example molecule for the API. End of explanation """ # dummy geometry from openfermion.chem.molecular_data import spinorb_from_spatial from openfermion import jordan_wigner, get_sparse_operator, InteractionOperator, get_fermion_operator h1s, h2s = spinorb_from_spatial(h1e, numpy.einsum("ijlk", -2 * h2e) * 0.5) mol = InteractionOperator(0, h1s, h2s) ham_fop = get_fermion_operator(mol) ham_mat = get_sparse_operator(jordan_wigner(ham_fop)).toarray() from scipy.linalg import expm time = 0.01 evolved1 = lihwfn.time_evolve(time, lih_hamiltonian) if Print: evolved1.print_wfn() evolved2 = fqe.time_evolve(lihwfn, time, lih_hamiltonian) if Print: evolved2.print_wfn() assert numpy.isclose(fqe.vdot(evolved1, evolved2), 1) cirq_wf = fqe.to_cirq(lihwfn) evolve_cirq = expm(-1j * time * ham_mat) @ cirq_wf test_evolve = fqe.from_cirq(evolve_cirq, thresh=1.0E-12) assert numpy.isclose(fqe.vdot(test_evolve, evolved1), 1) """ Explanation: Application of one- and two-body fermionic gates The API for time propagation can be invoked through the fqe namespace or the wavefunction object. End of explanation """ wfn = fqe.Wavefunction([[4, 2, 4]]) wfn.set_wfn(strategy='random') if Print: wfn.print_wfn() diagonal = FermionOperator('0^ 0', -2.0) + \ FermionOperator('1^ 1', -1.7) + \
FermionOperator('2^ 2', -0.7) + \ FermionOperator('3^ 3', -0.55) + \ FermionOperator('4^ 4', -0.1) + \ FermionOperator('5^ 5', -0.06) + \ FermionOperator('6^ 6', 0.5) + \ FermionOperator('7^ 7', 0.3) if Print: print(diagonal) evolved = wfn.time_evolve(time, diagonal) if Print: evolved.print_wfn() """ Explanation: Exact evolution implementation of quadratic Hamiltonians Listed here are examples of evolving these special Hamiltonians. Diagonal Hamiltonian evolution is supported. End of explanation """ norb = 4 h1e = numpy.zeros((norb, norb), dtype=numpy.complex128) for i in range(norb): for j in range(norb): h1e[i, j] += (i+j) * 0.02 h1e[i, i] += i * 2.0 hamil = fqe.get_restricted_hamiltonian((h1e,)) wfn = fqe.Wavefunction([[4, 0, norb]]) wfn.set_wfn(strategy='random') initial_energy = wfn.expectationValue(hamil) print('Initial Energy: {}'.format(initial_energy)) evolved = wfn.time_evolve(time, hamil) final_energy = evolved.expectationValue(hamil) print('Final Energy: {}'.format(final_energy)) """ Explanation: Exact evolution of dense quadratic Hamiltonians is supported. Here is an evolution example using a spin restricted Hamiltonian on a number and spin conserving wavefunction. End of explanation """ norb = 4 h1e = numpy.zeros((2*norb, 2*norb), dtype=numpy.complex128) for i in range(2*norb): for j in range(2*norb): h1e[i, j] += (i+j) * 0.02 h1e[i, i] += i * 2.0 hamil = fqe.get_gso_hamiltonian((h1e,)) wfn = fqe.get_number_conserving_wavefunction(4, norb) wfn.set_wfn(strategy='random') initial_energy = wfn.expectationValue(hamil) print('Initial Energy: {}'.format(initial_energy)) evolved = wfn.time_evolve(time, hamil) final_energy = evolved.expectationValue(hamil) print('Final Energy: {}'.format(final_energy)) """ Explanation: The GSO Hamiltonian is for evolution of quadratic Hamiltonians that are spin broken and number conserving.
End of explanation """ norb = 4 time = 0.001 wfn_spin = fqe.get_spin_conserving_wavefunction(2, norb) hamil = FermionOperator('', 6.0) for i in range(0, 2*norb, 2): for j in range(0, 2*norb, 2): opstring = str(i) + ' ' + str(j + 1) hamil += FermionOperator(opstring, (i+1 + j*2)*0.1 - (i+1 + 2*(j + 1))*0.1j) opstring = str(i) + '^ ' + str(j + 1) + '^ ' hamil += FermionOperator(opstring, (i+1 + j)*0.1 + (i+1 + j)*0.1j) h_noncon = (hamil + hermitian_conjugated(hamil))/2.0 if Print: print(h_noncon) wfn_spin.set_wfn(strategy='random') if Print: wfn_spin.print_wfn() spin_evolved = wfn_spin.time_evolve(time, h_noncon) if Print: spin_evolved.print_wfn() """ Explanation: The BCS Hamiltonian evolves spin conserved and number broken wavefunctions. End of explanation """ norb = 4 wfn = fqe.Wavefunction([[5, 1, norb]]) vij = numpy.zeros((norb, norb, norb, norb), dtype=numpy.complex128) for i in range(norb): for j in range(norb): vij[i, j] += 4*(i % norb + 1)*(j % norb + 1)*0.21 wfn.set_wfn(strategy='random') if Print: wfn.print_wfn() hamil = fqe.get_diagonalcoulomb_hamiltonian(vij) evolved = wfn.time_evolve(time, hamil) if Print: evolved.print_wfn() """ Explanation: Exact Evolution Implementation of Diagonal Coulomb terms End of explanation """ norb = 3 nele = 4 ops = FermionOperator('5^ 1^ 2 0', 3.0 - 1.j) ops += FermionOperator('0^ 2^ 1 5', 3.0 + 1.j) wfn = fqe.get_number_conserving_wavefunction(nele, norb) wfn.set_wfn(strategy='random') wfn.normalize() if Print: wfn.print_wfn() evolved = wfn.time_evolve(time, ops) if Print: evolved.print_wfn() """ Explanation: Exact evolution of individual n-body anti-Hermitian generators End of explanation """ lih_evolved = lihwfn.apply_generated_unitary(time, 'taylor', lih_hamiltonian, accuracy=1.e-8) if Print: lih_evolved.print_wfn() norb = 2 nalpha = 1 nbeta = 1 nele = nalpha + nbeta time = 0.05 h1e = numpy.zeros((norb*2, norb*2), dtype=numpy.complex128) for i in range(2*norb): for j in range(2*norb): h1e[i, j] += (i+j) * 0.02 h1e[i, i]
+= i * 2.0 hamil = fqe.get_general_hamiltonian((h1e,)) spec_lim = [-1.13199078e-03, 6.12720338e+00] wfn = fqe.Wavefunction([[nele, nalpha - nbeta, norb]]) wfn.set_wfn(strategy='random') if Print: wfn.print_wfn() evol_wfn = wfn.apply_generated_unitary(time, 'chebyshev', hamil, spec_lim=spec_lim) if Print: evol_wfn.print_wfn() """ Explanation: Approximate evolution of sums of n-body generators Approximate evolution can be done for dense operators. End of explanation """ rdm1 = lihwfn.expectationValue('i^ j') if Print: print(rdm1) val = lihwfn.expectationValue('5^ 3') if Print: print(2.*val) trdm1 = fqe.expectationValue(lih_evolved, 'i j^', lihwfn) if Print: print(trdm1) val = fqe.expectationValue(lih_evolved, '5 3^', lihwfn) if Print: print(2*val) """ Explanation: API for determining desired expectation values End of explanation """ rdm2 = lihwfn.expectationValue('i^ j k l^') if Print: print(rdm2) rdm2 = fqe.expectationValue(lihwfn, 'i^ j^ k l', lihwfn) if Print: print(rdm2) """ Explanation: 2.B.1 RDMs In addition to the API above, higher-order density matrices and hole density matrices can be calculated. End of explanation """ li_h_energy = lihwfn.expectationValue(lih_hamiltonian) if Print: print(li_h_energy) li_h_energy = fqe.expectationValue(lihwfn, lih_hamiltonian, lihwfn) if Print: print(li_h_energy) """ Explanation: 2.B.2 Hamiltonian expectations (or any expectation values) End of explanation """ op = fqe.get_s2_operator() print(lihwfn.expectationValue(op)) op = fqe.get_sz_operator() print(lihwfn.expectationValue(op)) op = fqe.get_time_reversal_operator() print(lihwfn.expectationValue(op)) op = fqe.get_number_operator() print(lihwfn.expectationValue(op)) """ Explanation: 2.B.3 Symmetry operations End of explanation """
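As a quick sanity check on the symmetry expectation values computed above (these are standard quantum-mechanical identities rather than FQE-specific conventions, and it is assumed here that the stored LiH state is a closed-shell singlet): for an eigenstate of total spin and particle number,

$$ \langle \hat{S}^2 \rangle = S(S+1), \qquad \langle \hat{S}_z \rangle = \tfrac{1}{2}\,(n_\alpha - n_\beta), \qquad \langle \hat{N} \rangle = n_\alpha + n_\beta , $$

so the four-electron singlet LiH wavefunction should give $\langle \hat{S}^2 \rangle = 0$, $\langle \hat{S}_z \rangle = 0$, and $\langle \hat{N} \rangle = 4$.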
ES-DOC/esdoc-jupyterhub
notebooks/cas/cmip6/models/fgoals-g3/ocean.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cas', 'fgoals-g3', 'ocean') """ Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: CAS Source ID: FGOALS-G3 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:44 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. 
Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. 
Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) """ Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) """ Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) """ Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2.5. 
Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. 
Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.7. 
Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. 
Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z* vertical coordinate in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.3.
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.2. 
Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. 
Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) """ Explanation: 18. Advection --&gt; Momentum Properties of lateral momentum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momentum advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momentum advection scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3.
Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different from that of active tracers ? If so, describe. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g.
MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) """ Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momentum eddy viscosity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 23.2.
Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1.
Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ?
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specify the coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.4.
Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specify the coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion ? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specify the coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.3. 
Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 35.3. 
Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embedded in the ocean model (instead of levitating)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40.3. 
Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinction depths for sunlight penetration scheme (if applicable). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation """
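Every cell above follows the same fill-in pattern: select a property with `DOC.set_id(...)`, then record one value (one of the listed valid choices for ENUMs) with `DOC.set_value(...)`. As a minimal, purely illustrative sketch of that pattern — using a hypothetical `StubDoc` class standing in for the real `DOC` object, which is provided by the ES-DOC CMIP6 notebook tooling and is not defined in this excerpt:

```python
# Hypothetical stand-in mimicking the set_id / set_value pattern of the
# ES-DOC DOC object; the real object performs validation against the schema.
class StubDoc:
    def __init__(self):
        self.values = {}
        self.current_id = None

    def set_id(self, prop_id):
        # Select which documented property subsequent values belong to.
        self.current_id = prop_id

    def set_value(self, value):
        # Record a value for the currently selected property.
        self.values.setdefault(self.current_id, []).append(value)


DOC = StubDoc()

# Completing an ENUM property: pick one of the listed valid choices verbatim.
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
DOC.set_value("Freshwater flux")

print(DOC.values)
```

With the real tooling, `set_value` would additionally check the string against the property's valid-choice list and cardinality.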
numb3r33/StumbpleUponChallenge
notebooks/EnsemblingAndTextParsing.ipynb
mit
import pandas as pd import numpy as np import os, sys import re, json from urllib.parse import urlparse from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import Imputer, FunctionTransformer from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import roc_auc_score from sklearn.externals import joblib from sklearn.decomposition import TruncatedSVD from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_selection import chi2, SelectKBest from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import KFold from nltk.stem.snowball import SnowballStemmer from nltk.stem import WordNetLemmatizer from nltk import word_tokenize import xgboost as xgb import warnings warnings.filterwarnings('ignore') basepath = os.path.expanduser('~/Desktop/src/Stumbleupon_classification_challenge/') sys.path.append(os.path.join(basepath, 'src')) np.random.seed(4) from data import load_datasets from models import train_test_split, cross_val_scheme # Initialize Stemmer sns = SnowballStemmer(language='english') train, test, sample_sub = load_datasets.load_dataset() train['is_news'] = train.is_news.fillna(-999) test['is_news'] = test.is_news.fillna(-999) """ Explanation: Objectives * Learn how to parse html. * Create models that capture different aspects of the problem. * How to learn processes in parallel ? 
End of explanation """ def extract_top_level_domain(url): parsed_url = urlparse(url) top_level = parsed_url[1].split('.')[-1] return top_level def get_tlds(urls): return np.array([extract_top_level_domain(url) for url in urls]) train['tlds'] = get_tlds(train.url) test['tlds'] = get_tlds(test.url) ohe = pd.get_dummies(list(train.tlds) + list(test.tlds)) train = pd.concat((train, ohe.iloc[:len(train)]), axis=1) test = pd.concat((test, ohe.iloc[len(train):]), axis=1) class NumericalFeatures(BaseEstimator, TransformerMixin): @staticmethod def url_depth(url): parsed_url = urlparse(url) path = parsed_url.path return len(list(filter(lambda x: len(x)> 0, path.split('/')))) @staticmethod def get_url_depths(urls): return np.array([NumericalFeatures.url_depth(url) for url in urls]) def __init__(self, numerical_features): self.features = numerical_features def fit(self, X, y=None): return self def transform(self, df): df['url_depth'] = self.get_url_depths(df.url) numeric_features = self.features + ['url_depth'] df_numeric = df[numeric_features] return df_numeric """ Explanation: Text Features based on the boiler plate Text Features based on the parsed raw html Numerical features Train different models on different datasets and then use their predictions in the next stage of classifier and predict. 
End of explanation """ params = { 'test_size': 0.2, 'random_state': 2, 'stratify': train.is_news } itrain, itest = train_test_split.tr_ts_split(len(train), **params) X_train = train.iloc[itrain] X_test = train.iloc[itest] y_train = train.iloc[itrain].label y_test = train.iloc[itest].label numeric_features = list(train.select_dtypes(exclude=['object']).columns[1:]) numeric_features.remove('label') pipeline = Pipeline([ ('feature_extractor', NumericalFeatures(numeric_features)), ('imputer', Imputer(strategy='mean')), ('scaler', StandardScaler()), ('model', xgb.XGBClassifier(learning_rate=.08, max_depth=6)) ]) pipeline.fit(X_train, y_train) # cross validation params = { 'n_folds': 5, 'shuffle': True, 'random_state': 3 } scores, mean_score, std_score = cross_val_scheme.cv_scheme(pipeline, X_train, y_train, train.iloc[itrain].is_news, **params) print('CV Scores: %s'%(scores)) print('Mean CV Score: %f'%(mean_score)) print('Std CV Score: %f'%(std_score)) y_preds = pipeline.predict_proba(X_test)[:, 1] print('ROC AUC score on the test set ', roc_auc_score(y_test, y_preds)) joblib.dump(pipeline, os.path.join(basepath, 'data/processed/pipeline_numeric/pipeline_numeric.pkl')) """ Explanation: Split into training and test sets. End of explanation """ train = joblib.load(os.path.join(basepath, 'data/processed/train_raw_content.pkl')) test = joblib.load(os.path.join(basepath, 'data/processed/test_raw_content.pkl')) """ Explanation: Load Textual Features Prepared from raw content End of explanation """ train_json = list(map(json.loads, train.boilerplate)) test_json = list(map(json.loads, test.boilerplate)) train['boilerplate'] = train_json test['boilerplate'] = test_json def get_component(boilerplate, key): """ Get value for a particular key in boilerplate json; if present, return the value, else return an empty string boilerplate: list of boilerplate text in json format key: key for which we want to fetch value e.g. 
body, title and url """ return np.array([bp[key] if key in bp and bp[key] else u'' for bp in boilerplate]) train['body_bp'] = get_component(train.boilerplate, 'body') test['body_bp'] = get_component(test.boilerplate, 'body') train['title_bp'] = get_component(train.boilerplate, 'title') test['title_bp'] = get_component(test.boilerplate, 'title') train['url_component'] = get_component(train.boilerplate, 'url') test['url_component'] = get_component(test.boilerplate, 'url') class LemmaTokenizer(object): def __init__(self): self.wnl = WordNetLemmatizer() def __call__(self, doc): return [self.wnl.lemmatize(t) for t in word_tokenize(doc)] class VarSelect(BaseEstimator, TransformerMixin): def __init__(self, keys): self.keys = keys def fit(self, X, y=None): return self def transform(self, df): return df[self.keys] class StemTokenizer(object): def __init__(self): self.sns = sns def __call__(self, doc): return [self.sns.stem(t) for t in word_tokenize(doc)] def remove_non_alphanumeric(df): return df.replace(r'[^A-Za-z0-9]+', ' ', regex=True) strip_non_words = FunctionTransformer(remove_non_alphanumeric, validate=False) # Lemma Tokenizer pipeline_lemma = Pipeline([ ('strip', strip_non_words), ('union', FeatureUnion([ ('body', Pipeline([ ('var', VarSelect(keys='body_bp')), ('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(), ngram_range=(1, 2), min_df=3, sublinear_tf=True)), ('svd', TruncatedSVD(n_components=100)) ])), ('title', Pipeline([ ('var', VarSelect(keys='title_bp')), ('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(), ngram_range=(1, 2), min_df=3, sublinear_tf=True)), ('svd', TruncatedSVD(n_components=100)) ])), ('url', Pipeline([ ('var', VarSelect(keys='url_component')), ('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(), ngram_range=(1,2), min_df=3, sublinear_tf=True)), ('svd', TruncatedSVD(n_components=50)) ])) ])), ('scaler', MinMaxScaler()), ('selection', SelectKBest(chi2, k=100)), 
('model', LogisticRegression()) ]) params = { 'test_size': 0.2, 'random_state': 2, 'stratify': train.is_news } itrain, itest = train_test_split.tr_ts_split(len(train), **params) features = ['url_component', 'body_bp', 'title_bp'] X_train = train.iloc[itrain][features] X_test = train.iloc[itest][features] y_train = train.iloc[itrain].label y_test = train.iloc[itest].label pipeline_lemma.fit(X_train, y_train) y_preds = pipeline_lemma.predict_proba(X_test)[:, 1] print('AUC score on unseen examples: ', roc_auc_score(y_test, y_preds)) # train on full dataset X = train[features] y = train.label pipeline_lemma.fit(X, y) # save this model to disk joblib.dump(pipeline_lemma, os.path.join(basepath, 'data/processed/pipeline_boilerplate_lemma/model_lemma.pkl')) """ Explanation: Text features from Boilerplate End of explanation """ # Stemming Tokenizer pipeline_stemming = Pipeline([ ('strip', strip_non_words), ('union', FeatureUnion([ ('body', Pipeline([ ('var', VarSelect(keys='body_bp')), ('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(), ngram_range=(1, 2), min_df=3, sublinear_tf=True)), ('svd', TruncatedSVD(n_components=100)) ])), ('title', Pipeline([ ('var', VarSelect(keys='title_bp')), ('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(), ngram_range=(1, 2), min_df=3, sublinear_tf=True)), ('svd', TruncatedSVD(n_components=100)) ])), ('url', Pipeline([ ('var', VarSelect(keys='url_component')), ('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(), ngram_range=(1,2), min_df=3, sublinear_tf=True)), ('svd', TruncatedSVD(n_components=50)) ])) ])), ('scaler', MinMaxScaler()), ('selection', SelectKBest(chi2, k=100)), ('model', LogisticRegression()) ]) params = { 'test_size': 0.2, 'random_state': 2, 'stratify': train.is_news } itrain, itest = train_test_split.tr_ts_split(len(train), **params) features = ['url_component', 'body_bp', 'title_bp'] # boilerplate columns matching the VarSelect keys above X_train = train.iloc[itrain][features] X_test = train.iloc[itest][features] 
y_train = train.iloc[itrain].label y_test = train.iloc[itest].label pipeline_stemming.fit(X_train, y_train) y_preds = pipeline_stemming.predict_proba(X_test)[:, 1] print('AUC score on unseen examples: ', roc_auc_score(y_test, y_preds)) # train on full dataset X = train[features] y = train.label pipeline_stemming.fit(X, y) # save this model to disk joblib.dump(pipeline_stemming, os.path.join(basepath, 'data/processed/pipeline_boilerplate_stem/model_stem.pkl')) """ Explanation: Pipeline involving Stemming End of explanation """ class Blending(object): def __init__(self, models): self.models = models # dict def predict(self, X, X_test, y=None): cv = KFold(len(X), n_folds=3, shuffle=True, random_state=10) dataset_blend_train = np.zeros((X.shape[0], len(self.models.keys()))) dataset_blend_test = np.zeros((X_test.shape[0], len(self.models.keys()))) for index, key in enumerate(self.models.keys()): dataset_blend_test_index = np.zeros((X_test.shape[0], len(cv))) model = self.models[key][1] feature_list = self.models[key][0] print('Training model of type: ', key) for i, (itrain, itest) in enumerate(cv): Xtr = X.iloc[itrain][feature_list] ytr = y.iloc[itrain] Xte = X.iloc[itest][feature_list] yte = y.iloc[itest] model.fit(Xtr, ytr) # refit on the training folds so held-out predictions are out-of-fold y_preds = model.predict_proba(Xte)[:, 1] dataset_blend_train[itest, index] = y_preds dataset_blend_test_index[:, i] = model.predict_proba(X_test)[:, 1] dataset_blend_test[:, index] = dataset_blend_test_index.mean(1) print('\nBlending') clf = LogisticRegression() clf.fit(dataset_blend_train, y) y_submission = clf.predict_proba(dataset_blend_test)[:, 1] y_submission = (y_submission - y_submission.min()) / (y_submission.max() - y_submission.min()) return y_submission def stem_tokens(x): return ' '.join([sns.stem(word) for word in word_tokenize(x)]) def preprocess_string(s): return stem_tokens(s) class Weights(BaseEstimator, TransformerMixin): def __init__(self, weight): self.weight = weight def fit(self, X, y=None): return self def transform(self, X): return 
self.weight * X # load all the models from the disk # pipeline_numeric = joblib.load(os.path.join(basepath, 'data/processed/pipeline_numeric/pipeline_numeric.pkl')) # pipeline_lemma = joblib.load(os.path.join(basepath, 'data/processed/pipeline_boilerplate_lemma/model_lemma.pkl')) # pipeline_stemming = joblib.load(os.path.join(basepath, 'data/processed/pipeline_boilerplate_stem/model_stem.pkl')) pipeline_raw = joblib.load(os.path.join(basepath, 'data/processed/pipeline_raw/model_raw.pkl')) numeric_features = list(train.select_dtypes(exclude=['object']).columns[1:]) + ['url'] numeric_features.remove('label') boilerplate_features = ['body_bp', 'title_bp', 'url_component'] raw_features = ['body', 'title', 'h1', 'h2', 'h3', 'h4', 'span', 'a', 'label_',\ 'meta-title', 'meta-description', 'li'] models = { # 'numeric': [numeric_features, pipeline_numeric], 'boilerplate_lemma': [boilerplate_features, pipeline_lemma], 'boilerplate_stem': [boilerplate_features, pipeline_stemming], 'boilerplate_raw': [raw_features, pipeline_raw] } params = { 'test_size': 0.2, 'random_state': 2, 'stratify': train.is_news } itrain, itest = train_test_split.tr_ts_split(len(train), **params) features = list(boilerplate_features) + list(raw_features) X_train = train.iloc[itrain][features] X_test = train.iloc[itest][features] y_train = train.iloc[itrain].label y_test = train.iloc[itest].label blend = Blending(models) y_blend = blend.predict(X_train, X_test, y_train) print('AUC score after blending ', roc_auc_score(y_test, y_blend)) """ Explanation: Blending End of explanation """ X = train[features] X_test = test[features] y = train.label assert X.shape[1] == X_test.shape[1] blend = Blending(models) predictions = blend.predict(X, X_test, y) """ Explanation: Train on full dataset. End of explanation """ sample_sub['label'] = predictions sample_sub.to_csv(os.path.join(basepath, 'submissions/blend_3.csv'), index=False) """ Explanation: Submissions End of explanation """
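The core of the `Blending` class above is out-of-fold bookkeeping: each base model contributes one meta-feature column, built so that every training row is predicted by a model fit without that row, before a logistic regression combines the columns. A dependency-free sketch of just that bookkeeping — the `fold_mean` "model" here is a trivial stand-in (it predicts the training-fold mean of the labels), not one of the notebook's pipelines:

```python
def kfold_indices(n, n_folds):
    """Contiguous K-fold split: yields (train_idx, test_idx) pairs covering 0..n-1."""
    fold_sizes = [n // n_folds + (1 if i < n % n_folds else 0) for i in range(n_folds)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

def out_of_fold_column(x, y, n_folds=3):
    """One meta-feature column: each row is predicted by a 'model' that never saw it.
    The stand-in model simply predicts the mean label of its training folds."""
    column = [0.0] * len(x)
    for train_idx, test_idx in kfold_indices(len(x), n_folds):
        fold_mean = sum(y[i] for i in train_idx) / len(train_idx)  # "fit"
        for i in test_idx:
            column[i] = fold_mean  # "predict" on the held-out fold only
    return column

y = [0, 1, 0, 1, 1, 1]
col = out_of_fold_column(list(range(6)), y)
print(col)  # [0.75, 0.75, 0.75, 0.75, 0.5, 0.5]
```

Stacking one such column per base model, then fitting a second-stage classifier on the stacked columns, reproduces the structure of `dataset_blend_train` above.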
mne-tools/mne-tools.github.io
0.18/_downloads/db126f84a1b5439712a1d57b1be2255c/plot_time_frequency_global_field_power.ipynb
bsd-3-clause
# Authors: Denis A. Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import somato from mne.baseline import rescale from mne.stats import _bootstrap_ci """ Explanation: Explore event-related dynamics for specific frequency bands The objective is to show you how to explore spectrally localized effects. For this purpose we adapt the method described in [1]_ and use it on the somato dataset. The idea is to track the band-limited temporal evolution of spatial patterns by using the Global Field Power (GFP). We first bandpass filter the signals and then apply a Hilbert transform. To reveal oscillatory activity the evoked response is then subtracted from every single trial. Finally, we rectify the signals prior to averaging across trials by taking the magnitude of the Hilbert transform. Then the GFP is computed as described in [2], using the sum of the squares but without normalization by the rank. Baselining is subsequently applied to make the GFPs comparable between frequencies. The procedure is then repeated for each frequency band of interest and all GFPs are visualized. To estimate uncertainty, non-parametric confidence intervals are computed as described in [3] across channels. The advantage of this method over summarizing the Space x Time x Frequency output of a Morlet Wavelet in frequency bands is relative speed and, more importantly, the clear-cut comparability of the spectral decomposition (the same type of filter is used across all bands). References .. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic view through the skull (1997). Trends in Neuroscience 20 (1), pp. 44-49. .. [2] Engemann D. and Gramfort A. (2015) Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals, vol. 108, 328-342, NeuroImage. .. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016). Cambridge University Press, Chapter 11.2. 
End of explanation """ data_path = somato.data_path() raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif' # let's explore some frequency bands iter_freqs = [ ('Theta', 4, 7), ('Alpha', 8, 12), ('Beta', 13, 25), ('Gamma', 30, 45) ] """ Explanation: Set parameters End of explanation """ # set epoching parameters event_id, tmin, tmax = 1, -1., 3. baseline = None # get the header to extract events raw = mne.io.read_raw_fif(raw_fname, preload=False) events = mne.find_events(raw, stim_channel='STI 014') frequency_map = list() for band, fmin, fmax in iter_freqs: # (re)load the data to save memory raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.pick_types(meg='grad', eog=True) # we just look at gradiometers # bandpass filter and compute Hilbert raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up. l_trans_bandwidth=1, # make sure filter params are the same h_trans_bandwidth=1, # in each band and skip "auto" option. fir_design='firwin') raw.apply_hilbert(n_jobs=1, envelope=False) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6), preload=True) # remove evoked response and get analytic signal (envelope) epochs.subtract_evoked() # for this we need to construct new epochs. 
epochs = mne.EpochsArray( data=np.abs(epochs.get_data()), info=epochs.info, tmin=epochs.tmin) # now average and move on frequency_map.append(((band, fmin, fmax), epochs.average())) """ Explanation: We create average power time courses for each frequency band End of explanation """ fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True) colors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4)) for ((freq_name, fmin, fmax), average), color, ax in zip( frequency_map, colors, axes.ravel()[::-1]): times = average.times * 1e3 gfp = np.sum(average.data ** 2, axis=0) gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0)) ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5) ax.axhline(0, linestyle='--', color='grey', linewidth=2) ci_low, ci_up = _bootstrap_ci(average.data, random_state=0, stat_fun=lambda x: np.sum(x ** 2, axis=0)) ci_low = rescale(ci_low, average.times, baseline=(None, 0)) ci_up = rescale(ci_up, average.times, baseline=(None, 0)) ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3) ax.grid(True) ax.set_ylabel('GFP') ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax), xy=(0.95, 0.8), horizontalalignment='right', xycoords='axes fraction') ax.set_xlim(-1000, 3000) axes.ravel()[-1].set_xlabel('Time [ms]') """ Explanation: Now we can compute the Global Field Power We can track the emergence of spatial patterns compared to baseline for each frequency band, with a bootstrapped confidence interval. We see dominant responses in the Alpha and Beta bands. End of explanation """
paulgradie/SeqPyPlot
main_app/notebooks/correction_experiments.ipynb
gpl-3.0
import pandas as pd import numpy as np import seaborn as sns from random import randint as rand import matplotlib.pyplot as plt %matplotlib inline from sklearn.linear_model import LinearRegression from sklearn.metrics.pairwise import euclidean_distances from scipy.linalg import svd from seqpyplot.container.data_container import DataContainer from seqpyplot.parsers.config_parser import config_parser from pathlib import Path from matplotlib import rcParams rcParams['figure.figsize'] = (10, 10) pd.options.mode.chained_assignment = None """ Explanation: Experiments in heteroskedasticity and bias correction using linear transformations This notebook works through an idea for correcting heteroskedasticity which uses rotation matrices to align samples. Previous iterations of seqpyplot (SPPLOT v0.3 and below) output a scatter plot that plots the expression values of control and treated samples against their means. These plots revealed what appeared to be a bias where one sample tended towards overall higher expression than the other. Given the type of data being analyzed (time series RNA-seq data), it seemed unlikely that there would be such a bias inherent to the experiment. In this notebook, I propose a method for correcting this bias using linear regression and linear transformations to rotate one of the two samples prior to differential expression inference. The steps to performing this transformation are as follows: 1. For a given sample pair, compute the mean. 2. For each sample and its respective mean computed in step 1, perform linear regression to compute a regression coefficient (a line of best fit) through the sample/mean pair as well as a bias. 3. Zero out the sample bias by addition or subtraction across all points. 4. If the regression coefficients do not match, use the coefficients to compute the angle difference. 5. 
Compute a rotation matrix to rotate the data about the origin until they match, resulting in proper alignment between the control and treated samples. References: Rotation Matrices: https://en.wikipedia.org/wiki/Rotation_matrix End of explanation """ def calc_theta(coef1, coef2): "Returns an angle in radians" return np.abs( np.arctan(np.abs(coef1 - coef2) / (1. + (coef1 * coef2))) ) def compute_rot_mat(rad, coef=.5): " Compute a rotation matrix using rad for a given regression coefficient " if coef < 1.0: rotation_matrix = np.array([[np.cos(rad), -np.sin(rad)], [np.sin(rad), np.cos(rad)]]) else: rotation_matrix = np.array([[np.cos(rad), np.sin(rad)], [-np.sin(rad), np.cos(rad)]]) return rotation_matrix """ Explanation: Helper functions End of explanation """ slope1 = 1.1 slope2 = 2.0 line1 = np.array([slope1 * x for x in range(10)]) line2 = np.array([slope2 * x for x in range(10)]) xs = list(range(10)) """ Explanation: Experiment 1: Simple example of line rotation Example of rotating a line around the origin to match the slope of another line End of explanation """ # Plot lines plt.plot(xs, line1, color='black', label='line to rotate'); plt.plot(xs, line2, color='red', linewidth=5, label='stationary'); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--'); plt.annotate(s=('<-- we\'ll rotate this line'), xy=(xs[3]+0.08, line1[3]), rotation=-36) plt.annotate(s=('<-- Direction of rotation'), xy=(xs[3] + 0.75, line1[3]+3.5), rotation=-15) plt.ylim((-1, 10)) plt.xlim((-1, 10)) plt.legend(loc='upper right'); """ Explanation: Original Lines End of explanation """ # Compute angle angle_diff = calc_theta(slope1, slope2) angle_diff # Compute rotation matrix rot_matrix = compute_rot_mat(angle_diff) rot_matrix # rotate line 1 (black line) new_line1 = list() for x, y in zip(xs, line1): # need shape [[#], [#]] (a column vector, so the matrix goes on the left) old_point = np.array([[x], [y]]) new_point = np.dot(rot_matrix, old_point) new_line1.append(new_point) new_line1 = np.squeeze(np.asarray(new_line1)) xs[6],
line1[6] plt.plot(xs, line2, color='red', linewidth=5, alpha=0.7, label='Stationary'); plt.plot(xs, line1, color='black', label='line to rotate') plt.scatter(new_line1[:, 0], new_line1[:, 1], color='black', s=95) plt.plot(new_line1[:, 0], new_line1[:, 1], color='black', linestyle='--', label='rotated line') plt.annotate(s=('Original Line'), xy=(xs[6] + 0.7, line1[6]+0.3)) plt.annotate(s=('<-- Direction of rotation'), xy=(xs[6] - 1.7, line1[6]+0.8), rotation=-36) plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') for (x1, y1), (x2, y2) in zip(zip(xs, line1), new_line1): plt.plot([x1, x2], [y1, y2], linestyle='--', color='black', alpha=0.4) plt.ylim((-1, 10)) plt.xlim((-1, 10)); plt.legend(loc='upper right'); """ Explanation: Rotated lines End of explanation """ config = '../examples/example_config.ini' config_obj = config_parser(config) # load the data container_obj container_obj = DataContainer(config_obj) data, ercc_data = container_obj.parse_input() cols = data.columns cols df = data[['D1_Cont', 'D1_Treat']] df.loc[:, 'mean'] = df.mean(axis=1) df.head() d1 = df[['D1_Cont', 'mean']] d2 = df[['D1_Treat', 'mean']] fig, ax = plt.subplots() d1.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='blue', alpha=0.4) d2.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='red', alpha=0.4); plt.annotate('The bias between samples is clearly seen in this plot!', (500, 3900), fontsize=14) ax.set_title("Raw unnormalized data"); # Quick reset for this cell d1 = df[['D1_Cont', 'mean']] d2 = df[['D1_Treat', 'mean']] # define regression objects regCont = LinearRegression(fit_intercept=True) regTreat = LinearRegression(fit_intercept=True) # fit regression regCont.fit(d1['D1_Cont'].values.reshape(-1, 1), d1['mean'].values.reshape(-1, 1)) regTreat.fit(d2['D1_Treat'].values.reshape(-1, 1), d1['mean'].values.reshape(-1, 1)) print(regCont.coef_, regCont.intercept_) print(regTreat.coef_, regTreat.intercept_) # 
Correct bias d1['D1_Cont'] = d1['D1_Cont'] - regCont.intercept_ d2['D1_Treat'] = d2['D1_Treat'] - regTreat.intercept_ # Plot regression lines fig, ax = plt.subplots() d1.plot('mean', 'D1_Cont', kind='scatter', ax=ax, color='blue', alpha=0.4) d2.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 8000), ylim=(0, 8000), ax=ax, color='red', alpha=0.4) plt.plot([0, 8000], [0.0, regCont.coef_ * 8000], linestyle='--', color='black') plt.plot([0, 8000], [0.0, regTreat.coef_ * 8000], linestyle='--', color='black'); ax.set_title("bias corrected, with best fit lines"); plt.ylim((-500, 8000)) plt.xlim((-500, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); """ Explanation: Experiment 2: Vanilla correction using linear regression on raw expression Data (No TMM normalization) End of explanation """ correction_theta = calc_theta(np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_)) correction_theta # in radians rotation_matrix = compute_rot_mat(correction_theta) rotation_matrix new_treat = np.array([np.dot(rotation_matrix, d2.values[i, :]) for i in range(len(d2.values))]) new_treat d2_cor = d2.copy() d2_cor.loc[:, 'D1_Treat'] = new_treat[:, 0] d2_cor.loc[:, 'mean'] = new_treat[:, 1] fig, ax = plt.subplots() d1.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 20000), ylim=(0, 20000), ax=ax, color='blue', alpha=0.4) d2_cor.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='red', alpha=0.4); ax.set_title("No TMM, linearly transformed") plt.ylim((-1000, 8000)) plt.xlim((-1000, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); """ Explanation: Compute rotation linear transformation End of explanation """ # load the data container_obj config = '../examples/example_config.ini' config_obj = config_parser(config) container_obj = DataContainer(config_obj) data, ercc_data = container_obj.parse_input() data = container_obj.normalize_file_pairs(data) # Single df of normalized data 
normed_df = data[['D1_Cont', 'D1_Treat']].copy() normed_df.loc[:, 'mean'] = normed_df.mean(axis=1) regCont = LinearRegression(fit_intercept=True) regTreat = LinearRegression(fit_intercept=True) regCont.fit(normed_df['D1_Cont'].values.reshape(-1, 1), normed_df['mean'].values.reshape(-1, 1)) regTreat.fit(normed_df['D1_Treat'].values.reshape(-1, 1), normed_df['mean'].values.reshape(-1, 1)) normed_df['D1_Cont'] = normed_df['D1_Cont'] - regCont.intercept_ normed_df['D1_Treat'] = normed_df['D1_Treat'] - regTreat.intercept_ fig, ax = plt.subplots() normed_df.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 5000), ylim=(0, 8000), ax=ax, color='blue', alpha=0.4, label='Control') normed_df.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 8000), ax=ax, color='red', alpha=0.4, label='Treated') # plot regression lines, with color switch! plt.plot([0, 8000], [0.0, regCont.coef_ * 8000], linestyle='--', color='black') plt.plot([0, 4000], [0.0, regTreat.coef_ * 4000], linestyle='--', color='white'); plt.plot([4000, 8000], [regTreat.coef_[0] * 4000.0, regTreat.coef_[0] * 8000], linestyle='--', color='black'); ax.set_title("TMM normalized expression data"); plt.ylim((-500, 8000)) plt.xlim((-500, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); """ Explanation: Success! We have eliminated the sample divergence! Experiment 3: Use TMM before linear transformation There could still be an advantage in normalizing the samples prior to performing the linear transformation.
End of explanation """ correction_theta = calc_theta(np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_)) rotation_matrix = compute_rot_mat(correction_theta, regTreat.coef_) new_treat = np.array([np.dot(rotation_matrix, normed_df[['D1_Treat', 'mean']].values[i, :]) for i in range(len(normed_df))]) corr_df = normed_df.copy() corr_df.loc[:, 'D1_Treat'] = new_treat[:, 0] # corr_df.loc[:, 'mean'] = normed_df['mean'].values corr_df.loc[:, 'mean'] = new_treat[:, 1] fig, ax = plt.subplots() normed_df.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 20000), ylim=(0, 20000), ax=ax, color='blue', alpha=0.4) corr_df.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='red', alpha=0.4, s=10); ax.set_title("With TMM, linearly transformed"); plt.ylim((-500, 15000)) plt.xlim((-500, 15000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); """ Explanation: Compute linear transformation End of explanation """ data_copy = data.copy()[['D1_Cont', 'D1_Treat']] percentiles = [.1, .2, .3, .4, .5, .6, .7, .8, .9, 0.95, 0.99] data_copy.describe(percentiles=percentiles) data_copy2 = data_copy[(data_copy.D1_Cont != 0) & (data_copy.D1_Treat != 0)] (data_copy2.D1_Cont - data_copy2.D1_Treat).abs().describe(percentiles=percentiles) (data_copy2.D1_Cont - data_copy2.D1_Treat).abs().describe(percentiles=percentiles).loc['80%'] data_copy3 = (data_copy2.D1_Cont - data_copy2.D1_Treat).abs() sns.boxplot(data_copy3[data_copy3 < 500]); data_copy2.head() # load the data container_obj config = '../examples/example_config.ini' config_obj = config_parser(config) container_obj = DataContainer(config_obj) data, ercc_data = container_obj.parse_input() data = container_obj.normalize_file_pairs(data) # Single df of normalized data #---------------------------------------------------------- data_copy = data.copy() data_copy.loc[:, 'mean'] = data_copy.mean(axis=1) data_copy = data_copy[(data_copy.D1_Cont != 0) & (data_copy.D1_Treat != 0)] 
data_copy.loc[:, 'abs_diff'] = (data_copy.D1_Cont - data_copy.D1_Treat).abs() cutoff = (data_copy.D1_Cont - data_copy.D1_Treat).abs().describe(percentiles=percentiles).loc['80%'] data_copy = data_copy[data_copy['abs_diff'] < cutoff] regCont = LinearRegression(fit_intercept=True) regTreat = LinearRegression(fit_intercept=True) regCont.fit(data_copy['D1_Cont'].values.reshape(-1, 1), data_copy['mean'].values.reshape(-1, 1)) regTreat.fit(data_copy['D1_Treat'].values.reshape(-1, 1), data_copy['mean'].values.reshape(-1, 1)) #---------------------------------------------------------- normed_df = data.copy() normed_df = normed_df[['D1_Cont', 'D1_Treat']].copy() normed_df.loc[:, 'mean'] = normed_df.mean(axis=1) normed_df['D1_Cont'] = normed_df['D1_Cont'] - regCont.intercept_ normed_df['D1_Treat'] = normed_df['D1_Treat'] - regTreat.intercept_ fig, ax = plt.subplots() normed_df.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 5000), ylim=(0, 8000), ax=ax, color='blue', alpha=0.4, label='Control') normed_df.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 8000), ax=ax, color='red', alpha=0.4, label='Treated') # plot regression lines, with color switch! 
plt.plot([0, 8000], [0.0, regCont.coef_ * 8000], linestyle='--', color='black') plt.plot([0, 4000], [0.0, regTreat.coef_ * 4000], linestyle='--', color='white'); plt.plot([4000, 8000], [regTreat.coef_[0] * 4000.0, regTreat.coef_[0] * 8000], linestyle='--', color='black'); ax.set_title("TMM normalized expression data"); plt.ylim((-500, 8000)) plt.xlim((-500, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); correction_theta = calc_theta(np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_)) rotation_matrix = compute_rot_mat(correction_theta) regCont.coef_, regTreat.coef_ normed_df.head() new_treat = np.array([np.dot(rotation_matrix, normed_df[['D1_Cont', 'mean']].values[i, :]) for i in range(len(normed_df.values))]) rotated = pd.DataFrame(new_treat, columns=['Cont_cor', 'mean_cor'], index=normed_df.index) fig, ax = plt.subplots() rotated.plot('mean_cor', 'Cont_cor', kind='scatter', xlim=(0, 20000), ylim=(0, 20000), ax=ax, color='blue', alpha=0.4) normed_df.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='red', alpha=0.4); ax.set_title("No TMM, linearly transformed") plt.ylim((-1000, 8000)) plt.xlim((-1000, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); """ Explanation: Experiment 4: Before linear regression, remove zeros and remove 80% percentile outliers End of explanation """ # load the data container_obj config = '../examples/example_config.ini' config_obj = config_parser(config) container_obj = DataContainer(config_obj) data, ercc_data = container_obj.parse_input() data = container_obj.normalize_file_pairs(data) # Single df of normalized data #---------------------------------------------------------- data_copy = data.copy() data_copy.loc[:, 'mean'] = data_copy.mean(axis=1) data_copy = data_copy[(data_copy['mean'] > 100) & (data_copy['mean'] < 500)] regCont = LinearRegression(fit_intercept=True) regTreat = LinearRegression(fit_intercept=True) 
regCont.fit(data_copy['D1_Cont'].values.reshape(-1, 1), data_copy['mean'].values.reshape(-1, 1)) regTreat.fit(data_copy['D1_Treat'].values.reshape(-1, 1), data_copy['mean'].values.reshape(-1, 1)) #---------------------------------------------------------- normed_df = data.copy() normed_df = normed_df[['D1_Cont', 'D1_Treat']].copy() normed_df.loc[:, 'mean'] = normed_df.mean(axis=1) normed_df['D1_Cont'] = normed_df['D1_Cont'] - regCont.intercept_ normed_df['D1_Treat'] = normed_df['D1_Treat'] - regTreat.intercept_ fig, ax = plt.subplots() normed_df.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 5000), ylim=(0, 8000), ax=ax, color='blue', alpha=0.4, label='Control') normed_df.plot('mean', 'D1_Treat', kind='scatter', xlim=(0, 5000), ylim=(0, 8000), ax=ax, color='red', alpha=0.4, label='Treated') # plot regression lines, with color switch! plt.plot([0, 8000], [0.0, regCont.coef_ * 8000], linestyle='--', color='black') plt.plot([0, 4000], [0.0, regTreat.coef_ * 4000], linestyle='--', color='white'); plt.plot([4000, 8000], [regTreat.coef_[0] * 4000.0, regTreat.coef_[0] * 8000], linestyle='--', color='black'); ax.set_title("TMM normalized expression data"); plt.ylim((-500, 8000)) plt.xlim((-500, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); correction_theta = calc_theta(np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_)) rotation_matrix = compute_rot_mat(correction_theta) np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_) normed_df.head() new_treat = np.array([np.dot(rotation_matrix, normed_df[['D1_Treat', 'mean']].values[i, :]) for i in range(len(normed_df.values))]) rotated = pd.DataFrame(new_treat, columns=['Treat_cor', 'mean_cor'], index=normed_df.index) fig, ax = plt.subplots() rotated.plot('mean_cor', 'Treat_cor', kind='scatter', xlim=(0, 20000), ylim=(0, 20000), ax=ax, color='blue', alpha=0.4) normed_df.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='red', alpha=0.4); 
ax.set_title("TMM, linearly transformed, full correction") plt.ylim((-1000, 8000)) plt.xlim((-1000, 8000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); """ Explanation: Conclusion Outlier removal doesn't provide sufficient correction. Experiment 5 - Use samples with means within an interval (maybe 200 - 1200?) to compute the regression coefficient. End of explanation """ # load the data container_obj config = '../examples/example_config.ini' config_obj = config_parser(config) container_obj = DataContainer(config_obj) data, ercc_data = container_obj.parse_input() data = container_obj.normalize_file_pairs(data) # Single df of normalized data #---------------------------------------------------------- data_copy = data.copy() data_copy.loc[:, 'mean'] = data_copy.mean(axis=1) data_copy = data_copy[(data_copy['mean'] > 100) & (data_copy['mean'] < 500)] regCont = LinearRegression(fit_intercept=True) regTreat = LinearRegression(fit_intercept=True) regCont.fit(data_copy['D1_Cont'].values.reshape(-1, 1), data_copy['mean'].values.reshape(-1, 1)) regTreat.fit(data_copy['D1_Treat'].values.reshape(-1, 1), data_copy['mean'].values.reshape(-1, 1)) #---------------------------------------------------------- normed_df = data.copy() normed_df = normed_df[['D1_Cont', 'D1_Treat']].copy() normed_df.loc[:, 'mean'] = normed_df.mean(axis=1) normed_df['D1_Cont'] = normed_df['D1_Cont'] - regCont.intercept_ normed_df['D1_Treat'] = normed_df['D1_Treat'] - regTreat.intercept_ correction_theta = calc_theta(np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_)) rotation_matrix = compute_rot_mat(correction_theta) new_treat = np.array([np.dot(rotation_matrix, normed_df[['D1_Treat', 'mean']].values[i, :]) for i in range(len(normed_df.values))]) rotated = pd.DataFrame(new_treat, columns=['Treat_cor', 'mean_cor'], index=normed_df.index) rotated.loc[:, 'mean'] = normed_df['mean'] fig, ax = plt.subplots() rotated.plot('mean', 'Treat_cor', kind='scatter', xlim=(0, 20000), 
ylim=(0, 20000), ax=ax, color='blue', alpha=0.4) normed_df.plot('mean', 'D1_Cont', kind='scatter', xlim=(0, 5000), ylim=(0, 5000), ax=ax, color='red', alpha=0.4); ax.set_title("TMM, linearly transformed") plt.ylim((-1000, 18000)) plt.xlim((-1000, 18000)); plt.axvline(0, linestyle='--') plt.axhline(0, linestyle='--') plt.legend(); correction_theta = calc_theta(np.squeeze(regCont.coef_), np.squeeze(regTreat.coef_)) coefs = list(map(float, [np.squeeze(regTreat.coef_), np.squeeze(regCont.coef_)])) coefs calc_theta(*coefs) normed_df.columns[np.argmin(coefs)] """ Explanation: Experiment 6: Visualization of correction ignoring horizontal rotation shift End of explanation """
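As a sanity check on the geometry used throughout these experiments, the helper logic can be exercised on synthetic points. The angle helper is re-stated so the snippet stands alone, and the rotation direction is written out explicitly instead of being inferred from the coefficient as `compute_rot_mat` does:

```python
import numpy as np

def calc_theta(coef1, coef2):
    # angle in radians between the lines y = coef1*x and y = coef2*x
    return np.abs(np.arctan(np.abs(coef1 - coef2) / (1. + coef1 * coef2)))

def rot(rad, clockwise=False):
    # 2x2 rotation matrix for column vectors
    s = -1.0 if clockwise else 1.0
    return np.array([[np.cos(rad), -s * np.sin(rad)],
                     [s * np.sin(rad), np.cos(rad)]])

# synthetic "sample vs mean" points lying exactly on a line of slope 1.25
x = np.linspace(0.0, 100.0, 200)
pts = np.stack([x, 1.25 * x])                  # shape (2, n)

theta = calc_theta(1.25, 1.0)                  # angle down to the 1:1 line
aligned = rot(theta, clockwise=True) @ pts     # rotate the cloud onto slope 1

slope_after = np.polyfit(aligned[0], aligned[1], 1)[0]
```

Note that the rotation also shifts the first (mean) coordinate, which is the "horizontal rotation shift" Experiment 6 chooses to ignore.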
jmhsi/justin_tinker
data_science/courses/deeplearning1/nbs/dogs_cats_redux.ipynb
apache-2.0
#Verify we are in the lesson1 directory %pwd #Create references to important directories we will use over and over import os, sys current_dir = os.getcwd() LESSON_HOME_DIR = current_dir DATA_HOME_DIR = current_dir+'/data/redux' #Allow relative imports to directories above lesson1/ sys.path.insert(1, os.path.join(sys.path[0], '..')) #import modules from utils import * from vgg16 import Vgg16 #Instantiate plotting tool #In Jupyter notebooks, you will need to run this command before doing any plotting %matplotlib inline """ Explanation: Dogs vs Cat Redux In this tutorial, you will learn how to generate and submit predictions to a Kaggle competition Dogs vs. Cats Redux: Kernels Edition To start you will need to download and unzip the competition data from Kaggle and ensure your directory structure looks like this utils/ vgg16.py utils.py lesson1/ redux.ipynb data/ redux/ train/ cat.437.jpg dog.9924.jpg cat.1029.jpg dog.4374.jpg test/ 231.jpg 325.jpg 1235.jpg 9923.jpg You can download the data files from the competition page here or you can download them from the command line using the Kaggle CLI.
You should launch your notebook inside the lesson1 directory cd lesson1 jupyter notebook End of explanation """ #Create directories %cd $DATA_HOME_DIR %mkdir valid %mkdir results %mkdir -p sample/train %mkdir -p sample/test %mkdir -p sample/valid %mkdir -p sample/results %mkdir -p test/unknown %cd $DATA_HOME_DIR/train g = glob('*.jpg') shuf = np.random.permutation(g) for i in range(2000): os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i]) from shutil import copyfile g = glob('*.jpg') shuf = np.random.permutation(g) for i in range(200): copyfile(shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i]) %cd $DATA_HOME_DIR/valid g = glob('*.jpg') shuf = np.random.permutation(g) for i in range(50): copyfile(shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i]) """ Explanation: Action Plan Create Validation and Sample sets Rearrange image files into their respective directories Finetune and Train model Generate predictions Validate predictions Submit predictions to Kaggle Create validation set and sample End of explanation """ #Divide cat/dog images into separate directories %cd $DATA_HOME_DIR/sample/train %mkdir cats %mkdir dogs %mv cat.*.jpg cats/ %mv dog.*.jpg dogs/ %cd $DATA_HOME_DIR/sample/valid %mkdir cats %mkdir dogs %mv cat.*.jpg cats/ %mv dog.*.jpg dogs/ %cd $DATA_HOME_DIR/valid %mkdir cats %mkdir dogs %mv cat.*.jpg cats/ %mv dog.*.jpg dogs/ %cd $DATA_HOME_DIR/train %mkdir cats %mkdir dogs %mv cat.*.jpg cats/ %mv dog.*.jpg dogs/ # Create single 'unknown' class for test set %cd $DATA_HOME_DIR/test %mv *.jpg unknown/ """ Explanation: Rearrange image files into their respective directories End of explanation """ %cd $DATA_HOME_DIR #Set path to sample/ path if desired path = DATA_HOME_DIR + '/' #'/sample/' test_path = DATA_HOME_DIR + '/test/' #We use all the test data results_path=DATA_HOME_DIR + '/results/' train_path=path + '/train/' valid_path=path + '/valid/' #import Vgg16 helper class vgg = Vgg16() #Set constants. 
You can experiment with no_of_epochs to improve the model batch_size=64 no_of_epochs=3 #Finetune the model batches = vgg.get_batches(train_path, batch_size=batch_size) val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2) vgg.finetune(batches) #Not sure if we set this for all fits vgg.model.optimizer.lr = 0.01 #Notice we are passing in the validation dataset to the fit() method #For each epoch we test our model against the validation set latest_weights_filename = None for epoch in range(no_of_epochs): print "Running epoch: %d" % epoch vgg.fit(batches, val_batches, nb_epoch=1) latest_weights_filename = 'ft%d.h5' % epoch vgg.model.save_weights(results_path+latest_weights_filename) print "Completed %s fit operations" % no_of_epochs """ Explanation: Finetuning and Training End of explanation """ batches, preds = vgg.test(test_path, batch_size = batch_size*2) #For every image, vgg.test() generates two probabilities #based on how we've ordered the cats/dogs directories. #It looks like column one is cats and column two is dogs print preds[:5] filenames = batches.filenames print filenames[:5] #You can verify the column ordering by viewing some images from PIL import Image Image.open(test_path + filenames[2]) #Save our test results arrays so we can use them again later save_array(results_path + 'test_preds.dat', preds) save_array(results_path + 'filenames.dat', filenames) """ Explanation: Generate Predictions Let's use our new model to make predictions on the test dataset End of explanation """ vgg.model.load_weights(results_path+latest_weights_filename) val_batches, probs = vgg.test(valid_path, batch_size = batch_size) filenames = val_batches.filenames expected_labels = val_batches.classes #0 or 1 #Round our predictions to 0/1 to generate labels our_predictions = probs[:,0] our_labels = np.round(1-our_predictions) from keras.preprocessing import image #Helper function to plot images by index in the validation set #Plots is a helper function in utils.py def 
plots_idx(idx, titles=None): plots([image.load_img(valid_path + filenames[i]) for i in idx], titles=titles) #Number of images to view for each visualization task n_view = 4 #1. A few correct labels at random correct = np.where(our_labels==expected_labels)[0] print "Found %d correct labels" % len(correct) idx = permutation(correct)[:n_view] plots_idx(idx, our_predictions[idx]) #2. A few incorrect labels at random incorrect = np.where(our_labels!=expected_labels)[0] print "Found %d incorrect labels" % len(incorrect) idx = permutation(incorrect)[:n_view] plots_idx(idx, our_predictions[idx]) #3a. The images we were most confident were cats, and are actually cats correct_cats = np.where((our_labels==0) & (our_labels==expected_labels))[0] print "Found %d confident correct cats labels" % len(correct_cats) most_correct_cats = np.argsort(our_predictions[correct_cats])[::-1][:n_view] plots_idx(correct_cats[most_correct_cats], our_predictions[correct_cats][most_correct_cats]) #3b. The images we were most confident were dogs, and are actually dogs correct_dogs = np.where((our_labels==1) & (our_labels==expected_labels))[0] print "Found %d confident correct dogs labels" % len(correct_dogs) most_correct_dogs = np.argsort(our_predictions[correct_dogs])[:n_view] plots_idx(correct_dogs[most_correct_dogs], our_predictions[correct_dogs][most_correct_dogs]) #4a. The images we were most confident were cats, but are actually dogs incorrect_cats = np.where((our_labels==0) & (our_labels!=expected_labels))[0] print "Found %d incorrect cats" % len(incorrect_cats) if len(incorrect_cats): most_incorrect_cats = np.argsort(our_predictions[incorrect_cats])[::-1][:n_view] plots_idx(incorrect_cats[most_incorrect_cats], our_predictions[incorrect_cats][most_incorrect_cats]) #4b. 
The images we were most confident were dogs, but are actually cats incorrect_dogs = np.where((our_labels==1) & (our_labels!=expected_labels))[0] print "Found %d incorrect dogs" % len(incorrect_dogs) if len(incorrect_dogs): most_incorrect_dogs = np.argsort(our_predictions[incorrect_dogs])[:n_view] plots_idx(incorrect_dogs[most_incorrect_dogs], our_predictions[incorrect_dogs][most_incorrect_dogs]) #5. The most uncertain labels (ie those with probability closest to 0.5). most_uncertain = np.argsort(np.abs(our_predictions-0.5)) plots_idx(most_uncertain[:n_view], our_predictions[most_uncertain]) """ Explanation: Validate Predictions Keras' fit() function conveniently shows us the value of the loss function, and the accuracy, after every epoch ("epoch" refers to one full run through all training examples). The most important metrics for us to look at are for the validation set, since we want to check for over-fitting. Tip: with our first model we should try to overfit before we start worrying about how to reduce over-fitting - there's no point even thinking about regularization, data augmentation, etc if you're still under-fitting! (We'll be looking at these techniques shortly). As well as looking at the overall metrics, it's also a good idea to look at examples of each of: 1. A few correct labels at random 2. A few incorrect labels at random 3. The most correct labels of each class (ie those with highest probability that are correct) 4. The most incorrect labels of each class (ie those with highest probability that are incorrect) 5. The most uncertain labels (ie those with probability closest to 0.5). Let's see what we can learn from these examples. (In general, this is a particularly useful technique for debugging problems in the model. However, since this model is so simple, there may not be too much to learn at this stage.) 
Calculate predictions on validation set, so we can find correct and incorrect examples: End of explanation """ from sklearn.metrics import confusion_matrix cm = confusion_matrix(expected_labels, our_labels) """ Explanation: Perhaps the most common way to analyze the result of a classification model is to use a confusion matrix. Scikit-learn has a convenient function we can use for this purpose: End of explanation """ plot_confusion_matrix(cm, val_batches.class_indices) """ Explanation: We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for datasets with a larger number of categories). End of explanation """ #Load our test predictions from file preds = load_array(results_path + 'test_preds.dat') filenames = load_array(results_path + 'filenames.dat') #Grab the dog prediction column isdog = preds[:,1] print "Raw Predictions: " + str(isdog[:5]) print "Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)]) print "Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)]) """ Explanation: Submit Predictions to Kaggle! Here's the format Kaggle requires for new submissions: imageId,isDog 1242, .3984 3947, .1000 4539, .9082 2345, .0000 Kaggle wants the imageId followed by the probability of the image being a dog. Kaggle uses a metric called Log Loss to evaluate your submission. 
End of explanation """ #Visualize Log Loss when True value = 1 #y-axis is log loss, x-axis is probability that label = 1 #As you can see Log Loss increases rapidly as we approach 0 #But increases slowly as our predicted probability gets closer to 1 import matplotlib.pyplot as plt import numpy as np from sklearn.metrics import log_loss x = [i*.0001 for i in range(1,10000)] y = [log_loss([1],[[i*.0001,1-(i*.0001)]],eps=1e-15) for i in range(1,10000,1)] plt.plot(x, y) plt.axis([-.05, 1.1, -.8, 10]) plt.title("Log Loss when true label = 1") plt.xlabel("predicted probability") plt.ylabel("log loss") plt.show() #So to play it safe, we use a sneaky trick to round down our edge predictions #Swap all ones with .95 and all zeros with .05 isdog = isdog.clip(min=0.05, max=0.95) #Extract imageIds from the filenames in our test/unknown directory filenames = batches.filenames ids = np.array([int(f[8:f.find('.')]) for f in filenames]) """ Explanation: Log Loss doesn't support probability values of 0 or 1--they are undefined (and we have many). Fortunately, Kaggle helps us by offsetting our 0s and 1s by a very small value. So if we upload our submission now we will have lots of .99999999 and .000000001 values. This seems good, right? Not so. There is an additional twist due to how log loss is calculated--log loss rewards predictions that are confident and correct (p=.9999,label=1), but it punishes predictions that are confident and wrong far more (p=.0001,label=1). See visualization below. End of explanation """ subm = np.stack([ids,isdog], axis=1) subm[:5] %cd $DATA_HOME_DIR submission_file_name = 'submission1.csv' np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='') from IPython.display import FileLink %cd $LESSON_HOME_DIR FileLink('data/redux/'+submission_file_name) """ Explanation: Here we join the two columns into an array of [imageId, isDog] End of explanation """
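The effect of the 0.05/0.95 clip can also be checked with a few plain numbers. This is a pure-Python sketch of the per-image term of log loss, using the same 1e-15 offset the cells above assume Kaggle applies:

```python
import math

def per_image_log_loss(p, eps=1e-15):
    # loss for one image whose true label is "dog" (1),
    # given a predicted dog-probability p
    p = min(max(p, eps), 1 - eps)
    return -math.log(p)

confident_wrong = per_image_log_loss(0.0)    # raw 0 on a dog: ~34.5
clipped_wrong = per_image_log_loss(0.05)     # clipped 0 on a dog: ~3.0
confident_right = per_image_log_loss(1.0)    # raw 1 on a dog: ~0
clipped_right = per_image_log_loss(0.95)     # clipped 1 on a dog: ~0.05
```

Clipping costs roughly 0.05 per confidently correct image but saves roughly 31.5 per confidently wrong one, which is why the trick pays off.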
olihit/Defensive-prgramming
DefensiveProgramming_3.ipynb
mit
def test_range_overlap(): assert range_overlap([(-3.0, 5.0), (0.0, 4.5), (-1.5, 2.0)]) == (0.0, 2.0) assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0) assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0) """ Explanation: # Defensive programming (2) We have seen the basic idea that we can insert assert statements into code, to check that the results are what we expect, but how can we test software more fully? Can doing this help us avoid bugs in the first place? One possible approach is test driven development. Many people think this reduces the number of bugs in software as it is written, but evidence for this in the sciences is somewhat limited as it is not always easy to say what the right answer should be before writing the software. Having said that, the tests involved in test driven development are certainly useful even if some of them are written after the software. We will look at a new (and quite difficult) problem, finding the overlap between ranges of numbers. For example, these could be the dates that different sensors were running, and you need to find the date ranges where all sensors recorded data before running further analysis. <img src="python-overlapping-ranges.svg"> Start off by imagining you have a working function range_overlap that takes a list of tuples. Write some assert statements that would check if the answer from this function is correct. Put these in a function. Think of different cases and about edge cases (which may show a subtle bug). End of explanation """ def test_range_overlap_no_overlap(): assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None """ Explanation: But what if there is no overlap? What if they just touch? End of explanation """ def test_range_overlap_one_range(): assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0) """ Explanation: What about the case of a single range? 
End of explanation """ def range_overlap(ranges): # Return common overlap among a set of [low, high] ranges. lowest = -1000.0 highest = 1000.0 for (low, high) in ranges: lowest = max(lowest, low) highest = min(highest, high) return (lowest, highest) """ Explanation: Then write a solution - one possible one is below. End of explanation """ test_range_overlap() test_range_overlap_no_overlap() test_range_overlap_one_range() """ Explanation: And test it... End of explanation """ def pairs_overlap(rangeA, rangeB): # Check if A starts after B ends and # A ends before B starts. If both are # false, there is an overlap. # We are assuming (0.0 1.0) and # (1.0 2.0) do not overlap. If these should # overlap swap >= for > and <= for <. overlap = not ((rangeA[0] >= rangeB[1]) or (rangeA[1] <= rangeB[0])) return overlap def find_overlap(rangeA, rangeB): # Return the overlap between range # A and B if pairs_overlap(rangeA, rangeB): low = max(rangeA[0], rangeB[0]) high = min(rangeA[1], rangeB[1]) return (low, high) else: return None def range_overlap(ranges): # Return common overlap among a set of # [low, high] ranges. if len(ranges) == 1: # Special case of one range - # overlaps with itself return(ranges[0]) elif len(ranges) == 2: # Just return from find_overlap return find_overlap(ranges[0], ranges[1]) else: # Range of A, B, C is the # range of range(B,C) with # A, etc. Do this by recursion... overlap = find_overlap(ranges[-1], ranges[-2]) if overlap is not None: # Chop off the end of ranges and # replace with the overlap ranges = ranges[:-2] ranges.append(overlap) # Now run again, with the smaller list. return range_overlap(ranges) else: return None test_range_overlap() test_range_overlap_one_range() test_range_overlap_no_overlap() """ Explanation: Should we add to the tests? Can you write a version with fewer bugs? My attempt is below. End of explanation """
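The closing question (can you write a version with fewer bugs?) also admits a non-recursive answer. One possible sketch, checked against the same cases the tests earlier in this notebook use:

```python
def range_overlap_iterative(ranges):
    # Intersect all [low, high] ranges in a single pass.
    lowest, highest = ranges[0]
    for low, high in ranges[1:]:
        lowest = max(lowest, low)
        highest = min(highest, high)
    if lowest >= highest:    # empty or merely touching -> no overlap
        return None
    return (lowest, highest)

assert range_overlap_iterative([(0.0, 1.0)]) == (0.0, 1.0)
assert range_overlap_iterative([(-3.0, 5.0), (0.0, 4.5), (-1.5, 2.0)]) == (0.0, 2.0)
assert range_overlap_iterative([(0.0, 1.0), (5.0, 6.0)]) is None
assert range_overlap_iterative([(0.0, 1.0), (1.0, 2.0)]) is None
```

Seeding the bounds from the first range, rather than from magic numbers like -1000.0, removes one of the hidden assumptions in the first attempt.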
slundberg/shap
notebooks/tabular_examples/model_agnostic/Iris classification with scikit-learn.ipynb
mit
import sklearn
# explicit submodule imports so sklearn.neighbors, sklearn.svm and
# sklearn.linear_model resolve in the cells below
import sklearn.neighbors
import sklearn.svm
import sklearn.linear_model
from sklearn.model_selection import train_test_split
import numpy as np
import shap
import time

X_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)

# rather than use the whole training set to estimate expected values, we could summarize with
# a set of weighted kmeans, each weighted by the number of points they represent. But this dataset
# is so small we don't worry about it
#X_train_summary = shap.kmeans(X_train, 50)

def print_accuracy(f):
    print("Accuracy = {0}%".format(100*np.sum(f(X_test) == Y_test)/len(Y_test)))
    time.sleep(0.5) # to let the print get out before any progress bars

shap.initjs()
"""
Explanation: Iris classification with scikit-learn
Here we use the well-known Iris species dataset to illustrate how SHAP can explain the output of many different model types, from k-nearest neighbors, to neural networks. This dataset is very small, with only 150 samples. We use a random set of 130 for training and 20 for testing the models. Because this is a small dataset with only a few features, we use the entire training dataset for the background. In problems with more features we would want to pass only the median of the training dataset, or weighted k-medians. While we only have a few samples, the prediction problem is fairly easy and all methods achieve perfect accuracy. What's interesting is how different methods sometimes rely on different sets of features for their predictions.
Load the data
End of explanation
"""
knn = sklearn.neighbors.KNeighborsClassifier()
knn.fit(X_train, Y_train)

print_accuracy(knn.predict)
"""
Explanation: K-nearest neighbors
End of explanation
"""
explainer = shap.KernelExplainer(knn.predict_proba, X_train)
shap_values = explainer.shap_values(X_test.iloc[0,:])
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test.iloc[0,:])
"""
Explanation: Explain a single prediction from the test set
End of explanation
"""
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
"""
Explanation: Explain all the predictions in the test set
End of explanation
"""
svc_linear = sklearn.svm.SVC(kernel='linear', probability=True)
svc_linear.fit(X_train, Y_train)

print_accuracy(svc_linear.predict)

# explain all the predictions in the test set
explainer = shap.KernelExplainer(svc_linear.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
"""
Explanation: Support vector machine with a linear kernel
End of explanation
"""
svc_rbf = sklearn.svm.SVC(kernel='rbf', probability=True)
svc_rbf.fit(X_train, Y_train)

print_accuracy(svc_rbf.predict)

# explain all the predictions in the test set
explainer = shap.KernelExplainer(svc_rbf.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
"""
Explanation: Support vector machine with a radial basis function kernel
End of explanation
"""
linear_lr = sklearn.linear_model.LogisticRegression()
linear_lr.fit(X_train, Y_train)

print_accuracy(linear_lr.predict)

# explain all the predictions in the test set
explainer = shap.KernelExplainer(linear_lr.predict_proba, X_train)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test)
"""
Explanation: Logistic regression
End of explanation
"""
import sklearn.tree
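The introduction above mentions summarizing a larger background set with weighted k-means rather than passing every training row to the explainer; SHAP ships a `shap.kmeans` helper for this. As a rough sketch of the idea using only scikit-learn and NumPy (an assumption on my part, meant to mirror what that helper does):

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize_background(X, k, seed=0):
    # Cluster the background data and weight each centroid by the
    # number of points it represents.
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    weights = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_, weights

rng = np.random.RandomState(0)
X_bg = rng.rand(120, 4)  # stand-in for a larger X_train
centers, weights = summarize_background(X_bg, 5)
```

The (centers, weights) pair can then stand in for the full training set as the explainer's background, at a fraction of the cost of evaluating the model on every background row.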
dtree = sklearn.tree.DecisionTreeClassifier(min_samples_split=2) dtree.fit(X_train, Y_train) print_accuracy(dtree.predict) # explain all the predictions in the test set explainer = shap.KernelExplainer(dtree.predict_proba, X_train) shap_values = explainer.shap_values(X_test) shap.force_plot(explainer.expected_value[0], shap_values[0], X_test) """ Explanation: Decision tree End of explanation """ from sklearn.ensemble import RandomForestClassifier rforest = RandomForestClassifier(n_estimators=100, max_depth=None, min_samples_split=2, random_state=0) rforest.fit(X_train, Y_train) print_accuracy(rforest.predict) # explain all the predictions in the test set explainer = shap.KernelExplainer(rforest.predict_proba, X_train) shap_values = explainer.shap_values(X_test) shap.force_plot(explainer.expected_value[0], shap_values[0], X_test) """ Explanation: Random forest End of explanation """ from sklearn.neural_network import MLPClassifier nn = MLPClassifier(solver='lbfgs', alpha=1e-1, hidden_layer_sizes=(5, 2), random_state=0) nn.fit(X_train, Y_train) print_accuracy(nn.predict) # explain all the predictions in the test set explainer = shap.KernelExplainer(nn.predict_proba, X_train) shap_values = explainer.shap_values(X_test) shap.force_plot(explainer.expected_value[0], shap_values[0], X_test) """ Explanation: Neural network End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/cnrm-cerfacs/cmip6/models/cnrm-esm2-1/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: CNRM-CERFACS Source ID: CNRM-ESM2-1 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:52 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. 
Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. 
Key Properties --&gt; Conservation --&gt; Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5.
Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2
Carbon dioxide forcing
12.1.
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). 
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. 
Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. 
citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
tensorflow/docs-l10n
site/ko/guide/keras/transfer_learning.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ import numpy as np import tensorflow as tf from tensorflow import keras """ Explanation: Transfer learning and fine-tuning <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/keras/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td> </table> Setup End of explanation """ layer = keras.layers.Dense(3) layer.build((None, 4))  # Create the weights print("weights:", len(layer.weights)) print("trainable_weights:", len(layer.trainable_weights)) print("non_trainable_weights:", len(layer.non_trainable_weights)) """ Explanation: Introduction Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem.
For instance, features from a model that has learned to identify raccoons may be useful to kick-start a model meant to identify tanukis. Transfer learning is usually done for tasks where your dataset has too little data to train a full-scale model from scratch. The most common incarnation of transfer learning in the context of deep learning is the following workflow: Take layers from a previously trained model. Freeze them, so as to avoid destroying any of the information they contain during future training rounds. Add some new, trainable layers on top of the frozen layers. They will learn to turn the old features into predictions on a new dataset. Train the new layers on your dataset. A last, optional step is fine-tuning, which consists of unfreezing the entire model you obtained above (or part of it) and re-training it on the new data with a very low learning rate. This can potentially achieve meaningful improvements by incrementally adapting the pretrained features to the new data. First, we will go over the Keras trainable API in detail, which underlies most transfer learning and fine-tuning workflows. Then, we'll demonstrate the typical workflow by taking a model pretrained on the ImageNet dataset and retraining it on the Kaggle "cats vs dogs" classification dataset. This is adapted from Deep Learning with Python and the 2016 blog post "building powerful image classification models using very little data". Freezing layers: understanding the trainable attribute Layers and models have three weight attributes: weights is the list of all weight variables of the layer. trainable_weights is the list of those that are meant to be updated (via gradient descent) to minimize the loss during training. non_trainable_weights is the list of those that aren't meant to be trained. Typically, they are updated by the model during the forward pass. Example: the Dense layer has 2 trainable weights (kernel and bias) End of explanation """ layer = keras.layers.BatchNormalization() layer.build((None, 4))  # Create the weights print("weights:", len(layer.weights)) print("trainable_weights:", len(layer.trainable_weights)) print("non_trainable_weights:", len(layer.non_trainable_weights)) """ Explanation: In general, all weights are trainable weights. The only built-in layer that has non-trainable weights is the BatchNormalization layer, which uses non-trainable weights to keep track of the mean and variance of its inputs during training. To learn how to use non-trainable weights in your own custom layers, see the guide to writing new layers from scratch.
Example: the BatchNormalization layer has 2 trainable weights and 2 non-trainable weights End of explanation """ layer = keras.layers.Dense(3) layer.build((None, 4))  # Create the weights layer.trainable = False  # Freeze the layer print("weights:", len(layer.weights)) print("trainable_weights:", len(layer.trainable_weights)) print("non_trainable_weights:", len(layer.non_trainable_weights)) """ Explanation: Layers and models also feature a boolean attribute trainable, whose value can be changed. Setting layer.trainable to False moves all of the layer's weights from trainable to non-trainable. This is called "freezing" the layer: the state of a frozen layer won't be updated during training (either when training with fit() or when training with any custom loop that relies on trainable_weights to apply gradient updates). Example: setting trainable to False End of explanation """ # Make a model with 2 layers layer1 = keras.layers.Dense(3, activation="relu") layer2 = keras.layers.Dense(3, activation="sigmoid") model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2]) # Freeze the first layer layer1.trainable = False # Keep a copy of the weights of layer1 for later reference initial_layer1_weights_values = layer1.get_weights() # Train the model model.compile(optimizer="adam", loss="mse") model.fit(np.random.random((2, 3)), np.random.random((2, 3))) # Check that the weights of layer1 have not changed during training final_layer1_weights_values = layer1.get_weights() np.testing.assert_allclose( initial_layer1_weights_values[0], final_layer1_weights_values[0] ) np.testing.assert_allclose( initial_layer1_weights_values[1], final_layer1_weights_values[1] ) """ Explanation: When a trainable weight becomes non-trainable, its value is no longer updated during training.
End of explanation """ inner_model = keras.Sequential( [ keras.Input(shape=(3,)), keras.layers.Dense(3, activation="relu"), keras.layers.Dense(3, activation="relu"), ] ) model = keras.Sequential( [keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation="sigmoid"),] ) model.trainable = False # Freeze the outer model assert inner_model.trainable == False # All layers in `model` are now frozen assert inner_model.layers[0].trainable == False # `trainable` is propagated recursively """ Explanation: Do not confuse the layer.trainable attribute with the argument training in layer.__call__() (which controls whether the layer should run its forward pass in inference mode or training mode). For more information, see the Keras FAQ. Recursive setting of the trainable attribute If you set trainable = False on a model or on any layer that has sublayers, all child layers become non-trainable as well. Example: End of explanation """ import tensorflow_datasets as tfds tfds.disable_progress_bar() train_ds, validation_ds, test_ds = tfds.load( "cats_vs_dogs", # Reserve 10% for validation and 10% for test split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"], as_supervised=True,  # Include labels ) print("Number of training samples: %d" % tf.data.experimental.cardinality(train_ds)) print( "Number of validation samples: %d" % tf.data.experimental.cardinality(validation_ds) ) print("Number of test samples: %d" % tf.data.experimental.cardinality(test_ds)) """ Explanation: The typical transfer-learning workflow This leads us to how a typical transfer learning workflow can be implemented in Keras: Instantiate a base model and load pre-trained weights into it. Freeze all layers in the base model by setting trainable = False. Create a new model on top of the output of one (or several) layers from the base model. Train your new model on your new dataset. Note that an alternative, more lightweight workflow could also be: Instantiate a base model and load pre-trained weights into it. Run your new dataset through it and record the output of one (or several) layers from the base model. This is called feature extraction. Use that output as input data for a new, smaller model. A key advantage of this second workflow is that you only run the base model once on your data, rather than once per epoch of training, so it's a lot faster and cheaper. An issue with this second workflow, though, is that it doesn't allow you to dynamically modify the input data of your new model during training, which is required when doing data augmentation, for instance. Transfer learning is typically used for tasks where your new dataset has too little data to train a full-scale model from scratch, and in such scenarios data augmentation is very important. So in what follows, we will focus on the first workflow.
Here's what the first workflow looks like in Keras. First, instantiate a base model with pre-trained weights. python base_model = keras.applications.Xception( weights='imagenet', # Load weights pre-trained on ImageNet. input_shape=(150, 150, 3), include_top=False) # Do not include the ImageNet classifier at the top. Then, freeze the base model. python base_model.trainable = False Create a new model on top. ```python inputs = keras.Input(shape=(150, 150, 3)) We make sure that the base_model is running in inference mode here, by passing training=False. This is important for fine-tuning, as you will learn in a few paragraphs. x = base_model(inputs, training=False) Convert features of shape base_model.output_shape[1:] to vectors x = keras.layers.GlobalAveragePooling2D()(x) A Dense classifier with a single unit (binary classification) outputs = keras.layers.Dense(1)(x) model = keras.Model(inputs, outputs) ``` Train the model on new data. python model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=[keras.metrics.BinaryAccuracy()]) model.fit(new_dataset, epochs=20, callbacks=..., validation_data=...) Fine-tuning Once your model has converged on the new data, you can try to unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate. This is an optional last step that can potentially give you incremental improvements, but it could also potentially lead to quick overfitting, so keep that in mind. It is critical to only do this step after the model with frozen layers has been trained to convergence. If you mix randomly-initialized trainable layers with trainable layers that hold pre-trained features, the randomly-initialized layers will cause very large gradient updates during training, which will destroy your pre-trained features. It's also critical to use a very low learning rate at this stage, because you are training a much larger model than in the first round of training, on a dataset that is typically very small. As a result, you are at risk of overfitting very quickly if you apply large weight updates. Here, you only want to readapt the pretrained weights in an incremental way. This is how to implement fine-tuning of the whole base model:
```python Unfreeze the base model base_model.trainable = True It's important to recompile your model after you make any changes to the trainable attribute of any inner layer, so that your changes are taken into account model.compile(optimizer=keras.optimizers.Adam(1e-5),  # Very low learning rate loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=[keras.metrics.BinaryAccuracy()]) Train end-to-end. Be careful to stop before you overfit! model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...) ``` Important note about compile() and trainable Calling compile() on a model is meant to "freeze" the behavior of that model. This implies that the trainable attribute values at the time the model is compiled should be preserved throughout the lifetime of that model, until compile is called again. Hence, if you change any trainable value, make sure to call compile() again on your model for your changes to be taken into account. Important notes about the BatchNormalization layer Many image models contain BatchNormalization layers. That layer is a special case on every imaginable count. Here are a few things to keep in mind. BatchNormalization contains 2 non-trainable weights that get updated during training. These are the variables tracking the mean and variance of the inputs. When you set bn_layer.trainable = False, the BatchNormalization layer will run in inference mode and will not update its mean and variance statistics. This is not the case for other layers in general, as weight trainability and inference/training modes are two orthogonal concepts. But the two are tied in the case of the BatchNormalization layer. When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise, the updates applied to the non-trainable weights will suddenly destroy what the model has learned. You'll see this pattern in action in the end-to-end example at the end of this guide. Transfer learning and fine-tuning with a custom training loop If instead of fit() you are using your own low-level training loop, the workflow stays essentially the same. You should be careful to only take into account the list model.trainable_weights when applying gradient updates. ```python Create base model base_model = keras.applications.Xception( weights='imagenet', input_shape=(150, 150, 3), include_top=False) Freeze base model base_model.trainable = False Create new model on top. 
inputs = keras.Input(shape=(150, 150, 3)) x = base_model(inputs, training=False) x = keras.layers.GlobalAveragePooling2D()(x) outputs = keras.layers.Dense(1)(x) model = keras.Model(inputs, outputs) loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) optimizer = keras.optimizers.Adam() Iterate over the batches of a dataset. for inputs, targets in new_dataset: # Open a GradientTape. with tf.GradientTape() as tape: # Forward pass. predictions = model(inputs) # Compute the loss value for this batch. loss_value = loss_fn(targets, predictions) # Get gradients of loss wrt the *trainable* weights. gradients = tape.gradient(loss_value, model.trainable_weights) # Update the weights of the model. optimizer.apply_gradients(zip(gradients, model.trainable_weights)) ``` Likewise for fine-tuning. An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset To solidify these concepts, let's walk through a concrete end-to-end transfer learning and fine-tuning example. We will load the Xception model, pre-trained on ImageNet, and use it on the Kaggle "cats vs. dogs" classification dataset. Getting the data First, let's fetch the cats vs. dogs dataset using TFDS. If you have your own dataset, you'll probably want to use the utility tf.keras.preprocessing.image_dataset_from_directory to generate similar labeled dataset objects from a set of images on disk filed into class-specific folders. Transfer learning is most useful when working with very small datasets. To keep our dataset small, we will use 40% of the original training data (25,000 images) for training, 10% for validation, and 10% for testing. End of explanation """ import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(train_ds.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image) plt.title(int(label)) plt.axis("off") """ Explanation: These are the first 9 images in the training dataset. As you can see, they're all different sizes. End of explanation """ size = (150, 150) train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y)) validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y)) test_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y)) """ Explanation: We can also see that label 1 is "dog" and label 0 is "cat". Standardizing the data Our raw images come in a variety of sizes, and each pixel consists of 3 integer values between 0 and 255 (RGB level values). This isn't a great fit for feeding a neural network. We need to do 2 things:
Standardize to a fixed image size. We pick 150x150.

Normalize pixel values between -1 and 1. We'll do this using a Normalization layer as part of the model itself.

In general, it's a good practice to develop models that take raw data as input, as opposed to models that take already-preprocessed data. The reason is that, if your model expects preprocessed data, any time you export your model to use it elsewhere (in a web browser, in a mobile app), you'll need to reimplement the exact same preprocessing pipeline. This gets very tricky very quickly, so we should do the least possible amount of preprocessing before hitting the model.

Here, we'll do image resizing in the data pipeline (because a deep neural network can only process contiguous batches of data), and we'll do the input value scaling as part of the model, when we create it.

Let's resize images to 150x150:
End of explanation
"""

batch_size = 32

train_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)
validation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)
test_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)

"""
Explanation: Besides, let's batch the data and use caching and prefetching to optimize loading speed.
End of explanation
"""

from tensorflow import keras
from tensorflow.keras import layers

data_augmentation = keras.Sequential(
    [layers.RandomFlip("horizontal"), layers.RandomRotation(0.1),]
)

"""
Explanation: Using random data augmentation

When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting.
End of explanation
"""

import numpy as np

for images, labels in train_ds.take(1):
    plt.figure(figsize=(10, 10))
    first_image = images[0]
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        augmented_image = data_augmentation(
            tf.expand_dims(first_image, 0), training=True
        )
        plt.imshow(augmented_image[0].numpy().astype("int32"))
        plt.title(int(labels[0]))
        plt.axis("off")

"""
Explanation: Let's visualize what the first image of the first batch looks like after various random transformations:
End of explanation
"""

base_model = keras.applications.Xception(
    weights="imagenet",  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False,
)  # Do not include the ImageNet classifier at the top.
# Freeze the base_model
base_model.trainable = False

# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs)  # Apply random data augmentation

# Pre-trained Xception weights require that input be scaled
# from (0, 255) to a range of (-1., +1.), the rescaling layer
# outputs: `(inputs * scale) + offset`
scale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)
x = scale_layer(x)

# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)  # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

model.summary()

"""
Explanation: Build a model

Now let's build a model that follows the blueprint explained earlier. Note that:

We add a Rescaling layer to scale input values (initially in the [0, 255] range) to the [-1, 1] range.

We add a Dropout layer before the classification layer, for regularization.

We make sure to pass training=False when calling the base model, so that it runs in inference mode and the batchnorm statistics don't get updated even after we unfreeze the base model for fine-tuning.
End of explanation
"""

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 20
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)

"""
Explanation: Train the top layer
End of explanation
"""

# Unfreeze the base_model. Note that it keeps running in inference mode
# since we passed `training=False` when calling it. This means that
# the batchnorm layers will not update their batch statistics.
# This prevents the batchnorm layers from undoing all the training
# we've done so far.
base_model.trainable = True
model.summary()

model.compile(
    optimizer=keras.optimizers.Adam(1e-5),  # Low learning rate
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 10
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)

"""
Explanation: Do a round of fine-tuning of the entire model

Finally, let's unfreeze the base model and train the entire model end-to-end with a low learning rate. Importantly, although the base model becomes trainable, it is still running in inference mode, since we passed training=False when calling it while building the model. This means that the batch normalization layers inside won't update their batch statistics. If they did, they would wreak havoc on the representations learned by the model so far.
End of explanation
"""
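To make the BatchNormalization point above concrete, here is a minimal pure-numpy sketch of the moving-statistics update that such a layer performs only in training mode. This is an illustration, not the Keras implementation, and the momentum value is an assumed default:

```python
import numpy as np

def bn_update_stats(batch, moving_mean, moving_var, momentum=0.99):
    """Sketch of how a batch-norm layer's two non-trainable weights
    (moving mean and moving variance) drift toward each batch's
    statistics during training. In inference mode (training=False)
    this update is simply skipped, which is why the guide keeps the
    base model in inference mode during fine-tuning."""
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = momentum * moving_mean + (1 - momentum) * batch_mean
    new_var = momentum * moving_var + (1 - momentum) * batch_var
    return new_mean, new_var

# One batch with mean 2.0 nudges the stored mean from 0.0 toward 2.0.
m, v = bn_update_stats(np.array([[1.0], [3.0]]), np.zeros(1), np.ones(1))
```

Run repeatedly on shifted data, these statistics would drift away from what the frozen weights were trained against, which is the "sudden destruction" the guide warns about.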
OxES/k2sc
notebooks/lightkurve.ipynb
gpl-3.0
import numpy as np import matplotlib import matplotlib as mpl import lightkurve as lk import k2sc from k2sc.standalone import k2sc_lc from astropy.io import fits %pylab inline --no-import-all matplotlib.rcParams['image.origin'] = 'lower' matplotlib.rcParams['figure.figsize']=(10.0,10.0) #(6.0,4.0) matplotlib.rcParams['font.size']=16 #10 matplotlib.rcParams['savefig.dpi']= 300 #72 colours = mpl.rcParams['axes.prop_cycle'].by_key()['color'] import warnings warnings.filterwarnings('ignore') print(lk.__version__) print(k2sc.__version__) """ Explanation: K2SC Lightkurve Interface Let's go through how you would use k2sc with its lightkurve interface to detrend the light curve of WASP-55. First, we import some things! End of explanation """ lc = lk.search_lightcurve('EPIC 212300977')[1].download() lc = lc.remove_nans() lc = lc[lc.quality==0] """ Explanation: Reading in data. Let's search MAST for the long-cadence light curve file of WASP-55 using the lightkurve API, and do some very basic filtering for data quality. End of explanation """ lc.__class__ = k2sc_lc """ Explanation: Let's now try K2SC! As a quick hack for now, let's just clobber the lightkurve object class to our k2sc standalone. End of explanation """ lc.k2sc() """ Explanation: Now we run with default values! The tqdm progress bar will show a percentage of the maximum iterations of the differential evolution optimizer, but it will usually finish early. End of explanation """ fig = plt.figure(figsize=(12.0,8.0)) plt.plot(lc.time.value,lc.flux.value,'.',label="Uncorrected") detrended = lc.corr_flux-lc.tr_time + np.nanmedian(lc.tr_time) plt.plot(lc.time.value,detrended.value,'.',label="K2SC") plt.legend() plt.xlabel('BJD') plt.ylabel('Flux') plt.title('WASP-55',y=1.01) """ Explanation: Now we plot! See how the k2sc lightcurve has such better quality than the uncorrected data. 
Careful with astropy units - flux and time are dimensionful quantities in lightkurve 2.0, so we have to use .value to render them as numbers. End of explanation """ extras = {'CORR_FLUX':lc.corr_flux.value, 'TR_TIME':lc.tr_time.value, 'TR_POSITION':lc.tr_position.value} out = lc.to_fits(extra_data=extras,path='test.fits',overwrite=True) """ Explanation: Now we save the data. End of explanation """
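The detrending arithmetic used above (subtract the time trend, then add its median back so the flux stays at its original level) can be checked on its own with made-up numpy arrays standing in for lc.corr_flux and lc.tr_time:

```python
import numpy as np

# Hypothetical stand-ins for the k2sc outputs lc.corr_flux and lc.tr_time;
# the values are invented, only the arithmetic matters here.
corr_flux = np.array([100.0, 101.0, 99.0, 100.5])
tr_time = np.array([1.0, 2.0, 3.0, 4.0])

# Remove the time-trend component, then restore the original flux level
# by adding the trend's median back in.
detrended = corr_flux - tr_time + np.nanmedian(tr_time)
```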
DJCordhose/ai
notebooks/2019_tf/rnn-add-example.ipynb
mit
# Adapted from # https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py import warnings warnings.filterwarnings('ignore') %matplotlib inline %pylab inline import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) print(tf.__version__) # let's see what compute devices we have available, hopefully a GPU sess = tf.Session() devices = sess.list_devices() for d in devices: print(d.name) # a small sanity check, does tf seem to work ok? hello = tf.constant('Hello TF!') print(sess.run(hello)) from tensorflow import keras print(keras.__version__) """ Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/2019_tf/rnn-add-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ class CharacterTable(object): """Given a set of characters: + Encode them to a one hot integer representation + Decode the one hot integer representation to their character output + Decode a vector of probabilities to their character output """ def __init__(self, chars): """Initialize character table. # Arguments chars: Characters that can appear in the input. """ self.chars = sorted(set(chars)) self.char_indices = dict((c, i) for i, c in enumerate(self.chars)) self.indices_char = dict((i, c) for i, c in enumerate(self.chars)) def encode(self, C, num_rows): """One hot encode given string C. # Arguments num_rows: Number of rows in the returned one hot encoding. This is used to keep the # of rows for each data the same. """ x = np.zeros((num_rows, len(self.chars))) for i, c in enumerate(C): x[i, self.char_indices[c]] = 1 return x def decode(self, x, calc_argmax=True): if calc_argmax: x = x.argmax(axis=-1) return ''.join(self.indices_char[x] for x in x) class colors: ok = '\033[92m' fail = '\033[91m' close = '\033[0m' # Parameters for the model and dataset. 
TRAINING_SIZE = 50000 DIGITS = 3 # REVERSE = True REVERSE = False # Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of # int is DIGITS. MAXLEN = DIGITS + 1 + DIGITS # All the numbers, plus sign and space for padding. chars = '0123456789+ ' ctable = CharacterTable(chars) questions = [] expected = [] seen = set() print('Generating data...') while len(questions) < TRAINING_SIZE: f = lambda: int(''.join(np.random.choice(list('0123456789')) for i in range(np.random.randint(1, DIGITS + 1)))) a, b = f(), f() # Skip any addition questions we've already seen # Also skip any such that x+Y == Y+x (hence the sorting). key = tuple(sorted((a, b))) if key in seen: continue seen.add(key) # Pad the data with spaces such that it is always MAXLEN. q = '{}+{}'.format(a, b) query = q + ' ' * (MAXLEN - len(q)) ans = str(a + b) # Answers can be of maximum size DIGITS + 1. ans += ' ' * (DIGITS + 1 - len(ans)) if REVERSE: # Reverse the query, e.g., '12+345 ' becomes ' 543+21'. (Note the # space used for padding.) query = query[::-1] questions.append(query) expected.append(ans) print('Total addition questions:', len(questions)) questions[0] print('Vectorization...') x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=np.bool) y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=np.bool) for i, sentence in enumerate(questions): x[i] = ctable.encode(sentence, MAXLEN) for i, sentence in enumerate(expected): y[i] = ctable.encode(sentence, DIGITS + 1) len(x[0]) len(questions[0]) questions[0] """ Explanation: Step 1: Generate sample equations End of explanation """ x[0] """ Explanation: Input is encoded as one-hot, 7 digits times 12 possibilities End of explanation """ y[0] expected[0] # Shuffle (x, y) in unison as the later parts of x will almost all be larger # digits. 
indices = np.arange(len(y)) np.random.shuffle(indices) x = x[indices] y = y[indices] """ Explanation: Same for output, but at most 4 digits End of explanation """ # Explicitly set apart 10% for validation data that we never train over. split_at = len(x) - len(x) // 10 (x_train, x_val) = x[:split_at], x[split_at:] (y_train, y_val) = y[:split_at], y[split_at:] print('Training Data:') print(x_train.shape) print(y_train.shape) print('Validation Data:') print(x_val.shape) print(y_val.shape) """ Explanation: Step 2: Training/Validation Split End of explanation """ # input shape: 7 digits, each being 0-9, + or space (12 possibilities) MAXLEN, len(chars) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM, GRU, SimpleRNN, Dense, RepeatVector # Try replacing LSTM, GRU, or SimpleRNN. # RNN = LSTM RNN = SimpleRNN # should be enough since we do not have long sequences and only local dependencies # RNN = GRU HIDDEN_SIZE = 128 BATCH_SIZE = 128 model = Sequential() # encoder model.add(RNN(units=HIDDEN_SIZE, input_shape=(MAXLEN, len(chars)))) # latent space encoding_dim = 32 model.add(Dense(units=encoding_dim, activation='relu', name="encoder")) # decoder: have 4 temporal outputs one for each of the digits of the results model.add(RepeatVector(DIGITS + 1)) # return_sequences=True tells it to keep all 4 temporal outputs, not only the final one (we need all four digits for the results) model.add(RNN(units=HIDDEN_SIZE, return_sequences=True)) model.add(Dense(name='classifier', units=len(chars), activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() """ Explanation: Step 3: Create Model End of explanation """ # input one-hot x_val[0] # output "one-hot" scores model.predict(np.array([x_val[0]])) # output decoded by only showing highest score for digit model.predict_classes(np.array([x_val[0]])) """ Explanation: Before training lets look at sample input and output End of 
explanation """ %%time # Train the model each generation and show predictions against the validation # dataset. merged_losses = { "loss": [], "val_loss": [] } for iteration in range(1, 50): print() print('-' * 50) print('Iteration', iteration) iteration_history = model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=1, validation_data=(x_val, y_val)) merged_losses["loss"].append(iteration_history.history["loss"]) merged_losses["val_loss"].append(iteration_history.history["val_loss"]) # Select 10 samples from the validation set at random so we can visualize # errors. for i in range(10): ind = np.random.randint(0, len(x_val)) rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])] preds = model.predict_classes(rowx, verbose=0) q = ctable.decode(rowx[0]) correct = ctable.decode(rowy[0]) guess = ctable.decode(preds[0], calc_argmax=False) print('Q', q[::-1] if REVERSE else q, end=' ') print('T', correct, end=' ') if correct == guess: print(colors.ok + '☑' + colors.close, end=' ') else: print(colors.fail + '☒' + colors.close, end=' ') print(guess) import matplotlib.pyplot as plt plt.ylabel('loss') plt.xlabel('epoch') plt.yscale('log') plt.plot(merged_losses['loss'], 'b') plt.plot(merged_losses['val_loss'], 'r') plt.legend(['loss', 'validation loss']) plt.plot() """ Explanation: Step 4: Train End of explanation """
xpmanoj/content
HW0_solutions.ipynb
mit
x = [10, 20, 30, 40, 50]
for item in x:
    print "Item is ", item

"""
Explanation: Homework 0 Due Tuesday, September 10 (but no submission is required)
Welcome to CS109 / STAT121 / AC209 / E-109 (http://cs109.org/). In this class, we will be using a variety of tools that will require some initial configuration. To ensure everything goes smoothly moving forward, we will set up the majority of those tools in this homework. While some of this will likely be dull, doing it now will enable us to do more exciting work in the weeks that follow without getting bogged down in further software configuration. This homework will not be graded, however it is essential that you complete it on time since it will enable us to set up your accounts. You do not have to hand anything in, with the exception of filling out the online survey.
Class Survey, Piazza, and Introduction
Class Survey
Please complete the mandatory course survey located here. It should only take a few moments of your time. Once you fill in the survey we will sign you up to the course forum on Piazza and the dropbox system that you will use to hand in the homework. It is imperative that you fill out the survey on time as we use the provided information to sign you up for these services.
Piazza
Go to Piazza and sign up for the class using your Harvard e-mail address. You will use Piazza as a forum for discussion, to find team members, to arrange appointments, and to ask questions. Piazza should be your primary form of communication with the staff. Use the staff e-mail (staff@cs109.org) only for individual requests, e.g., to excuse yourself from a mandatory guest lecture. All readings, homeworks, and project descriptions will be announced on Piazza first.
Introduction
Once you are signed up to the Piazza course forum, introduce yourself to your classmates and course staff with a follow-up post in the introduction thread.
Include your name/nickname, your affiliation, why you are taking this course, and tell us something interesting about yourself (e.g., an industry job, an unusual hobby, past travels, or a cool project you did, etc.). Also tell us whether you have experience with data science.
Programming expectations
All the assignments and labs for this class will use Python and, for the most part, the browser-based IPython notebook format you are currently viewing. Knowledge of Python is not a prerequisite for this course, provided you are comfortable learning on your own as needed. While we have strived to make the programming component of this course straightforward, we will not devote much time to teaching programming or Python syntax. Basically, you should feel comfortable with:
How to look up Python syntax on Google and StackOverflow.
Basic programming concepts like functions, loops, arrays, dictionaries, strings, and if statements.
How to learn new libraries by reading documentation.
Asking questions on StackOverflow or Piazza.
There are many online tutorials to introduce you to scientific python programming. Here is one that is very nice. Lectures 1-4 are most relevant to this class.
Getting Python
You will be using Python throughout the course, including many popular 3rd party Python libraries for scientific computing. Anaconda is an easy-to-install bundle of Python and most of these libraries. We recommend that you use Anaconda for this course.
Please visit this page and follow the instructions to set up Python
Hello, Python
The IPython notebook is an application to build interactive computational notebooks. You'll be using them to complete labs and homework.
Once you've set up Python, please <a href=https://raw.github.com/cs109/content/master/HW0.ipynb download="HW0.ipynb">download this page</a>, and open it with IPython by typing
ipython notebook &lt;name_of_downloaded_file&gt;
For the rest of the assignment, use your local copy of this page, running on IPython.
Notebooks are composed of many "cells", which can contain text (like this one), or code (like the one below). Double click on the cell below, and evaluate it by clicking the "play" button above, or by hitting shift + enter
End of explanation
"""

#IPython is what you are using now to run the notebook
import IPython
print "IPython version:      %6.6s (need at least 1.0)" % IPython.__version__

# Numpy is a library for working with Arrays
import numpy as np
print "Numpy version:        %6.6s (need at least 1.7.1)" % np.__version__

# SciPy implements many different numerical algorithms
import scipy as sp
print "SciPy version:        %6.6s (need at least 0.12.0)" % sp.__version__

# Pandas makes working with data tables easier
import pandas as pd
print "Pandas version:       %6.6s (need at least 0.11.0)" % pd.__version__

# Module for plotting
import matplotlib
print "Matplotlib version:   %6.6s (need at least 1.2.1)" % matplotlib.__version__

# SciKit Learn implements several Machine Learning algorithms
import sklearn
print "Scikit-Learn version: %6.6s (need at least 0.13.1)" % sklearn.__version__

# Requests is a library for getting data from the Web
import requests
print "requests version:     %6.6s (need at least 1.2.3)" % requests.__version__

# Networkx is a library for working with networks
import networkx as nx
print "NetworkX version:     %6.6s (need at least 1.7)" % nx.__version__

#BeautifulSoup is a library to parse HTML and XML documents
import BeautifulSoup
print "BeautifulSoup version: %6.6s (need at least 3.2)" % BeautifulSoup.__version__

#MrJob is a library to run map reduce jobs on Amazon's computers
import mrjob
print "Mr Job version:       %6.6s (need at least 0.4)" % mrjob.__version__
#Pattern has lots of tools for working with data from the internet import pattern print "Pattern version: %6.6s (need at least 2.6)" % pattern.__version__ """ Explanation: Python Libraries We will be using a several different libraries throughout this course. If you've successfully completed the installation instructions, all of the following statements should run. End of explanation """ #this line prepares IPython for working with matplotlib %matplotlib inline # this actually imports matplotlib import matplotlib.pyplot as plt x = np.linspace(0, 10, 30) #array of 30 points from 0 to 10 y = np.sin(x) z = y + np.random.normal(size=30) * .2 plt.plot(x, y, 'ro-', label='A sine wave') plt.plot(x, z, 'b-', label='Noisy sine') plt.legend(loc = 'lower right') plt.xlabel("X axis") plt.ylabel("Y axis") """ Explanation: If any of these libraries are missing or out of date, you will need to install them and restart IPython Hello matplotlib The notebook integrates nicely with Matplotlib, the primary plotting package for python. This should embed a figure of a sine wave: End of explanation """ print "Make a 3 row x 4 column array of random numbers" x = np.random.random((3, 4)) print x print print "Add 1 to every element" x = x + 1 print x print print "Get the element at row 1, column 2" print x[1, 2] print # The colon syntax is called "slicing" the array. print "Get the first row" print x[0, :] print print "Get every 2nd column of the first row" print x[0, ::2] print """ Explanation: If that last cell complained about the %matplotlib line, you need to update IPython to v1.0, and restart the notebook. See the installation page Hello Numpy The Numpy array processing library is the basis of nearly all numerical computing in Python. Here's a 30 second crash course. 
For more details, consult Chapter 4 of Python for Data Analysis, or the Numpy User's Guide
End of explanation
"""

#your code here
print "Max is ", x.max()
print "Min is ", x.min()
print "Mean is ", x.mean()

"""
Explanation: Print the maximum, minimum, and mean of the array. This does not require writing a loop. In the code cell below, type x.m&lt;TAB&gt;, to find built-in operations for common array statistics like this
End of explanation
"""

#your code here
print x.max(axis=1)

"""
Explanation: Call the x.max function again, but use the axis keyword to print the maximum of each row in x.
End of explanation
"""

x = np.random.binomial(500, .5)
print "number of heads:", x

"""
Explanation: Here's a way to quickly simulate 500 "fair" coin tosses (where the probability of getting Heads is 50%, or 0.5)
End of explanation
"""

#your code here
# 3 ways to run the simulations

# loop
heads = []
for i in range(500):
    heads.append(np.random.binomial(500, .5))

# "list comprehension"
heads = [np.random.binomial(500, .5) for i in range(500)]

# pure numpy
heads = np.random.binomial(500, .5, size=500)

histogram = plt.hist(heads, bins=10)

heads.shape

"""
Explanation: Repeat this simulation 500 times, and use the plt.hist() function to plot a histogram of the number of Heads (1s) in each simulation
End of explanation
"""

"""
Function
--------
simulate_prizedoor

Generate a random array of 0s, 1s, and 2s, representing
hiding a prize between door 0, door 1, and door 2

Parameters
----------
nsim : int
    The number of simulations to run

Returns
-------
sims : array
    Random array of 0s, 1s, and 2s

Example
-------
>>> print simulate_prizedoor(3)
array([0, 0, 2])
"""
def simulate_prizedoor(nsim):
    #compute here
    return answer
#your code here
def simulate_prizedoor(nsim):
    return np.random.randint(0, 3, (nsim))

"""
Explanation: The Monty Hall Problem
Here's a fun and perhaps surprising statistical riddle, and a good way to get some practice writing python functions
In a gameshow, contestants try to
guess which of 3 closed doors contains a cash prize (goats are behind the other two doors). Of course, the odds of choosing the correct door are 1 in 3. As a twist, the host of the show occasionally opens a door after a contestant makes his or her choice. This door is always one of the two the contestant did not pick, and is also always one of the goat doors (note that it is always possible to do this, since there are two goat doors). At this point, the contestant has the option of keeping his or her original choice, or switching to the other unopened door. The question is: is there any benefit to switching doors? The answer surprises many people who haven't heard the question before.
We can answer the problem by running simulations in Python. We'll do it in several parts.
First, write a function called simulate_prizedoor. This function will simulate the location of the prize in many games -- see the detailed specification below:
End of explanation
"""

"""
Function
--------
simulate_guess

Return any strategy for guessing which door a prize is behind. This
could be a random strategy, one that always guesses 2, whatever.

Parameters
----------
nsim : int
    The number of simulations to generate guesses for

Returns
-------
guesses : array
    An array of guesses. Each guess is a 0, 1, or 2

Example
-------
>>> print simulate_guess(5)
array([0, 0, 0, 0, 0])
"""
#your code here
def simulate_guess(nsim):
    return np.zeros(nsim, dtype=np.int)

"""
Explanation: Next, write a function that simulates the contestant's guesses for nsim simulations. Call this function simulate_guess.
The specs:
End of explanation
"""

"""
Function
--------
goat_door

Simulate the opening of a "goat door" that doesn't contain the prize,
and is different from the contestant's guess

Parameters
----------
prizedoors : array
    The door that the prize is behind in each simulation
guesses : array
    The door that the contestant guessed in each simulation

Returns
-------
goats : array
    The goat door that is opened for each simulation. Each item is 0, 1, or 2, and is different
    from both prizedoors and guesses

Examples
--------
>>> print goat_door(np.array([0, 1, 2]), np.array([1, 1, 1]))
>>> array([2, 2, 0])
"""
#your code here
def goat_door(prizedoors, guesses):

    #strategy: generate random answers, and
    #keep updating until they satisfy the rule
    #that they aren't a prizedoor or a guess
    result = np.random.randint(0, 3, prizedoors.size)
    while True:
        bad = (result == prizedoors) | (result == guesses)
        if not bad.any():
            return result
        result[bad] = np.random.randint(0, 3, bad.sum())

"""
Explanation: Next, write a function, goat_door, to simulate randomly revealing one of the goat doors that a contestant didn't pick.
End of explanation
"""

"""
Function
--------
switch_guess

The strategy that always switches a guess after the goat door is opened

Parameters
----------
guesses : array
    Array of original guesses, for each simulation
goatdoors : array
    Array of revealed goat doors for each simulation

Returns
-------
The new door after switching.
Should be different from both guesses and goatdoors

Examples
--------
>>> print switch_guess(np.array([0, 1, 2]), np.array([1, 2, 1]))
>>> array([2, 0, 0])
"""
#your code here
def switch_guess(guesses, goatdoors):
    result = np.zeros(guesses.size)
    switch = {(0, 1): 2, (0, 2): 1, (1, 0): 2, (1, 2): 1, (2, 0): 1, (2, 1): 0}
    for i in [0, 1, 2]:
        for j in [0, 1, 2]:
            mask = (guesses == i) & (goatdoors == j)
            if not mask.any():
                continue
            result = np.where(mask, np.ones_like(result) * switch[(i, j)], result)
    return result

"""
Explanation: Write a function, switch_guess, that represents the strategy of always switching a guess after the goat door is opened.
End of explanation
"""

"""
Function
--------
win_percentage

Calculate the percent of times that a simulation of guesses is correct

Parameters
-----------
guesses : array
    Guesses for each simulation
prizedoors : array
    Location of prize for each simulation

Returns
--------
percentage : number between 0 and 100
    The win percentage

Examples
---------
>>> print win_percentage(np.array([0, 1, 2]), np.array([0, 0, 0]))
33.333
"""
#your code here
def win_percentage(guesses, prizedoors):
    return 100 * (guesses == prizedoors).mean()

"""
Explanation: Last function: write a win_percentage function that takes an array of guesses and prizedoors, and returns the percent of correct guesses
End of explanation
"""

#your code here
nsim = 10000

#keep guesses
print "Win percentage when keeping original door"
print win_percentage(simulate_prizedoor(nsim), simulate_guess(nsim))

#switch
pd = simulate_prizedoor(nsim)
guess = simulate_guess(nsim)
goats = goat_door(pd, guess)
guess = switch_guess(guess, goats)
print "Win percentage when switching doors"
print win_percentage(pd, guess)

"""
Explanation: Now, put it together. Simulate 10000 games where the contestant keeps his original guess, and 10000 games where the contestant switches his door after a goat door is revealed. Compute the percentage of time the contestant wins under either strategy.
Is one strategy better than the other? End of explanation """
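A compact, self-contained version of the simulation above confirms the result. The key observation is that the switching strategy wins exactly when the original guess was wrong — the opened goat door rules out one wrong door, so the switch lands on the prize whenever the guess missed. The switch win rate should therefore approach P(guess != prize) = 2/3:

```python
import numpy as np

np.random.seed(0)
nsim = 100000
prizedoors = np.random.randint(0, 3, nsim)
guesses = np.random.randint(0, 3, nsim)

# Switching wins iff the original guess missed the prize.
switch_win_rate = (guesses != prizedoors).mean()
keep_win_rate = (guesses == prizedoors).mean()
```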
Cyb3rWard0g/HELK
docker/helk-jupyter/notebooks/tutorials/07-pyspark-sparkSQL_tables.ipynb
gpl-3.0
from pyspark.sql import SparkSession """ Explanation: Spark SQL Tables via Pyspark Goals: Practice Spark SQL via PySpark skills Ensure JupyterLab Server, Spark Cluster & Elasticsearch are communicating Practice Query execution via Pyspark Create template for future queries Import SparkSession Class End of explanation """ spark = SparkSession.builder \ .appName("HELK Reader") \ .master("spark://helk-spark-master:7077") \ .enableHiveSupport() \ .getOrCreate() """ Explanation: Create a SparkSession instance End of explanation """ es_reader = (spark.read .format("org.elasticsearch.spark.sql") .option("inferSchema", "true") .option("es.read.field.as.array.include", "tags") .option("es.nodes","helk-elasticsearch:9200") .option("es.net.http.auth.user","elastic") ) #PLEASE REMEMBER!!!! #If you are using elastic TRIAL license, then you need the es.net.http.auth.pass config option set #Example: .option("es.net.http.auth.pass","elasticpassword") """ Explanation: Read data from the HELK Elasticsearch via Spark SQL End of explanation """ %%time sysmon_df = es_reader.load("logs-endpoint-winevent-sysmon-*/") """ Explanation: Read Sysmon Events End of explanation """ sysmon_df.createOrReplaceTempView("sysmon_events") ## Run SQL Queries sysmon_ps_execution = spark.sql( ''' SELECT event_id,process_parent_name,process_name FROM sysmon_events WHERE event_id = 1 AND process_name = "powershell.exe" AND NOT process_parent_name = "explorer.exe" ''' ) sysmon_ps_execution.show(10) sysmon_ps_module = spark.sql( ''' SELECT event_id,process_name FROM sysmon_events WHERE event_id = 7 AND ( lower(file_description) = "system.management.automation" OR lower(module_loaded) LIKE "%\\\\system.management.automation%" ) ''' ) sysmon_ps_module.show(10) sysmon_ps_pipe = spark.sql( ''' SELECT event_id,process_name FROM sysmon_events WHERE event_id = 17 AND lower(pipe_name) LIKE "\\\\pshost%" ''' ) sysmon_ps_pipe.show(10) """ Explanation: Register Sysmon SQL temporary View End of explanation """ %%time 
powershell_df = es_reader.load("logs-endpoint-winevent-powershell-*/") """ Explanation: Read PowerShell Events End of explanation """ powershell_df.createOrReplaceTempView("powershell_events") ps_named_pipe = spark.sql( ''' SELECT event_id FROM powershell_events WHERE event_id = 53504 ''' ) ps_named_pipe.show(10) """ Explanation: Register PowerShell SQL temporary View End of explanation """
abatula/MachineLearningIntro
KMeans_Tutorial.ipynb
gpl-2.0
# Print figures in the notebook
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets # Import the dataset from scikit-learn
from sklearn.cluster import KMeans # Import the KMeans classifier

# Import patch for drawing rectangles in the legend
from matplotlib.patches import Rectangle

# Create color maps
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cmap_bg = ListedColormap(['#333333', '#666666', '#999999'])

# Create a legend for the colors, using rectangles for the corresponding colormap colors
labelList = []
for color in cmap_bold.colors:
    labelList.append(Rectangle((0, 0), 1, 1, fc=color))

"""
Explanation: K-Means Tutorial
K-means is an example of unsupervised learning through clustering. It tries to separate unlabeled data into clusters with equal variance. In two dimensions, this can be visualized as grouping data using circular areas of equal radius.
There are three steps to training a K-means classifier:

Pick how many groups you want it to use and (randomly) assign a starting centroid (center point) to each cluster.
Assign each data point to the group with the closest centroid.
Find the mean value of each feature (the middle point of the cluster) for all the points assigned to each cluster. This is the new centroid for that cluster.

Steps 2 and 3 repeat until the cluster centroids do not move significantly.
Scikit-learn provides more information on the K-means classifier function KMeans. They also have an example of using K-means to classify handwritten numbers.
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), ListedColormap (for plotting colors), kmeans (for the scikit-learn kmeans algorithm) and datasets (to download the iris dataset from scikit-learn).
Also create the color maps to use to color the plotted data, and "labelList", which is a list of colored rectangles to use in plotted legends.
End of explanation
"""
Also create the color maps to use to color the plotted data, and "labelList", which is a list of colored rectangles to use in plotted legends. End of explanation """ np.random.seed(42) data1 = np.random.randn(40,2) + np.array([1,3]) data2 = np.random.randn(40,2) + np.array([5,2]) data = np.concatenate([data1, data2], axis=0) centroids = np.random.randn(2,2) + np.array([-1,3]) # Plot the data plt.scatter(data[:, 0], data[:, 1]) plt.title("Initial Configuration") # Plot the centroids as a red X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() """ Explanation: Visualizing K-Means Let's start by visualizing the steps involved in K-Means clustering. Let's create a simple toy dataset and cluster it into two groups. Don't worry if you aren't sure what the code is doing yet, the important part is the visualizations. We'll cover how to use K-Means afterwards. Below, we create the toy dataset, randomly select 2 starting locations, and plot them together. First we set the seed to 42 so everyone should see the same results. Then we create half of the x,y co-ordinates using one random distribution, and the other half from a second distribution. These are joined into a single dataset. Afterwards, we randomly select the x,y co-ordinates for the 2 starting locations and plot these along with the data points. End of explanation """ h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. 
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = np.zeros(xx.shape) for row in range(Z.shape[0]): for col in range(Z.shape[1]): d1 = np.linalg.norm(np.array([xx[row,col], yy[row,col]]) - centroids[0,:]) d2 = np.linalg.norm(np.array([xx[row,col], yy[row,col]]) - centroids[1,:]) if d2 < d1: Z[row,col] = 1 # Put the result into a color plot plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Assigned to Clusters") # Plot the centroids as a red X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() """ Explanation: Currently the centroids aren't in great locations, they aren't even really in the data. But in our next step, we assign each data point to belong to the closest centroid. You can see how they're assigned in the plot below. End of explanation """ kmeans = KMeans(n_clusters=2, init=centroids, max_iter=1, n_init=1) kmeans.fit(data) # Put the result into a color plot plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Centroids Relocated") # Plot old centroids as a black X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='k', zorder=10) # Plot the centroids as a red X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() """ Explanation: Once all the points are assigned to a cluster, the centroids move to the average x and y value of their assigned points. Below, the old locations are shown as a black X, and the new locations are red Xs. 
End of explanation """ h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point # in the mesh in order to find the # classification areas for each label # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Re-assigned to New Clusters") # Plot the centroids as a black X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() """ Explanation: Next, the datapoints are re-assigned to the new closest centroid. End of explanation """ kmeans = KMeans(n_clusters=2, init=centroids, max_iter=1, n_init=1) kmeans.fit(data) # Put the result into a color plot plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Centroids Relocated") plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='k', zorder=10) # Plot the centroids as a red X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. 
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point # in the mesh in order to find the # classification areas for each label # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Re-assigned to New Clusters") # Plot the centroids as a red X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() kmeans = KMeans(n_clusters=2, init=centroids, max_iter=1, n_init=1) kmeans.fit(data) # Put the result into a color plot plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Centroids Relocated") plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='k', zorder=10) # Plot the centroids as a red X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. 
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point # in the mesh in order to find the # classification areas for each label # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(data[:, 0], data[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("Re-assigned to New Clusters") # Plot the centroids as a black X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='r', zorder=10) plt.show() """ Explanation: We repeat these two steps until we've reached a maximum number of calculations or the centroids no longer move significantly. We'll show a few more iterations so you can see what happens. End of explanation """ # Import some data to play with iris = datasets.load_iris() # Store the labels (y), label names, features (X), and feature names y = iris.target # Labels are stored in y as numbers labelNames = iris.target_names # Species names corresponding to labels 0, 1, and 2 X = iris.data featureNames = iris.feature_names """ Explanation: Import the dataset Import the dataset and store it to a variable called iris. Scikit-learn's explanation of the dataset is here. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target_names', 'target', 'data', 'feature_names'] The data features are stored in iris.data, where each row is an example from a single flower, and each column is a single feature. The feature names are stored in iris.feature_names. Labels are stored as the numbers 0, 1, or 2 in iris.target, and the names of these labels are in iris.target_names. 
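The assign-then-update loop animated above fits in a few lines of plain NumPy. This is an illustrative re-implementation (the function name `kmeans_step` and the toy blobs below are ours, not part of the tutorial's code), useful for seeing exactly what scikit-learn automates:

```python
import numpy as np

def kmeans_step(data, centroids):
    # Step 2: assign each point to its nearest centroid
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Step 3: move each centroid to the mean of its assigned points
    # (this sketch assumes every centroid gets at least one point)
    new_centroids = np.array([data[labels == k].mean(axis=0)
                              for k in range(len(centroids))])
    return labels, new_centroids

# Two well-separated blobs and two rough starting centroids
rng = np.random.RandomState(0)
data = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + [10, 10]])
centroids = np.array([[1.0, 1.0], [9.0, 9.0]])
labels, centroids = kmeans_step(data, centroids)
```

Iterating `kmeans_step` until the centroids stop moving reproduces the whole sequence of plots above.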
The dataset consists of measurements made on 50 examples from each of three different species of iris flowers (Setosa, Versicolour, and Virginica). Each example has four features (or measurements): sepal length, sepal width, petal length, and petal width. All measurements are in cm. Below, we load the labels into y, the corresponding label names into labelNames, the data into X, and the names of the features into featureNames. End of explanation """ # Plot the data # Sepal length and width X_small = X[:,:2] # Get the minimum and maximum values with an additional 0.5 border x_min, x_max = X_small[:, 0].min() - .5, X_small[:, 0].max() + .5 y_min, y_max = X_small[:, 1].min() - .5, X_small[:, 1].max() + .5 plt.figure(figsize=(8, 6)) # Plot the training points plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.title('Sepal width vs length') # Set the plot limits plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) # Plot the legend plt.legend(labelList, labelNames) plt.show() """ Explanation: Below, we plot the first two features from the dataset (sepal length and width). Normally we would try to use all useful features, but sticking with two allows us to visualize the data more easily. Then we plot the data to get a look at what we're dealing with. The colormap is used to determine what colors are used for each class when plotting. 
End of explanation """ # Plot the data # Sepal length and width X_small = X[:,:2] # Get the minimum and maximum values with an additional 0.5 border x_min, x_max = X_small[:, 0].min() - .5, X_small[:, 0].max() + .5 y_min, y_max = X_small[:, 1].min() - .5, X_small[:, 1].max() + .5 plt.figure(figsize=(8, 6)) # Plot the training points plt.scatter(X_small[:, 0], X_small[:, 1]) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.title('Sepal width vs length') # Set the plot limits plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.show() """ Explanation: Unlabeled data K-means is an unsupervised learning method, which means it doesn't make use of data labels. This is useful when we're exploring a new dataset. We may not have labels for this dataset, but we want to see how it is grouped together and what examples are most similar to each other. Below we plot the data again, but this time without any labels. This is what k-means "sees" when we use it. End of explanation """ # Choose your number of clusters n_clusters = 3 # we create an instance of KMeans Classifier and fit the data. kmeans = KMeans(n_clusters=n_clusters) kmeans.fit(X_small) """ Explanation: K-means: training Next, we train a K-means classifier on our data. The first section chooses the number of clusters to use, and stores it in the variable n_clusters. We choose 3 because we know there are 3 species of iris, but we don't always know this when approaching a machine learning problem. The last two lines create and train the classifier. The first line creates a classifier (kmeans) using the KMeans() function, and tells it to use the number of neighbors stored in n_neighbors. The second line uses the fit() method to train the classifier on the features in X. Notice that because this is an unsupervised method, it does not use the labels stored in y. End of explanation """ h = .02 # step size in the mesh # Plot the decision boundary. 
For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1 y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point # in the mesh in order to find the # classification areas for each label # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(X_small[:, 0], X_small[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("KMeans (k = %i)" % (n_clusters)) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') # Plot the centroids as a black X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='k', zorder=10) plt.show() """ Explanation: Plot the classification boundaries Now that we have our classifier, let's visualize what it's doing. First we plot the decision boundaries, or the lines dividing areas assigned to the different clusters. The background shows the areas that are considered to belong to a certain cluster, and each cluster can then be assigned to a species of iris. They are plotted in grey, because the classifier does not assign labels to the clusters. The center of each cluster is plotted as a black x. Then we plot our examples onto the space, showing where each point lies in relation to the decision boundaries. If we took sepal measurements from a new flower, we could plot it in this space and use the background shade to determine which cluster of data points our classifier would assign to it. End of explanation """ h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. 
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1 y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point # in the mesh in order to find the # classification areas for each label # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("KMeans (k = %i)" % (n_clusters)) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') # Plot the centroids as a black X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='k', zorder=10) # Plot the legend plt.legend(labelList, labelNames) plt.show() """ Explanation: Cheating with labels Because we do have labels for this dataset, let's see how well k-means did at separating the three species. End of explanation """ # Your code goes here! cmap_bg = ListedColormap(['#111111','#333333', '#555555', '#777777', '#999999']) """ Explanation: Analyzing the clusters As you can see in the previous plots, K-means does a good job of separating the Setosa species (red) into its own cluster. It also does a reasonable job separating Versicolour (green) and Virginica (blue), although there is a considerable amount of overlap that it can't predict properly. This is an example where it is important to understand your data (and visualize it whenever possible), as well as understand your machine learning model. In this example, you may want to use a different machine learning model that can separate the data more accurately. 
Alternatively, we could use all four features to see if that improves accuracy (remember, we aren't using petal length or width here for easier data visualization). Changing the number of clusters What would happen if you changed the number of clusters? What would the plot look like with 2 clusters, or 5? Based on the unlabeled data, how would you try to determine the number of classes to use? In the next block of code, try changing the number of clusters and seeing what happens. You may need to change the number of colors represented in cmap_bg to match the number of classes you are using. End of explanation """ # Add our new data examples examples = [[4.3, 2.5], # Plant A [6.3, 2.1]] # Plant B # Choose your number of clusters n_clusters = 3 # we create an instance of KMeans Classifier and fit the data. kmeans = KMeans(n_clusters=n_clusters) kmeans.fit(X_small) # Predict the labels for our new examples labels = kmeans.predict(examples) # Print the predicted cluster labels print('A: Cluster ' + str(labels[0])) print('B: Cluster ' + str(labels[1])) # Now plot the results h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. 
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1 y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(figsize=(8, 6)) plt.pcolormesh(xx, yy, Z, cmap=cmap_bg) # Plot the training points plt.scatter(X_small[:, 0], X_small[:, 1]) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("KMeans (k = %i)" % (n_clusters)) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') # Plot the centroids as a black X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='k', zorder=10) # Display the new examples as labeled text on the graph plt.text(examples[0][0], examples[0][1],'A', fontsize=14) plt.text(examples[1][0], examples[1][1],'B', fontsize=14) plt.show() """ Explanation: Making Predictions Now, let's say we go out and measure the sepals of two iris plants, and want to know what group they belong to. We're going to use our classifier to predict the flowers with the following measurements: Plant | Sepal length | Sepal width ------|--------------|------------ A |4.3 |2.5 B |6.3 |2.1 We can use our classifier's predict() function to predict the label for our input features. We pass in the variable examples to the predict() function, which is a list, and each element is another list containing the features (measurements) for a particular example. The output is a list of labels corresponding to the input examples. We'll also plot them on the boundary plot, to show why they were predicted that way. End of explanation """ # Your code goes here! #cmap_bg = ListedColormap(['#111111','#333333', '#555555', '#777777', '#999999']) """ Explanation: As you can see, example A is grouped into Cluster 2 and example B is grouped into Cluster 0. Remember, K-means does not use labels. 
It only clusters the data by feature similarity, and it's up to us to decide what the clusters mean (or if they don't mean anything at all). Using different features Try using different combinations of the four features and see what results you get. Does it make it any easier to determine how many clusters should be used? End of explanation """
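One common answer to the question above ("how would you try to determine the number of classes to use?") is the elbow method: compute the inertia (the sum of squared distances from each point to its nearest centroid) for several values of k and look for the bend where it stops dropping quickly. A small sketch on made-up blobs (not the iris data; the variable names are ours):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data with three clearly separated groups
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2) + c for c in ([0, 0], [8, 0], [4, 7])])

# Inertia always decreases as k grows, but flattens out past the "true" k
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 6)}
```

Here the drop from k=2 to k=3 is large while the drop from k=3 to k=4 is small, so the elbow sits at k=3.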
h-mayorquin/hopfield_sequences
notebooks/2016-11-29(Overlap reproduction).ipynb
mit
from __future__ import print_function import sys sys.path.append('../') import numpy as np import matplotlib.pyplot as plt import seaborn as sns from hopfield import Hopfield %matplotlib inline sns.set(font_scale=2.0) """ Explanation: Overlap reproduction This notebook should reproduce some results of Amit's book (Attractor Neural Networks) in section 4.1 End of explanation """ n_dim = 400 n_store = 7 T = 0.0 prng = np.random.RandomState(seed=10000) N = 2000 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns) """ Explanation: Symmetric mix of attractors According to Amit, with the Hebbian rule of this model a very particular type of spurious state appears: symmetric mixtures. Symmetric mixtures are so called because their overlaps with the intended attractors become more or less the same. In the asynchronous case only symmetric mixtures of an odd number of states are stable. We show a symmetric mixture here. First we build the network with a very low temperature (noise) End of explanation """ overlaps = np.zeros((N, n_store)) for i in range(N): nn.update_async() overlaps[i, :] = nn.calculate_overlap() """ Explanation: Then we run the network End of explanation """ # Plot this thing fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.set_xlabel('Iterations') ax.set_ylabel('Overlap') ax.axhline(y=0, color='k') for pattern_n, overlap in enumerate(overlaps.T): ax.plot(overlap, '-', label='m' +str(pattern_n)) ax.legend() ax.set_ylim(-1.1, 1.1) plt.show() """ Explanation: Plotting End of explanation """ n_dim = 400 n_store = 7 T = 0.8 prng = np.random.RandomState(seed=10000) N = 2000 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns) """ Explanation: Here we see a symmetric mixture of three states. Effect of temperature in symmetric mixtures As Amit discusses, we can increase the temperature to a sweet spot where 
the spurious states are destroyed and only the real attractors are preserved. We can try to repeat the same process with a higher temperature End of explanation """ overlaps = np.zeros((N, n_store)) for i in range(N): nn.update_async() overlaps[i, :] = nn.calculate_overlap() """ Explanation: Then we run the network End of explanation """ # Plot this thing fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.set_xlabel('Iterations') ax.set_ylabel('Overlap') ax.axhline(y=0, color='k') for pattern_n, overlap in enumerate(overlaps.T): ax.plot(overlap, '-', label='m' +str(pattern_n)) ax.legend() ax.set_ylim(-1.1, 1.1) plt.show() """ Explanation: Plotting End of explanation """ n_dim = 400 n_store = 7 T = 3.0 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns) """ Explanation: We can appreciate here that with higher noise all the other overlaps become close to 0 and one state, the symmetric reflection of state m0, is being recalled. Effect of very high noise Finally, if the noise is very high, only the state where all the overlaps vanish will be stable. End of explanation """ overlaps = np.zeros((N, n_store)) for i in range(N): nn.update_async() overlaps[i, :] = nn.calculate_overlap() """ Explanation: Then we run the network End of explanation """ # Plot this thing fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.set_xlabel('Iterations') ax.set_ylabel('Overlap') ax.axhline(y=0, color='k') for pattern_n, overlap in enumerate(overlaps.T): ax.plot(overlap, '-', label='m' +str(pattern_n)) ax.legend() ax.set_ylim(-1.1, 1.1) plt.show() """ Explanation: Plotting End of explanation """
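For reference, the two quantities this notebook relies on — the Hebbian weight matrix and the overlaps m_mu — reduce to a couple of NumPy expressions. This is a hypothetical sketch of what the hopfield module's train and calculate_overlap might compute; the actual implementation may differ:

```python
import numpy as np

rng = np.random.RandomState(10000)
n_dim, n_store = 400, 7

# Random +/-1 patterns, one per row (stand-ins for generate_random_patterns)
patterns = rng.choice([-1, 1], size=(n_store, n_dim))

# Hebbian rule: w_ij = (1/N) sum_mu xi_i^mu xi_j^mu, with no self-connections
w = patterns.T @ patterns / n_dim
np.fill_diagonal(w, 0)

# Overlap of the current state s with each pattern: m_mu = (1/N) sum_i xi_i^mu s_i
s = patterns[0].copy()           # start the network exactly on pattern 0
overlaps = patterns @ s / n_dim  # m_0 = 1, the other overlaps stay near 0
```

Starting on a stored pattern gives an overlap of exactly 1 with that pattern, while the overlaps with the other (independent, random) patterns fluctuate around 0 with standard deviation 1/sqrt(N) — which is why the plots above hover near zero for the unrecalled states.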
chrismcginlay/crazy-koala
jupyter/01_basic_input_and_output.ipynb
gpl-3.0
print(42) print('Boris') pint[47 print'Jane """ Explanation: Basic Input and Output Basic Output - print() In Python, we talk about the terminal - by this we really just mean the screen, or maybe a window on the screen. Python 3 can output to the terminal using the print() function. In the very early days of computers, there weren't any screens - terminal output always went to a printer, hence telling a computer to print() would spew something out of a printer. Nowadays, we still use the print() function, but the output goes to a screen! You just * type print * open a bracket ( * put in the stuff you want to show on the screen or terminal * close the bracket ) python print(42) print('Boris') Try it yourself. This bit of python will print the number 42 and the name 'Boris'. Fix the last two lines to show the number 47 and the name 'Jane'. End of explanation """ print(42, 'Boris') """ Explanation: If you've done it right, the error messages should go away and your output should look like this: 42 Boris 47 Jane Printing More Than One Thing You can put 42 Boris 47 Jane on the screen using just one print() function instead of four. Change the following line to print 42 Boris 47 Jane using commas to separate each part. End of explanation """ print(42, '\nBoris', 47, 'Jane') """ Explanation: If you've done it right, your output should look like this: 42 Boris 47 Jane Printing on a New Line If you want to have items printed on new lines, you can use the \n special character. python print(42, '\nBoris') Change the following code to use \n to get each item on a separate line End of explanation """ some_text = input('Please enter some text: ') print(some_text) """ Explanation: If you've done it right, your output should look like this: 42 Boris 47 Jane Basic User Input You the human will use your computing device by typing on the keyboard, moving and clicking the mouse, touching or pinching on a phone touchscreen, talking into a microphone etc. 
The easiest method for beginner programmers is the keyboard. Think about the last time you ordered anything online: the webpage asks for your name, you type it into the correct box, then your address etc. Your answers are stored in a variable (more on that later). Type in some text and press enter to see how it works: (Don't try to change the program code here, just run it, then type in something). End of explanation """ name = input('Please enter your name: ') print('It is a pleasure to meet you 'name,'.') """ Explanation: Try to run the following code then fix the error message (Hint: there's a missing comma somewhere...) End of explanation """ name = input('What is your name? ') seats = int(input('How many seats do you want to book? ')) height_metres = float(input('What is your height in metres? ')) print ('\nCustomer Report:', name, 'booking', seats, 'seats.') """ Explanation: If you've done it right, your output should look like this, unless you aren't Katie: Please enter your name: Katie It is a pleasure to meet you Katie. Number or Text Input? Python easily handles situations when you want to input a number or text. This will accept any input at all and store it as text. python name = input('What is your name? ') This will accept whole numbers only (whole numbers are officially known as 'integers'). The program will crash if the user types in anything that is not a whole number. python seats = int(input('How many seats do you want to book? ')) This will accept decimal numbers, which are officially called real or floating point numbers. It will still accept integer numbers too. The program will crash if the user types in anything that is not a number. python height_metres = float(input('What is your height in metres? ')) Type your answers to the following questions - try to get each input function to crash! End of explanation """
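If you want a program to survive bad input instead of crashing, the usual trick — not covered in this notebook — is to wrap the conversion in try/except. A small sketch (the helper name to_int is ours):

```python
def to_int(text):
    """Return the whole number in text, or None if it isn't one."""
    try:
        return int(text)
    except ValueError:
        return None

# In a real program you would keep asking until the answer is valid, e.g.:
# seats = to_int(input('How many seats do you want to book? '))
# while seats is None:
#     seats = to_int(input('Please type a whole number: '))
```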
PMEAL/OpenPNM-Examples
Simulations/fickian_diffusion.ipynb
mit
import openpnm as op net = op.network.Cubic(shape=[1, 10, 10], spacing=1e-5) """ Explanation: Summary One of the main applications of OpenPNM is simulating transport phenomena such as Fickian diffusion, advection diffusion, reactive transport, etc. In this example, we will learn how to perform Fickian diffusion on a Cubic network. The algorithm works fine with every other network type, but for now we want to keep it simple. Problem setup Generating network First, we need to generate a Cubic network. For now, we stick to a 2d network, but you might as well try it in 3d! End of explanation """ geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts) """ Explanation: Adding geometry Next, we need to add a geometry to the generated network. A geometry contains information about the size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. For now, we stick to a sample geometry called StickAndBall that assigns random values to pore/throat diameters. End of explanation """ air = op.phases.Air(network=net) """ Explanation: Adding phase Next, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! For this simulation, we use air as our working fluid. End of explanation """ phys_air = op.physics.Standard(network=net, phase=air, geometry=geom) """ Explanation: Adding physics Finally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depends on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats. 
End of explanation """ fd = op.algorithms.FickianDiffusion(network=net, phase=air) """ Explanation: Performing Fickian diffusion Now that everything's set up, it's time to perform our Fickian diffusion simulation. For this purpose, we need to add the FickianDiffusion algorithm to our simulation. Here's how we do it: End of explanation """ inlet = net.pores('left') outlet = net.pores('right') fd.set_value_BC(pores=inlet, values=1.0) fd.set_value_BC(pores=outlet, values=0.0) """ Explanation: Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase do we want to run the algorithm. Adding boundary conditions Next, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores. End of explanation """ fd.run(); """ Explanation: set_value_BC applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter. Running the algorithm Now, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object. End of explanation """ print(fd.settings) """ Explanation: Post processing When an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. For instance, FickianDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. 
However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm settings: End of explanation """ print(fd.settings) """ Explanation: Now that we know the quantity for which FickianDiffusion was solved, let's take a look at the results: End of explanation """ c = fd['pore.concentration'] print(c) """ Explanation: Heatmap Well, it's hard to make sense out of a bunch of numbers! Let's visualize the results. Since the network is 2d, we can simply reshape the results in the form of a 2d array similar to the shape of the network and plot the heatmap of it using matplotlib. End of explanation """ print('Network shape:', net._shape) c2d = c.reshape((net._shape)) import matplotlib.pyplot as plt plt.imshow(c2d[0,:,:]) plt.title('Concentration (mol/m$^3$)') plt.colorbar() """ Explanation: Calculating mass flux You might as well be interested in calculating the mass flux from a boundary! This is easily done in OpenPNM by calling the rate method attached to the algorithm. Let's see how it works: End of explanation """ rate_inlet = fd.rate(pores=inlet)[0] print('Mass flow rate from inlet:', rate_inlet, 'mol/s') """ Explanation: 
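The rate computed above can be sanity-checked with the resistors-in-series picture of diffusive transport: for pores in a single chain, throat conductances combine like electrical conductances, and mass conservation forces the same molar rate through every throat. A standalone sketch (plain NumPy, no OpenPNM; the conductance values below are made up for illustration):

```python
import numpy as np

# Made-up diffusive conductances of throats in series
# (units: mol/s per unit of concentration difference)
g = np.array([1.0, 2.0, 0.5, 1.5, 1.0, 2.5, 0.8, 1.2, 1.0])

c_in, c_out = 1.0, 0.0  # boundary concentrations, as in the example above

# Series combination: 1/g_eff = sum(1/g_i); rate = g_eff * (c_in - c_out)
g_eff = 1.0 / np.sum(1.0 / g)
rate = g_eff * (c_in - c_out)

# Mass conservation: the same rate crosses each throat, so the individual
# concentration drops rate/g_i must add back up to the total drop
drops = rate / g
```

In a real network the throats form a 2-D lattice rather than one chain, but the same logic underlies the linear system that FickianDiffusion solves.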
OceanPARCELS/parcels
parcels/examples/tutorial_Argofloats.ipynb
mit
# Define the new Kernel that mimics Argo vertical movement
def ArgoVerticalMovement(particle, fieldset, time):
    driftdepth = 1000  # drift depth in m
    maxdepth = 2000  # maximum depth in m
    vertical_speed = 0.10  # sink and rise speed in m/s
    cycletime = 10 * 86400  # total time of cycle in seconds
    drifttime = 9 * 86400  # time of deep drift in seconds

    if particle.cycle_phase == 0:
        # Phase 0: Sinking with vertical_speed until depth is driftdepth
        particle.depth += vertical_speed * particle.dt
        if particle.depth >= driftdepth:
            particle.cycle_phase = 1

    elif particle.cycle_phase == 1:
        # Phase 1: Drifting at depth for drifttime seconds
        particle.drift_age += particle.dt
        if particle.drift_age >= drifttime:
            particle.drift_age = 0  # reset drift_age for next cycle
            particle.cycle_phase = 2

    elif particle.cycle_phase == 2:
        # Phase 2: Sinking further to maxdepth
        particle.depth += vertical_speed * particle.dt
        if particle.depth >= maxdepth:
            particle.cycle_phase = 3

    elif particle.cycle_phase == 3:
        # Phase 3: Rising with vertical_speed until at surface
        particle.depth -= vertical_speed * particle.dt
        # particle.temp = fieldset.temp[time, particle.depth, particle.lat, particle.lon]  # if fieldset has temperature
        if particle.depth <= fieldset.mindepth:
            particle.depth = fieldset.mindepth
            # particle.temp = 0./0.  # reset temperature to NaN at end of sampling cycle
            particle.cycle_phase = 4

    elif particle.cycle_phase == 4:
        # Phase 4: Transmitting at surface until cycletime is reached
        if particle.cycle_age > cycletime:
            particle.cycle_phase = 0
            particle.cycle_age = 0

    if particle.state == ErrorCode.Evaluate:
        particle.cycle_age += particle.dt  # update cycle_age
"""
Explanation: Tutorial on how to simulate an Argo float in Parcels
This tutorial shows how simple it is to construct a Kernel in Parcels that mimics the vertical movement of Argo floats.
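The five cycle phases implemented by the kernel can be summarized as a plain-Python state machine. This sketch is independent of Parcels and only mirrors the phase-transition logic (the time step and helper names here are illustrative, not part of the Parcels API):

```python
def argo_phase_step(phase, depth, drift_age, cycle_age, dt=60.0):
    """One update of a simplified Argo cycle; mirrors the kernel's phase logic."""
    driftdepth, maxdepth, w = 1000.0, 2000.0, 0.10
    drifttime, cycletime = 9 * 86400, 10 * 86400
    if phase == 0:                      # sink to drift depth
        depth += w * dt
        if depth >= driftdepth:
            phase = 1
    elif phase == 1:                    # drift at depth
        drift_age += dt
        if drift_age >= drifttime:
            drift_age, phase = 0, 2
    elif phase == 2:                    # sink further, to profile depth
        depth += w * dt
        if depth >= maxdepth:
            phase = 3
    elif phase == 3:                    # rise to the surface
        depth -= w * dt
        if depth <= 0.0:
            depth, phase = 0.0, 4
    elif phase == 4:                    # transmit, then restart the cycle
        if cycle_age > cycletime:
            phase, cycle_age = 0, 0
    cycle_age += dt                     # simplified: age advances every step
    return phase, depth, drift_age, cycle_age

# Run a bit more than one full 10-day cycle and record the phases visited
state = (0, 0.0, 0.0, 0.0)
phases = set()
for _ in range(int(11 * 86400 / 60)):
    state = argo_phase_step(*state)
    phases.add(state[0])
print(phases)
```

After eleven simulated days the float has visited all five phases and is one day into its second drift.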
End of explanation
"""
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, ErrorCode, Variable
from datetime import timedelta
import numpy as np

# Load the GlobCurrent data in the Agulhas region from the example_data
filenames = {'U': "GlobCurrent_example_data/20*.nc",
             'V': "GlobCurrent_example_data/20*.nc"}
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat', 'lon': 'lon', 'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
fieldset.mindepth = fieldset.U.depth[0]  # uppermost layer in the hydrodynamic data

# Define a new Particle type including extra Variables
class ArgoParticle(JITParticle):
    # Phase of cycle: init_descend=0, drift=1, profile_descend=2, profile_ascend=3, transmit=4
    cycle_phase = Variable('cycle_phase', dtype=np.int32, initial=0.)
    cycle_age = Variable('cycle_age', dtype=np.float32, initial=0.)
    drift_age = Variable('drift_age', dtype=np.float32, initial=0.)
    # temp = Variable('temp', dtype=np.float32, initial=np.nan)  # if fieldset has temperature

# Initiate one Argo float in the Agulhas Current
pset = ParticleSet(fieldset=fieldset, pclass=ArgoParticle, lon=[32], lat=[-31], depth=[0])

# combine Argo vertical movement kernel with built-in Advection kernel
kernels = ArgoVerticalMovement + pset.Kernel(AdvectionRK4)

# Create a ParticleFile object to store the output
output_file = pset.ParticleFile(name="argo_float", outputdt=timedelta(minutes=30))

# Now execute the kernels for 30 days, saving data every 30 minutes
pset.execute(kernels, runtime=timedelta(days=30), dt=timedelta(minutes=5), output_file=output_file)
"""
Explanation: And then we can run Parcels with this 'custom kernel'. Note that below we use the two-dimensional velocity fields of GlobCurrent, as these are provided as example_data with Parcels. We therefore assume that the horizontal velocities are the same throughout the entire water column.
However, the ArgoVerticalMovement kernel will work on any FieldSet, including from full three-dimensional hydrodynamic data. If the hydrodynamic data also has a temperature field, then uncommenting the lines about temperature will also simulate the sampling of temperature.
End of explanation
"""
%matplotlib inline
import netCDF4
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

output_file.export()  # export the trajectory data to a netcdf file
nc = netCDF4.Dataset("argo_float.nc")
x = nc.variables["lon"][:].squeeze()
y = nc.variables["lat"][:].squeeze()
z = nc.variables["z"][:].squeeze()
nc.close()

fig = plt.figure(figsize=(13, 10))
ax = plt.axes(projection='3d')
cb = ax.scatter(x, y, z, c=z, s=20, marker="o")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_zlabel("Depth (m)")
ax.set_zlim(np.max(z), 0)
plt.show()
"""
Explanation: Now we can plot the trajectory of the Argo float with some simple calls to netCDF4 and matplotlib.
End of explanation
"""
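Beyond plotting, the saved depth record is easy to post-process. For example, the number of surfacings (and hence transmission events) can be counted from the depth array; a sketch on synthetic data (with real output you would pass the z array loaded above):

```python
import numpy as np

def count_surfacings(depth, surface_threshold=5.0):
    """Count upward crossings into the near-surface layer in a depth record (m, positive down)."""
    at_surface = np.asarray(depth) <= surface_threshold
    # A surfacing is a transition from below the threshold to at/above it
    return int(np.sum(~at_surface[:-1] & at_surface[1:]))

# Synthetic record: three dives to 2000 m with returns to the surface
t = np.linspace(0, 3, 3000)
depth = 1000 * (1 - np.cos(2 * np.pi * t))  # oscillates between 0 and 2000 m
print(count_surfacings(depth))  # → 3
```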
google/objax
examples/tutorials/objax_to_tf.ipynb
apache-2.0
# install the latest version of Objax from github
%pip --quiet install git+https://github.com/google/objax.git

import math
import random
import tempfile

import numpy as np
import tensorflow as tf

import objax
from objax.zoo.wide_resnet import WideResNet
"""
Explanation: Conversion of Objax models to Tensorflow
This tutorial demonstrates how to export models from Objax to Tensorflow and then export them into the SavedModel format. The SavedModel format can be read and served by Tensorflow serving infrastructure or by custom user code written in C++. Export to Tensorflow thus allows users to run experiments in Objax and then serve these models in production (using Tensorflow infrastructure).
Installation and Imports
First of all, let's install Objax and import all necessary python modules.
End of explanation
"""
# Model
model = WideResNet(nin=3, nclass=10, depth=4, width=1)

# Prediction operation
@objax.Function.with_vars(model.vars())
def predict_op(x):
    return objax.functional.softmax(model(x, training=False))

predict_op = objax.Jit(predict_op)
"""
Explanation: Setup Objax model
Let's make a model in Objax and create a prediction operation, which we will later convert to Tensorflow. In this tutorial we use a randomly initialized model, so we don't need to wait for model training to finish. However, the conversion to Tensorflow would be the same if we trained the model first.
End of explanation
"""
input_shape = (4, 3, 32, 32)

x1 = np.random.uniform(size=input_shape)
y1 = predict_op(x1)
print('y1:\n', y1)

x2 = np.random.uniform(size=input_shape)
y2 = predict_op(x2)
print('y2:\n', y2)
"""
Explanation: Now, let's generate a few examples and run the prediction operation on them:
End of explanation
"""
predict_op_tf = objax.util.Objax2Tf(predict_op)

print('isinstance(predict_op_tf, tf.Module) =', isinstance(predict_op_tf, tf.Module))
print('Number of variables: ', len(predict_op_tf.variables))
"""
Explanation: Convert a model to Tensorflow
We use the Objax2Tf object to convert an Objax module into a tf.Module. Internally, Objax2Tf makes a copy of all Objax variables used by the provided module and converts the __call__ method of the provided Objax module into a Tensorflow function.
End of explanation
"""
y1_tf = predict_op_tf(x1)
print('max(abs(y1_tf - y1)) =', np.amax(np.abs(y1_tf - y1)))

y2_tf = predict_op_tf(x2)
print('max(abs(y2_tf - y2)) =', np.amax(np.abs(y2_tf - y2)))
"""
Explanation: After the module is converted, we can run it and compare results between Objax and Tensorflow. The results are pretty close numerically; however, they are not exactly the same due to implementation differences between JAX and Tensorflow.
End of explanation
"""
model_dir = tempfile.mkdtemp()

%ls -al $model_dir
"""
Explanation: Export Tensorflow model as SavedModel
Converting an Objax model to Tensorflow allows us to export it as a Tensorflow SavedModel. A discussion of the details of the SavedModel format is out of scope of this tutorial, thus we only provide an example showing how to save and load SavedModel.
For more details about SavedModel please refer to the following Tensorflow documentation:

Using the SavedModel format guide
tf.saved_model.save API call
tf.saved_model.load API call

Saving model as SavedModel
First of all, let's create a new empty directory where the model will be saved:
End of explanation
"""
tf.saved_model.save(
    predict_op_tf,
    model_dir,
    signatures=predict_op_tf.__call__.get_concrete_function(
        tf.TensorSpec(input_shape, tf.float32)))
"""
Explanation: Then let's use the tf.saved_model.save API to save our Tensorflow model. Since Objax2Tf is a subclass of tf.Module, instances of the Objax2Tf class can be directly used with the tf.saved_model.save API:
End of explanation
"""
%ls -al $model_dir
"""
Explanation: Now we can list the content of model_dir and see files and subdirectories of SavedModel:
End of explanation
"""
loaded_tf_model = tf.saved_model.load(model_dir)

print('Exported signatures: ', loaded_tf_model.signatures)
"""
Explanation: Loading exported SavedModel
We can load SavedModel as a new Tensorflow object loaded_tf_model.
End of explanation
"""
loaded_predict_op_tf = loaded_tf_model.signatures['serving_default']

y1_loaded_tf = loaded_predict_op_tf(tf.cast(x1, tf.float32))['output_0']
print('max(abs(y1_loaded_tf - y1_tf)) =', np.amax(np.abs(y1_loaded_tf - y1_tf)))

y2_loaded_tf = loaded_predict_op_tf(tf.cast(x2, tf.float32))['output_0']
print('max(abs(y2_loaded_tf - y2_tf)) =', np.amax(np.abs(y2_loaded_tf - y2_tf)))
"""
Explanation: Then we can run inference using the loaded Tensorflow model loaded_tf_model and compare results with the model predict_op_tf which was converted from Objax:
End of explanation
"""
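The element-wise comparisons used throughout this tutorial can be wrapped in one small helper. A sketch in pure NumPy, so it works the same on Objax, Tensorflow, or loaded-model outputs once they are converted to arrays (the helper names are our own, not part of either library):

```python
import numpy as np

def max_abs_diff(a, b):
    """Largest element-wise discrepancy between two prediction arrays."""
    return float(np.amax(np.abs(np.asarray(a) - np.asarray(b))))

def check_close(a, b, tol=1e-5):
    """True when two sets of predictions agree to within tol everywhere."""
    return max_abs_diff(a, b) <= tol

a = np.array([[0.1, 0.9], [0.4, 0.6]])
b = a + 1e-7  # e.g. small numerical differences between JAX and Tensorflow
print(check_close(a, b))  # → True
```

A tolerance around 1e-5 for float32 softmax outputs is a common rule of thumb when validating an exported model against the original.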
AstroHackWeek/AstroHackWeek2015
day3-machine-learning/06 - Model Complexity.ipynb
gpl-2.0
from plots import plot_kneighbors_regularization
plot_kneighbors_regularization()
"""
Explanation: Model Complexity, Overfitting and Underfitting
End of explanation
"""
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.learning_curve import validation_curve

digits = load_digits()
X, y = digits.data, digits.target

model = RandomForestClassifier(n_estimators=20)
param_range = range(1, 13)
training_scores, validation_scores = validation_curve(model, X, y,
                                                      param_name="max_depth",
                                                      param_range=param_range, cv=5)

training_scores.shape

training_scores

def plot_validation_curve(parameter_values, train_scores, validation_scores):
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    validation_scores_mean = np.mean(validation_scores, axis=1)
    validation_scores_std = np.std(validation_scores, axis=1)

    plt.fill_between(parameter_values, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1, color="r")
    plt.fill_between(parameter_values, validation_scores_mean - validation_scores_std,
                     validation_scores_mean + validation_scores_std, alpha=0.1, color="g")
    plt.plot(parameter_values, train_scores_mean, 'o-', color="r", label="Training score")
    plt.plot(parameter_values, validation_scores_mean, 'o-', color="g", label="Cross-validation score")
    plt.ylim(validation_scores_mean.min() - .1, train_scores_mean.max() + .1)
    plt.legend(loc="best")

plt.figure()
plot_validation_curve(param_range, training_scores, validation_scores)
"""
Explanation: Validation Curves
End of explanation
"""
# %load solutions/validation_curve.py
"""
Explanation: Exercise
Plot the validation curve on the digit dataset for:
* a LinearSVC with a logarithmic range of regularization parameters C.
* KNeighborsClassifier with a linear range of neighbors n_neighbors.

What do you expect them to look like? How do they actually look?
End of explanation
"""
fonnesbeck/PyMC3_Oslo
notebooks/5. Model Building with PyMC3.ipynb
cc0-1.0
import pymc3 as pm with pm.Model() as disaster_model: switchpoint = pm.DiscreteUniform('switchpoint', lower=0, upper=110) """ Explanation: Building Models in PyMC3 Bayesian inference begins with specification of a probability model relating unknown variables to data. PyMC3 provides the basic building blocks for Bayesian probability models: stochastic random variables, deterministic variables, and factor potentials. A stochastic random variable is a factor whose value is not completely determined by its parents, while the value of a deterministic random variable is entirely determined by its parents. Most models can be constructed using only these two variable types. The third quantity, the factor potential, is not a variable but simply a log-likelihood term or constraint that is added to the joint log-probability to modify it. The FreeRV class A stochastic variable is represented in PyMC3 by a FreeRV class. This structure adds functionality to Theano's TensorVariable class, by mixing in the PyMC Factor class. A Factor is used whenever a variable contributes a log-probability term to a model. Hence, you know a variable is a subclass of Factor whenever it has a logp method, as we saw in the previous section. A FreeRV object has several important attributes: dshape : The variable's shape. dsize : The overall size of the variable. distribution : The probability density or mass function that describes the distribution of the variable's values. logp : The log-probability of the variable's current value given the values of its parents. init_value : The initial value of the variable, used by many algorithms as a starting point for model fitting. model : The PyMC model to which the variable belongs. Creation of stochastic random variables There are two ways to create stochastic random variables (FreeRV objects), which we will call the automatic, and manual interfaces. 
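For intuition about the logp attribute: the DiscreteUniform above places equal mass on each of the 111 integers in [0, 110], so its log-probability is a constant on that support and negative infinity outside it. This can be checked with plain NumPy, independently of PyMC3:

```python
import numpy as np

lower, upper = 0, 110
n_support = upper - lower + 1          # 111 admissible values

def discrete_uniform_logp(value):
    """Log-mass of a DiscreteUniform(lower, upper): constant inside the support, -inf outside."""
    in_support = (lower <= value) & (value <= upper)
    return np.where(in_support, -np.log(n_support), -np.inf)

print(discrete_uniform_logp(np.array([4, 44, -1])))
```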
Automatic Stochastic random variables with standard distributions provided by PyMC3 can be created in a single line using special subclasses of the Distribution class. For example, as we have seen, the uniformly-distributed discrete variable $switchpoint$ in the coal mining disasters model is created using the automatic interface as follows: End of explanation """ with disaster_model: early_mean = pm.Exponential('early_mean', lam=1) late_mean = pm.Exponential('late_mean', lam=1) """ Explanation: Similarly, the rate parameters can automatically be given exponential priors: End of explanation """ switchpoint.distribution.defaults """ Explanation: PyMC includes most of the probability density functions (for continuous variables) and probability mass functions (for discrete variables) used in statistical modeling. Continuous variables are represented by a specialized subclass of Distribution called Continuous and discrete variables by the Discrete subclass. The main differences between these two sublcasses are in the dtype attribute (int64 for Discrete and float64 for Continuous) and the defaults attribute, which determines which summary statistic to use for initial values when one is not specified ('mode' for Discrete and 'median', 'mean', and 'mode' for Continuous). End of explanation """ pm.Exponential.dist(1) """ Explanation: As we previewed in the introduction, Distribution has a class method dist that returns a probability distribution of that type, without being wrapped in a PyMC random variable object. Sometimes we wish to use a particular statistical distribution, without using it as a variable in a model; for example, to generate random numbers from the distribution. This class method allows that. 
End of explanation """ import numpy as np with pm.Model(): def uniform_logp(value, lower=0, upper=111): """The switchpoint for the rate of disaster occurrence.""" return pm.switch((value > upper) | (value < lower), -np.inf, -np.log(upper - lower + 1)) switchpoint = pm.DensityDist('switchpoint', logp=uniform_logp, dtype='int64') switchpoint.logp({'switchpoint':4}) switchpoint.logp({'switchpoint': 44}) switchpoint.logp({'switchpoint':-1}) """ Explanation: Manual The uniformly-distributed discrete stochastic variable switchpoint in the disasters model could alternatively be created from a function that computes its log-probability as follows: End of explanation """ from pymc3.distributions import Continuous import theano.tensor as tt from theano import as_op class Beta(Continuous): def __init__(self, mu, *args, **kwargs): super(Beta, self).__init__(*args, **kwargs) self.mu = mu self.mode = mu def logp(self, value): mu = self.mu return beta_logp(value - mu) @as_op(itypes=[tt.dscalar], otypes=[tt.dscalar]) def beta_logp(value): return -1.5 * np.log(1 + (value)**2) with pm.Model() as model: beta = Beta('slope', mu=0, testval=0) """ Explanation: A couple of things to notice: while the function specified for the logp argument can be an arbitrary Python function, it must use Theano operators and functions in its body. This is because one or more of the arguments passed to the function may be TensorVariables, and they must be supported. Also, we passed the value to be evaluated by the logp function as a dictionary, rather than as a plain integer. By convention, values in PyMC3 are passed around as a data structure called a Point. Points in parameter space are represented by dictionaries with parameter names as they keys and the value of the parameters as the values. To emphasize, the Python function passed to DensityDist should compute the log-density or log-probability of the variable. 
That is why the return value in the example above is -log(upper-lower+1) rather than 1/(upper-lower+1).
Specifying Custom Distributions
Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $\log(p(x))$. This function may employ other random variables in its calculation. Here is a simple example inspired by a blog post by Jake Vanderplas (Vanderplas, 2014), where Jeffreys priors are used to specify priors that are invariant to transformation. In the case of simple linear regression, these are:
$$p(\beta) \propto (1+\beta^2)^{-3/2}$$
$$p(\sigma) \propto \frac{1}{\sigma}$$
The logarithms of these functions can be specified as the argument to DensityDist and inserted into the model.
```python
import theano.tensor as T
from pymc3 import DensityDist, Uniform

with Model() as model:
    alpha = Uniform('intercept', -100, 100)

    # Create custom densities
    beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0)
    eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1)

    # Create likelihood
    like = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y)
```
For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for particular processes that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.Op.
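The correspondence between these densities and the lambdas passed to DensityDist can be checked numerically with plain NumPy (no PyMC3 required; the function names here are ours):

```python
import numpy as np

def beta_logp(value):
    # log of p(beta) proportional to (1 + beta^2)^(-3/2)
    return -1.5 * np.log(1 + value**2)

def sigma_logp(value):
    # log of p(sigma) proportional to 1/sigma
    return -np.log(np.abs(value))

b = 2.0
print(np.isclose(beta_logp(b), np.log((1 + b**2) ** -1.5)))  # → True
```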
Implementing the beta variable above as a Continuous subclass is shown below, along with a sub-function using the as_op decorator, though this is not strictly necessary.
End of explanation
"""
from pymc3.distributions import Continuous
import theano.tensor as tt
from theano import as_op

class Beta(Continuous):
    def __init__(self, mu, *args, **kwargs):
        super(Beta, self).__init__(*args, **kwargs)
        self.mu = mu
        self.mode = mu

    def logp(self, value):
        mu = self.mu
        return beta_logp(value - mu)

@as_op(itypes=[tt.dscalar], otypes=[tt.dscalar])
def beta_logp(value):
    return -1.5 * np.log(1 + (value)**2)

with pm.Model() as model:
    beta = Beta('slope', mu=0, testval=0)
"""
Explanation: The ObservedRV Class
Stochastic random variables whose values are observed (i.e. data likelihoods) are represented by a different class than unobserved random variables. An ObservedRV object is instantiated any time a stochastic variable is specified with data passed as the observed argument. Otherwise, observed stochastic random variables are created via the same interfaces as unobserved: automatic or manual. As an example of an automatic instantiation, consider a Poisson data likelihood:
End of explanation
"""
with disaster_model:
    disasters = pm.Poisson('disasters', mu=3, observed=[3,4,1,2,0,2,2])
"""
Explanation: We have already seen manual instantiation, from the melanoma survival model where the exponential survival likelihood was implemented manually:
```python
def logp(failure, value):
    return (failure * log(lam) - lam * value).sum()

x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
```
Notice in this example that there are two vectors of observed data for the likelihood x, passed as a dictionary. An important responsibility of ObservedRV is to automatically handle missing values in the data, when they are present (absent?). More on this later.
Deterministic Variables
A deterministic variable is one whose values are completely determined by the values of their parents. For example, in our disasters model, rate is a deterministic variable.
End of explanation
"""
with disaster_model:
    rate = pm.Deterministic('rate', pm.switch(switchpoint >= np.arange(112), early_mean, late_mean))
"""
Explanation: so rate's value can be computed exactly from the values of its parents early_mean, late_mean and switchpoint.
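The switch expression defining rate is just an element-wise selection; in plain NumPy terms (with illustrative values for the two rates and the switchpoint):

```python
import numpy as np

switchpoint = 40
early_mean, late_mean = 3.0, 1.0

years = np.arange(112)
rate = np.where(switchpoint >= years, early_mean, late_mean)

print(rate[:3], rate[-3:])  # → [3. 3. 3.] [1. 1. 1.]
```

Every element of rate is fully determined by the three parent values, which is exactly what makes it a deterministic variable.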
There are two types of deterministic variables in PyMC3
Anonymous deterministic variables
The easiest way to create a deterministic variable is to operate on or transform one or more variables in a model directly. For example, the simplest way to specify the rate variable above is as follows:
End of explanation
"""
with disaster_model:
    rate = pm.switch(switchpoint >= np.arange(112), early_mean, late_mean)
"""
Explanation: Or, let's say we wanted to use the mean of the early_mean and late_mean variables somewhere in our model:
End of explanation
"""
End of explanation """ with disaster_model: rate_constraint = pm.Potential('rate_constraint', pm.switch(pm.abs_(early_mean-late_mean)>1, -np.inf, 0)) """ Explanation: Factor Potentials For some applications, we want to be able to modify the joint density by incorporating terms that don't correspond to probabilities of variables conditional on parents, for example: $$p(x_0, x_2, \ldots x_{N-1}) \propto \prod_{i=0}^{N-2} \psi_i(x_i, x_{i+1})$$ In other cases we may want to add probability terms to existing models. For example, suppose we want to constrain the difference between the early and late means in the disaster model to be less than 1, so that the joint density becomes: $$p(y,\tau,\lambda_1,\lambda_2) \propto p(y|\tau,\lambda_1,\lambda_2) p(\tau) p(\lambda_1) p(\lambda_2) I(|\lambda_2-\lambda_1| \lt 1)$$ We call such log-probability terms factor potentials (Jordan 2004). Bayesian hierarchical notation doesn't accomodate these potentials. Creation of Potentials A potential can be created via the Potential function, in a way very similar to Deterministic's named interface: End of explanation """ y = np.array([15, 10, 16, 11, 9, 11, 10, 18, 11]) x = np.array([1, 2, 4, 5, 6, 8, 19, 18, 12]) with pm.Model() as arma_model: sigma = pm.HalfCauchy('sigma', 5) beta = pm.Normal('beta', 0, sd=2) mu = pm.Normal('mu', 0, sd=10) err = y - (mu + beta*x) like = pm.Potential('like', pm.Normal.dist(0, sd=sigma).logp(err)) """ Explanation: The function takes just a name as its first argument and an expression returning the appropriate log-probability as the second argument. A common use of a factor potential is to represent an observed likelihood, where the observations are partly a function of model variables. In the contrived example below, we are representing the error in a linear regression model as a zero-mean normal random variable. Thus, the "data" in this scenario is the residual, which is a function both of the data and the regression parameters. 
End of explanation """ # Log dose in each group log_dose = [-.86, -.3, -.05, .73] # Sample size in each group n = 5 # Outcomes deaths = [0, 1, 3, 5] ## Write your answer here """ Explanation: This parameterization would not be compatible with an observed stochastic, because the err term would become fixed in the likelihood and not be allowed to change during sampling. Exercise: Bioassay model Gelman et al. (2003) present an example of an acute toxicity test, commonly performed on animals to estimate the toxicity of various compounds. In this dataset log_dose includes 4 levels of dosage, on the log scale, each administered to 5 rats during the experiment. The response variable is death, the number of positive responses to the dosage. The number of deaths can be modeled as a binomial response, with the probability of death being a linear function of dose: $$\begin{aligned} y_i &\sim \text{Bin}(n_i, p_i) \ \text{logit}(p_i) &= a + b x_i \end{aligned}$$ The common statistic of interest in such experiments is the LD50, the dosage at which the probability of death is 50%. Specify this model in PyMC: End of explanation """ from pymc3.examples.gelman_bioassay import model as bioassay_model with bioassay_model: start = pm.find_MAP() start with bioassay_model: trace = pm.sample(100, step=pm.Metropolis(), start=start) """ Explanation: Sampling with MCMC PyMC's core business is using Markov chain Monte Carlo to fit virtually any probability model. This involves the assignment and coordination of a suite of step methods, each of which is responsible for updating one or more variables. The user's interface to PyMC's sampling algorithms is the sample function: python sample(draws, step=None, start=None, trace=None, chain=0, njobs=1, tune=None, progressbar=True, model=None, random_seed=None) sample assigns particular samplers to model variables, and generates samples from them. The draws argument controls the total number of MCMC iterations. 
PyMC can automate most of the details of sampling, outside of the selection of the number of draws, using default settings for several parameters that control how the sampling is set up and conducted. However, users may manually intervene in the specification of the sampling by passing values to a number of keyword argumetns for sample. Assigning step methods The step argument allows users to assign a MCMC sampling algorithm to the entire model, or to a subset of the variables in the model. For example, if we wanted to use the Metropolis-Hastings sampler to fit our model, we could pass an instance of that step method to sample via the step argument: ```python with my_model: trace = sample(1000, step=Metropolis()) ``` or if we only wanted to assign Metropolis to a parameter called β: ```python with my_model: trace = sample(1000, step=Metropolis(vars=[β])) ``` When step is not specified by the user, PyMC3 will assign step methods to variables automatically. To do so, each step method implements a class method called competence. This method returns a value from 0 (incompatible) to 3 (ideal), based on the attributes of the random variable in question. sample assigns the step method that returns the highest competence value to each of its unallocated stochastic random variables. In general: Binary variables will be assigned to BinaryMetropolis (Metropolis-Hastings for binary values) Discrete variables will be assigned to Metropolis Continuous variables will be assigned to NUTS (No U-turn Sampler) Starting values The start argument allows for the specification of starting values for stochastic random variables in the model. MCMC algorithms begin by initializing all unknown quantities to arbitrary starting values. Though in theory the value can be any value under the support of the distribution describing the random variable, we can make sampling more difficult if an initial value is chosen in the extreme tail of the distribution, for example. 
If starting values are not passed by the user, default values are chosen from the mean, median or mode of the distribution. As suggested in the previous section on approximation methods, it is sometimes useful to initialize a MCMC simulation at the maximum a posteriori (MAP) estimate: End of explanation """ trace """ Explanation: If we are sampling more than one Markov chain from our model, it is often recommended to initialize each chain to different starting values, so that lack of convergence can be more easily detected (see Model Checking section). Storing samples Notice in the above call to sample that output is assigned to a variable we have called trace. End of explanation """ with bioassay_model: db_trace = pm.sample(100, trace='sqlite') """ Explanation: This MultiTrace object is a data structure that stores the samples from an MCMC run in a tabular structure. By default, sample will create a new MultiTrace object that stores its samples in memory, as a NumPy ndarray. We can override the default behavior by specifying the trace argument. There are three options: Selecting an alternative database backend to keeping samples in an ndarray. Passing either "text" or "sqlite", for example, will save samples to text files or a SQLite database, respectively. An instance of a backend can also be passed. Passing a list of variables will only record samples for the subset of variables specified in the list. These will be stored in memory. An existing MultiTrace object. This will add samples to an existing backend. End of explanation """ with bioassay_model: ptrace = pm.sample(100, njobs=4) """ Explanation: We will look at the various database backends in greater detail in the next section. Parallel sampling Nearly all modern desktop computers have multiple CPU cores, and running multiple MCMC chains is an embarrasingly parallel computing task. It is therefore relatively simple to run chains in parallel in PyMC3. 
This is done by setting the njobs argument in sample to some value between 2 and the number of cores on your machine (you can specify more chains than cores, but you will not gain efficiency by doing so). The default value of njobs is 1 (i.e. no parallel sampling) and specifying None will select the 2 CPUs fewer than the number of cores on your machine. End of explanation """ ptrace['alpha'].shape """ Explanation: Running $n$ iterations with $c$ chains will result in $n \times c$ samples. End of explanation """ with bioassay_model: ptrace = pm.sample(100, njobs=2, start=[{'alpha':-2}, {'alpha':2}]) [chain[:5] for chain in ptrace.get_values('alpha', combine=False)] """ Explanation: If you want to specify different arguments for each chain, a list of argument values can be passed to sample as appropriate. For example, if we want to initialize random variables to particular (e.g. dispersed) values, we can pass a list of dictionaries to start: End of explanation """ with bioassay_model: rtrace = pm.sample(100, random_seed=42) rtrace['beta', -5:] """ Explanation: Generating several chains is generally recommended because it aids in model checking, allowing statistics such as the potential scale reduction factor ($\hat{R}$) and effective sample size to be calculated. Reproducible sampling A practical drawback of using stochastic sampling methods for statistical inference is that it can be more difficult to reproduce individual results, due to the fact that sampling involves the use of pseudo-random number generation. To aid in reproducibility (and debugging), it can be helpful to set a random number seed prior to sampling. The random_seed argument can be used to set PyMC's random number generator to a particular seed integer, which results in the same sequence of random numbers each time the seed is set to the same value. 
End of explanation
"""

with bioassay_model:
    rtrace = pm.sample(100, random_seed=42)

rtrace['beta', -5:]

"""
Explanation: Setting the same seed for another run of the same model will generate the same sequence of samples:
End of explanation
"""

with bioassay_model:
    rtrace = pm.sample(100, random_seed=42)

rtrace['beta', -5:]

"""
Explanation: Step methods
Step method classes handle individual stochastic variables, or sometimes groups of them. They are responsible for making the variables they handle take single MCMC steps conditional on the rest of the model. Each PyMC step method (usually subclasses of ArrayStep) implements a method called astep(), which is called iteratively by sample.
All step methods share an optional argument vars that allows a particular subset of variables to be handled by the step method instance. Particular step methods will have additional arguments for setting parameters and preferences specific to that sampling algorithm.
NB: when a PyMC function or method has an argument called vars it is expecting a list of variables (i.e. the variables themselves), whereas arguments called varnames expect a list of variable names (i.e. strings).
HamiltonianMC
The Hamiltonian Monte Carlo algorithm is implemented in the HamiltonianMC class. Being a gradient-based sampler, it is only suitable for continuous random variables. Several optional arguments can be provided by the user. The algorithm is non-adaptive, so the parameter values passed at instantiation are fixed at those values throughout sampling. HamiltonianMC requires a scaling matrix parameter scaling, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although it is used somewhat differently here.
The matrix gives an approximate shape of the posterior distribution, so that HamiltonianMC does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions.
Fortunately, HamiltonianMC can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find_MAP), it will look at the local curvature of the log posterior-density (the diagonal of the Hessian matrix) at that point to guess a good value for the scaling vector. Also, the MAP estimate is often a good point to use to initiate sampling.

scaling : Scaling for momentum distribution. If a 1-dimensional array is passed, it is interpreted as a matrix diagonal.
step_scale : Size of steps to take, automatically scaled down by $1/n^{0.25}$. Defaults to .25.
path_length : total length to travel during leapfrog. Defaults to 2.
is_cov : Flag for treating scaling as a covariance matrix/vector, if True. Treated as precision otherwise.
step_rand : A function which takes the step size and returns a new one used to randomize the step size at each iteration.

NUTS
NUTS is the No U-turn Sampler of Hoffman and Gelman (2014), an adaptive version of Hamiltonian MC that automatically tunes the step size and number of steps on the fly. In addition to the arguments to HamiltonianMC, NUTS takes additional parameters to control the tuning. The most important of these is the target acceptance rate for the Metropolis acceptance phase of the algorithm, target_accept.
If NUTS struggles to sample efficiently, raising this parameter above the default target rate of 0.8 can improve sampling (the original recommendation by Hoffman & Gelman was 0.6). Note, however, that setting the rate very high also makes the sampler more conservative, taking many small steps at every iteration.
End of explanation
"""

with bioassay_model:
    trace_90 = pm.sample(100, step=pm.NUTS(target_accept=0.9))

%matplotlib inline
pm.traceplot(trace_90, varnames=['alpha']);

with bioassay_model:
    trace_99 = pm.sample(100, step=pm.NUTS(target_accept=0.99))

pm.traceplot(trace_99, varnames=['alpha']);

"""
Explanation: There is rarely a reason to use HamiltonianMC rather than NUTS, which is the default sampler for continuous variables in PyMC3.
Metropolis
Metropolis implements a Metropolis-Hastings step, as described in the theory section, and is designed to handle float- and integer-valued variables.
A Metropolis step method can be instantiated with any of several optional arguments:

S : This sets the proposal standard deviation or covariance matrix.
proposal_dist : A function that generates zero-mean random deviates used as proposals. Defaults to the normal distribution.
scaling : An initial scale factor for the proposal
tune_interval : The number of intervals between tuning updates to scaling factor.

When the step method is instantiated, the proposal_dist is parameterized with the value passed for S. While sampling, the value of scaling is used to scale the value proposed by proposal_dist, and this value is tuned throughout the MCMC run. During tuning, the acceptance ratio of the step method is examined, and this scaling factor is updated accordingly. Tuning only occurs when the acceptance rate is lower than 20% or higher than 50%; rates between 20-50% are considered optimal for Metropolis-Hastings sampling. The default tuning interval (tune_interval) is 100 iterations.
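The tuning rule just described can be illustrated with a self-contained random-walk Metropolis sampler for a standard normal target (a sketch of the idea, not PyMC3's actual implementation):

```python
import numpy as np

def tuned_metropolis(logp, n_samples, scaling=1.0, tune_interval=100, seed=0):
    """Random-walk Metropolis with acceptance-rate-based scale tuning."""
    rng = np.random.RandomState(seed)
    x, samples, accepted = 0.0, [], 0
    for i in range(1, n_samples + 1):
        proposal = x + scaling * rng.normal()
        # Accept with probability min(1, p(proposal) / p(current))
        if np.log(rng.uniform()) < logp(proposal) - logp(x):
            x, accepted = proposal, accepted + 1
        samples.append(x)
        # Periodically adjust the proposal scale from the acceptance rate,
        # mirroring the 20%-50% band described above
        if i % tune_interval == 0:
            rate = accepted / tune_interval
            if rate < 0.2:
                scaling *= 0.9   # too few acceptances: take smaller steps
            elif rate > 0.5:
                scaling *= 1.1   # too many acceptances: take larger steps
            accepted = 0
    return np.array(samples), scaling

# Standard normal target, log-density up to an additive constant
samples, final_scale = tuned_metropolis(lambda z: -0.5 * z * z, 5000)
```

After a burn-in, the samples should look like draws from a standard normal, with the proposal scale settled into the efficient acceptance band.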
Although tuning will continue throughout the sampling loop, it is important to verify that the diminishing tuning condition of Roberts and Rosenthal (2007) is satisfied: the amount of tuning should decrease to zero, or tuning should become very infrequent.
Metropolis handles discrete variable types automatically by rounding the proposed values and casting them to integers.
BinaryMetropolis
While binary (boolean) variables can be handled by the Metropolis step method, sampling will be very inefficient. The BinaryMetropolis class is optimized to handle binary variables, which take one of only two possible values. The only tunable parameter is the scaling argument, which is used to vary the Bernoulli probability:
p_jump = 1. - .5 ** self.scaling
This value is compared to pseudo-random numbers generated by the step method, to determine whether a 0 or 1 is proposed.
BinaryMetropolis will be automatically selected for random variables that are distributed as Bernoulli, or categorical with only 2 categories.
Slice
Though the Metropolis-Hastings algorithm is easy to implement for a variety of models, its efficiency is poor. We have seen that it is possible to tune Metropolis samplers, but it would be nice to have a "black-box" method that works for arbitrary continuous distributions, which we may know little about a priori. The slice sampler (Neal 2003) improves upon the Metropolis sampler by being both efficient and easy to program generally. The idea is to first sample an auxiliary value $y$, uniform over the interval $(0, f(x))$ given some current value of $x$, and then, conditional on this value for $y$, sample $x$ uniformly on the slice $S = \{x : y < f(x)\}$.
The steps required to perform a single iteration of the slice sampler to update the current value of $x_i$ are as follows:

Sample $y$ uniformly on $(0, f(x_i))$.
Use this value $y$ to define a horizontal slice $S = \{x : y < f(x)\}$.
Establish an interval, $I = (x_a, x_b)$, around $x_i$ that contains most of the slice.
Sample $x_{i+1}$ from the region of the slice overlapping $I$.

Hence, slice sampling employs an auxiliary variable ($y$) that is not retained at the end of the iteration. Note that in practice one may operate on the log scale such that $g(x) = \log(f(x))$ to avoid floating-point underflow. In this case, the auxiliary variable becomes $z = \log(y) = g(x_i) - e$, where $e \sim \text{Exp}(1)$, resulting in the slice $S = \{x : z < g(x)\}$.
There are many ways of establishing and sampling from the interval $I$, with the only restriction being that the resulting Markov chain leaves $f(x)$ invariant. The objective is to include as much of the slice as possible, so that the potential step size can be large, but not (much) larger than the slice, so that the sampling of invalid points is minimized. Ideally, we would like it to be the slice itself, but it may not always be feasible to determine (and certainly not automatically).
In PyMC3, the Slice class implements the univariate slice sampler. It is suitable for univariate, continuous variables. There is a single user-defined parameter w, which sets the width of the initial slice. If not specified, it defaults to a width of 1.
End of explanation
"""

%matplotlib inline

with bioassay_model:
    slice_trace = pm.sample(5000, step=pm.Slice())

pm.traceplot(slice_trace[1000:], varnames=['alpha','beta']);

"""
Explanation: PyMC3 also includes an implementation of adaptive transitional Markov chain Monte Carlo (ATMCMC, Ching & Chen 2007), which we will not cover here. Consult the documentation for details.
Imputation of Missing Data
As with most textbook examples, the models we have examined so far assume that the associated data are complete.
That is, there are no missing values corresponding to any observations in the dataset. However, many real-world datasets have missing observations, usually due to some logistical problem during the data collection process. The easiest way of dealing with observations that contain missing values is simply to exclude them from the analysis. However, this results in loss of information if an excluded observation contains valid values for other quantities, and can bias results. An alternative is to impute the missing values, based on information in the rest of the model.
For example, consider a survey dataset for some wildlife species:

Count   Site   Observer   Temperature
-----   ----   --------   -----------
   15      1          1            15
   10      1          2            NA
    6      1          1            11

Each row contains the number of individuals seen during the survey, along with three covariates: the site on which the survey was conducted, the observer that collected the data, and the temperature during the survey. If we are interested in modelling, say, population size as a function of the count and the associated covariates, it is difficult to accommodate the second observation because the temperature is missing (perhaps the thermometer was broken that day). Ignoring this observation will allow us to fit the model, but it wastes information that is contained in the other covariates.
In a Bayesian modelling framework, missing data are accommodated simply by treating them as unknown model parameters. Values for the missing data $\tilde{y}$ are estimated naturally, using the posterior predictive distribution:
$$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$
This describes additional data $\tilde{y}$, which may either be considered unobserved data or potential future observations. We can use the posterior predictive distribution to model the likely values of missing data.
Consider the coal mining disasters data introduced previously.
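NumPy's masked arrays, which are used below to flag the missing values, behave like ordinary arrays except that masked entries are skipped in computations. A quick standalone illustration:

```python
import numpy as np

# Mark the placeholder value -999 as missing
raw = np.array([3, 1, -999, 4, -999, 2])
masked = np.ma.masked_values(raw, value=-999)

masked.mask     # [False, False, True, False, True, False]
masked.sum()    # 10: masked entries are ignored
masked.count()  # 4 unmasked observations
```

Any computation on the masked array silently excludes the flagged entries, which is exactly the behavior PyMC exploits when it replaces them with stochastic variables.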
Assume that two years of data are missing from the time series; we indicate this in the data array by the use of an arbitrary placeholder value, -999:
End of explanation
"""

disasters_missing = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])

"""
Explanation: To estimate these values in PyMC, we cast the data to a masked array. These are specialised NumPy arrays that contain a matching True or False value for each element to indicate if that value should be excluded from any computation. Masked arrays can be generated using NumPy's ma.masked_values function:
End of explanation
"""

disasters_masked = np.ma.masked_values(disasters_missing, value=-999)
disasters_masked

"""
Explanation: This masked array, in turn, can then be passed to one of PyMC's data stochastic variables, which recognizes the masked array and replaces the missing values with stochastic variables of the desired type. For the coal mining disasters problem, recall that disaster events were modeled as Poisson variates:
python
disasters = Poisson('disasters', mu=rate, observed=masked_values)
Each element in disasters is a Poisson random variable, irrespective of whether the observation was missing or not. The difference is that actual observations are assumed to be data stochastics, while the missing values are unobserved stochastics.
The latter are considered unknown, rather than fixed, and therefore estimated by the fitting algorithm, just as unknown model parameters are. The entire model looks very similar to the original model:
End of explanation
"""

with pm.Model() as missing_data_model:

    # Prior for distribution of switchpoint location
    switchpoint = pm.DiscreteUniform('switchpoint', lower=0, upper=len(disasters_masked))
    # Priors for pre- and post-switch mean number of disasters
    early_mean = pm.Exponential('early_mean', lam=1.)
    late_mean = pm.Exponential('late_mean', lam=1.)

    # Allocate appropriate Poisson rates to years before and after current
    # switchpoint location
    idx = np.arange(len(disasters_masked))
    rate = pm.Deterministic('rate', pm.switch(switchpoint >= idx, early_mean, late_mean))

    # Data likelihood
    disasters = pm.Poisson('disasters', rate, observed=disasters_masked)

"""
Explanation: Here, we have used the masked_values function, rather than masked_equal, with the value -999 as the placeholder for missing data. The result is the same.
End of explanation
"""

with missing_data_model:
    trace_missing = pm.sample(2000)

missing_data_model.vars

from pymc3 import forestplot

pm.forestplot(trace_missing, varnames=['disasters_missing'])

"""
Explanation: Generalized Linear Models
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.
The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example:
End of explanation
"""

import pandas as pd

# Generate data
size = 50
true_intercept = 1
true_slope = 2

x = np.linspace(0, 1, size)
y = true_intercept + x*true_slope + np.random.normal(scale=.5, size=size)

data = pd.DataFrame(dict(x=x, y=y))

"""
Explanation: The model can then be very concisely specified in one line of code.
End of explanation
"""

from pymc3.glm import glm

with pm.Model() as model:
    glm('y ~ x', data)
    fit = pm.advi(n=50000)

fit[0]

"""
Explanation: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
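To make the link concrete: a Binomial family with its default logit link models the probability of the binary outcome as the inverse-logit of the linear predictor. A minimal NumPy sketch of that transformation (illustration only, not PyMC3 code):

```python
import numpy as np

def inv_logit(eta):
    """Map a linear predictor to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-eta))

# Linear predictor eta = intercept + slope * x for a few x values
eta = -1.0 + 2.0 * np.array([0.0, 0.5, 1.0])
p = inv_logit(eta)

# eta = 0 corresponds to p = 0.5; large |eta| saturates towards 0 or 1
```

The regression coefficients estimated by the logistic GLM below act on eta, not directly on the probability scale.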
End of explanation
"""

from pymc3.glm.families import Binomial

df_logistic = pd.DataFrame({'x': x, 'y': y > np.median(y)})

with pm.Model() as model_glm_logistic:
    glm('y ~ x', df_logistic, family=Binomial())

"""
Explanation: Backends
PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. These can be found in pymc3.backends:
By default, an in-memory ndarray is used, but if the samples would get too large to be held in memory we could use the sqlite backend:
End of explanation
"""

from pymc3.backends import SQLite

with pm.Model() as model_glm_logistic:
    glm('y ~ x', df_logistic, family=Binomial())

    backend = SQLite('trace.sqlite')
    trace = pm.sample(2000, trace=backend)

"""
Explanation: The stored trace can then later be loaded using the load command:
End of explanation
"""

from pymc3.backends.sqlite import load

with model_glm_logistic:
    trace_loaded = load('trace.sqlite')

trace_loaded

"""
Explanation: The stored trace can then later be loaded using the load command:
End of explanation
"""
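What a SQLite backend does is conceptually simple: each draw becomes a row in a table keyed by chain and iteration, and loading the trace is just a query. A stdlib-only sketch of that storage pattern (the table layout here is illustrative, not PyMC3's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path to persist across sessions
conn.execute("CREATE TABLE trace (chain INTEGER, draw INTEGER, alpha REAL)")

# Pretend these rows came from a sampler: (chain, draw index, sampled value)
fake_samples = [(0, i, 0.1 * i) for i in range(5)]
conn.executemany("INSERT INTO trace VALUES (?, ?, ?)", fake_samples)

# 'Loading the trace' is then just a SELECT
loaded = conn.execute(
    "SELECT alpha FROM trace WHERE chain = 0 ORDER BY draw").fetchall()
```

Persisting row by row is what lets a SQLite-backed run grow beyond available memory.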
Source notebook: xaibeing/cn-deep-learning, image-classification/dlnd_image_classification.ipynb (MIT license)
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: 图像分类 在此项目中,你将对 CIFAR-10 数据集 中的图片进行分类。该数据集包含飞机、猫狗和其他物体。你需要预处理这些图片,然后用所有样本训练一个卷积神经网络。图片需要标准化(normalized),标签需要采用 one-hot 编码。你需要应用所学的知识构建卷积的、最大池化(max pooling)、丢弃(dropout)和完全连接(fully connected)的层。最后,你需要在样本图片上看到神经网络的预测结果。 获取数据 请运行以下单元,以下载 CIFAR-10 数据集(Python版)。 End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: 探索数据 该数据集分成了几部分/批次(batches),以免你的机器在计算时内存不足。CIFAR-10 数据集包含 5 个部分,名称分别为 data_batch_1、data_batch_2,以此类推。每个部分都包含以下某个类别的标签和图片: 飞机 汽车 鸟类 猫 鹿 狗 青蛙 马 船只 卡车 了解数据集也是对数据进行预测的必经步骤。你可以通过更改 batch_id 和 sample_id 探索下面的代码单元。batch_id 是数据集一个部分的 ID(1 到 5)。sample_id 是该部分中图片和标签对(label pair)的 ID。 问问你自己:“可能的标签有哪些?”、“图片数据的值范围是多少?”、“标签是按顺序排列,还是随机排列的?”。思考类似的问题,有助于你预处理数据,并使预测结果更准确。 End of explanation """ def normalize(x): """ 
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    return x / 255

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_normalize(normalize)

"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The returned object should be the same shape as x.
End of explanation
"""

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder()
each_label = np.array(list(range(10))).reshape(-1,1)
enc.fit(each_label)
print(enc.n_values_)
print(enc.feature_indices_)

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    X = np.array(x).reshape(-1, 1)
    return enc.transform(X).toarray()

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_one_hot_encode(one_hot_encode)

"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as one-hot encoded Numpy arrays. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value every time one_hot_encode is called. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""

""" DON'T MODIFY ANYTHING IN THIS CELL """
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""

""" DON'T MODIFY ANYTHING IN THIS CELL """
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

"""
Explanation: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""

import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x')

def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')

def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=None, name='keep_prob')

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)

"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolution and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
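Before implementing the layers, it helps to be able to predict tensor shapes. TensorFlow's 'SAME' and 'VALID' padding rules reduce to two small formulas, sketched here in plain Python (no TensorFlow required; the numbers match the 32x32 CIFAR-10 images used in this project):

```python
import math

def conv_output_size(n, k, s, padding):
    """Spatial output size for input size n, kernel k, stride s."""
    if padding == "SAME":
        # TensorFlow pads the input so only the stride matters
        return math.ceil(n / s)
    if padding == "VALID":
        # No padding: the kernel must fit entirely inside the input
        return math.ceil((n - k + 1) / s)
    raise ValueError(padding)

# A 'SAME' convolution with stride 1 preserves the size, and each 2x2
# 'VALID' max pool with stride 2 halves it: 32 -> 16 -> 8 -> 4 after
# three conv + pool blocks.
size = 32
for _ in range(3):
    size = conv_output_size(size, 3, 1, "SAME")    # convolution
    size = conv_output_size(size, 2, 2, "VALID")   # max pooling
```

Keeping this arithmetic in mind makes it easy to check that the flattened size feeding the fully connected layers is what you expect.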
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

Implement neural_net_image_input
Return a TF Placeholder
Set the shape using image_shape with batch size set to None
Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder
Implement neural_net_label_input
Return a TF Placeholder
Set the shape using n_classes with batch size set to None
Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder
Implement neural_net_keep_prob_input
Return a TF Placeholder for dropout keep probability
Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder

These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
"""

def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs],
                                              stddev=0.1))
    biases = tf.Variable(tf.zeros([conv_num_outputs]))
    net = tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], 'SAME')
    net = tf.nn.bias_add(net, biases)
    net = tf.nn.relu(net)
    pool_kernel = [1, pool_ksize[0], pool_ksize[1], 1]
    pool_strides = [1, pool_strides[0], pool_strides[1], 1]
    net = tf.nn.max_pool(net, pool_kernel, pool_strides, 'VALID')
    return net

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_con_pool(conv2d_maxpool)

"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
Apply a convolution to x_tensor using weight and conv_strides.
We recommend you use same padding, but you're welcome to use any padding.
Add bias
Add a nonlinear activation to the convolution.
Apply Max Pooling using pool_ksize and pool_strides.
We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""

import numpy as np

def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    shape = x_tensor.get_shape().as_list()
    dim = np.prod(shape[1:])
    x_tensor = tf.reshape(x_tensor, [-1,dim])
    return x_tensor

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_flatten(flatten)

"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""

def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
""" # TODO: Implement Function weights = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[-1], num_outputs], mean=0, stddev=1)) biases = tf.Variable(tf.zeros(shape=[num_outputs])) net = tf.nn.bias_add(tf.matmul(x_tensor, weights), biases) net = tf.nn.relu(net) return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: 全连接层 实现 fully_conn 函数,以向 x_tensor 应用完全连接的层级,形状为(部分大小(Batch Size),num_outputs)。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。 End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function weights = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[-1], num_outputs], mean=0, stddev=1)) biases = tf.Variable(tf.zeros(shape=[num_outputs])) net = tf.nn.bias_add(tf.matmul(x_tensor, weights), biases) return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: 输出层 实现 output 函数,向 x_tensor 应用完全连接的层级,形状为(部分大小(Batch Size),num_outputs)。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。 注意:该层级不应应用 Activation、softmax 或交叉熵(cross entropy)。 End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) # net = conv2d_maxpool(x, 32, (5,5), (1,1), (2,2), (2,2)) # net = tf.nn.dropout(net, keep_prob) # net = conv2d_maxpool(net, 32, (5,5), (1,1), (2,2), (2,2)) # net = conv2d_maxpool(net, 64, (5,5), (1,1), (2,2), (2,2)) net = conv2d_maxpool(x, 32, (3,3), (1,1), (2,2), (2,2)) net = tf.nn.dropout(net, keep_prob) net = conv2d_maxpool(net, 64, (3,3), (1,1), (2,2), (2,2)) net = tf.nn.dropout(net, keep_prob) net = conv2d_maxpool(net, 64, (3,3), (1,1), (2,2), (2,2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) net = flatten(net) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) net = fully_conn(net, 64) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) net = output(net, enc.n_values_) # TODO: return output return net """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) """ Explanation: 创建卷积模型 实现函数 conv_net, 创建卷积神经网络模型。该函数传入一批图片 x,并输出对数(logits)。使用你在上方创建的层创建此模型: 应用 1、2 或 3 个卷积和最大池化层(Convolution and Max Pool layers) 应用一个扁平层(Flatten Layer) 应用 1、2 或 3 个完全连接层(Fully Connected Layers) 应用一个输出层(Output Layer) 返回输出 使用 keep_prob 向模型中的一个或多个层应用 TensorFlow 的 Dropout End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function _ = session.run([optimizer, cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability}) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: 训练神经网络 单次优化 实现函数 train_neural_network 以进行单次优化(single optimization)。该优化应该使用 optimizer 优化 session,其中 feed_dict 具有以下参数: x 表示图片输入 y 表示标签 keep_prob 表示丢弃的保留率 每个部分都会调用该函数,所以 tf.global_variables_initializer() 已经被调用。 注意:不需要返回任何内容。该函数只是用来优化神经网络。 End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow 
session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function valid_loss, valid_accuracy = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1}) print("valid loss {:.3f}, accuracy {:.3f}".format(valid_loss, valid_accuracy)) """ Explanation: 显示数据 实现函数 print_stats 以输出损失和验证准确率。使用全局变量 valid_features 和 valid_labels 计算验证准确率。使用保留率 1.0 计算损失和验证准确率(loss and validation accuracy)。 End of explanation """ # TODO: Tune Parameters epochs = 100 batch_size = 256 keep_probability = 0.8 """ Explanation: 超参数 调试以下超参数: * 设置 epochs 表示神经网络停止学习或开始过拟合的迭代次数 * 设置 batch_size,表示机器内存允许的部分最大体积。大部分人设为以下常见内存大小: 64 128 256 ... 设置 keep_probability 表示使用丢弃时保留节点的概率 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: 在单个 CIFAR-10 部分上训练 我们先用单个部分,而不是用所有的 CIFAR-10 批次训练神经网络。这样可以节省时间,并对模型进行迭代,以提高准确率。最终验证准确率达到 50% 或以上之后,在下一部分对所有数据运行模型。 End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): 
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0

        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)

test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If not, keep tweaking the model architecture and parameters.
End of explanation
"""
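The batched accuracy loop in test_model relies on the course's helper module. A minimal sketch of what such a batching generator could look like — the name batch_features_labels and its exact behavior are assumptions here, not the course implementation:

```python
import numpy as np

def batch_features_labels(features, labels, batch_size):
    """Yield (features, labels) slices of at most batch_size items each."""
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        yield features[start:end], labels[start:end]

# Tiny demonstration: 10 items in batches of 4 -> batches of 4, 4, and 2
features = np.arange(10).reshape(10, 1)
labels = np.arange(10)
batches = list(batch_features_labels(features, labels, 4))
```

Averaging a per-batch metric over such batches, as test_model does, keeps memory bounded regardless of test-set size.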
zomansud/coursera
ml-regression/week-6/week-6-local-regression-assignment-blank.ipynb
mit
import graphlab """ Explanation: Predicting house prices using k-nearest neighbors regression In this notebook, you will implement k-nearest neighbors regression. You will: * Find the k-nearest neighbors of a given query input * Predict the output for the query input using the k-nearest neighbors * Choose the best value of k using a validation set Fire up GraphLab Create End of explanation """ sales = graphlab.SFrame('kc_house_data_small.gl/') """ Explanation: Load in house sales data For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset. End of explanation """ sales.head() """ Explanation: Import useful functions from previous notebooks End of explanation """ import numpy as np # note this allows us to refer to numpy as np instead def get_numpy_data(data_sframe, features, output): data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame # add the column 'constant' to the front of the features list so that we can extract it along with the others: features = ['constant'] + features # this is how you combine two lists print features # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant): features_sframe = data_sframe[features] # the following line will convert the features_SFrame into a numpy matrix: feature_matrix = features_sframe.to_numpy() # assign the column of data_sframe associated with the output to the SArray output_sarray output_sarray = data_sframe[output] # the following will convert the SArray into a numpy array by first converting it to a list output_array = output_sarray.to_numpy() return(feature_matrix, output_array) """ Explanation: To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2. 
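For readers without GraphLab Create, the same conversion idea can be sketched with plain NumPy — prepend a constant column of ones and stack the chosen feature columns into a 2D array. This is only an illustration of what get_numpy_data produces, not the course's implementation, and the example numbers are made up:

```python
import numpy as np

def to_feature_matrix(columns, output):
    """Stack feature columns (1D sequences) into a matrix with a leading constant column."""
    n = len(output)
    feature_matrix = np.column_stack(
        [np.ones(n)] + [np.asarray(c, dtype=float) for c in columns])
    output_array = np.asarray(output, dtype=float)
    return feature_matrix, output_array

# Hypothetical toy data: two features, three houses
sqft = [1000.0, 1500.0, 2000.0]
bedrooms = [2.0, 3.0, 4.0]
price = [300000.0, 450000.0, 600000.0]
X, y = to_feature_matrix([sqft, bedrooms], price)
```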
End of explanation """ def normalize_features(feature_matrix): norms = np.linalg.norm(feature_matrix, axis=0) return feature_matrix / norms, norms """ Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below. End of explanation """ (train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split (train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets """ Explanation: Split data into training, test, and validation sets End of explanation """ feature_list = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade', 'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated', 'lat', 'long', 'sqft_living15', 'sqft_lot15'] features_train, output_train = get_numpy_data(train, feature_list, 'price') features_test, output_test = get_numpy_data(test, feature_list, 'price') features_valid, output_valid = get_numpy_data(validation, feature_list, 'price') """ Explanation: Extract features and normalize Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays: End of explanation """ features_train, norms = normalize_features(features_train) # normalize training set features (columns) features_test = features_test / norms # normalize test set by training set norms features_valid = features_valid / norms # normalize validation set by training set norms """ Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm. 
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently. End of explanation """ features_test[0] """ Explanation: Compute a single distance To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set. To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1. End of explanation """ features_train[9] """ Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1. End of explanation """ def euclidean_dist(xq, xi): return np.sqrt(np.sum((xq - xi) ** 2)) round(euclidean_dist(features_test[0], features_train[9]), 3) """ Explanation: QUIZ QUESTION What is the Euclidean distance between the query house and the 10th house of the training set? Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once. End of explanation """ xq = features_test[0] dist_min = float('inf') closest_train_house = -1 for i in xrange(len(features_train[0:10])): xi = features_train[i] dist = euclidean_dist(xq, xi) print "Train house " + str(i) + " distance to query = " + str(dist) if dist < dist_min: dist_min = dist closest_train_house = i """ Explanation: Compute multiple distances Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set. 
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. By restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
"""
closest_train_house
"""
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
End of explanation
"""
for i in xrange(3):
    print features_train[i]-features_test[0] # should print 3 vectors of length 18
"""
Explanation: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]): End of explanation """ print features_train[0:3] - features_test[0] """ Explanation: The subtraction operator (-) in Numpy is vectorized as follows: End of explanation """ # verify that vectorization works results = features_train[0:3] - features_test[0] print results[0] - (features_train[0]-features_test[0]) # should print all 0's if results[0] == (features_train[0]-features_test[0]) print results[1] - (features_train[1]-features_test[0]) # should print all 0's if results[1] == (features_train[1]-features_test[0]) print results[2] - (features_train[2]-features_test[0]) # should print all 0's if results[2] == (features_train[2]-features_test[0]) """ Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below: End of explanation """ diff = features_train - xq """ Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation. Perform 1-nearest neighbor regression Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house. 
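The broadcasting pattern described here can be checked on a tiny synthetic example (random numbers, not the house data): NumPy subtracts the query vector from every row of the matrix at once, and the result matches the row-by-row loop exactly.

```python
import numpy as np

rng = np.random.RandomState(0)
train = rng.rand(5, 3)   # 5 "houses", 3 features each
query = rng.rand(3)      # one query point

# Broadcasting: query is subtracted from every row of train in one step
diff = train - query

# The same computation done row by row, for comparison
loop_diff = np.array([train[i] - query for i in range(5)])

# Row-wise Euclidean distances, via axis=1
distances = np.sqrt(np.sum(diff ** 2, axis=1))
```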
End of explanation """ print diff[-1].sum() # sum of the feature differences between the query and last training house # should print -0.0934339605842 """ Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842: End of explanation """ print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above """ Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff). By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specifiy the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row. Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone. End of explanation """ distances = np.sqrt(np.sum(diff ** 2, axis=1)) """ Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances. Hint: Do not forget to take the square root of the sum of squares. 
End of explanation """ print distances[100] # Euclidean distance between the query house and the 101th training house # should print 0.0237082324496 """ Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496: End of explanation """ def one_nn(xq, x): # compute distances of xq with all of the x diff = x - xq distances = np.sqrt(np.sum(diff ** 2, axis=1)) return distances.argmin(), distances.min() """ Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query. End of explanation """ xq = features_test[2] closest_train_index, min_dist = one_nn(xq, features_train) print closest_train_index print train[closest_train_index]['price'] """ Explanation: QUIZ QUESTIONS Take the query house to be third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house? What is the predicted value of the query house based on 1-nearest neighbor regression? 
End of explanation """ np.array([i for i in xrange(5)])[0:4] def k_nn(xq, x, k): # compute distances of xq with all of the x diff = x - xq distances = np.sqrt(np.sum(diff ** 2, axis=1)) # find the k nearest neighbour knn_distances = np.sort(distances[0:k]) knn_indices = np.array([i for i in xrange(k)]) for i in xrange(k, len(distances)): # current distance d = distances[i] if d < knn_distances[k-1]: # potential entry into knn temp_dist = np.copy(knn_distances) temp_dist = np.append(temp_dist, d) temp_idx = np.copy(knn_indices) temp_idx = np.append(temp_idx, i) # find argsort of temp idx_sort = np.argsort(temp_dist) # find the index of the value `k` in the argsorted list idx = np.where(idx_sort == k)[0][0] # adjust knns knn_distances[idx + 1: k] = knn_distances[idx: k-1 ] knn_indices[idx + 1: k] = knn_indices[idx : k -1 ] knn_distances[idx] = d knn_indices[idx] = i return knn_indices, knn_distances """ Explanation: Perform k-nearest neighbor regression For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors. Fetch k-nearest neighbors Using the functions above, implement a function that takes in * the value of k; * the feature matrix for the training houses; and * the feature vector of the query house and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house. Hint: Look at the documentation for np.argsort. End of explanation """ xq = features_test[2] k = 4 closest_houses, houses_dist = k_nn(xq, features_train, k) closest_houses """ Explanation: QUIZ QUESTION Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house? 
End of explanation """ def predict_price(xq, x, k, x_output): # get the k closest indices closest_houses, houses_dist = k_nn(xq, x, k) # average the price price = np.array(x_output)[closest_houses].mean() return price """ Explanation: Make a single prediction by averaging k nearest neighbor outputs Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters: * the value of k; * the feature matrix for the training houses; * the output values (prices) of the training houses; and * the feature vector of the query house, whose price we are predicting. The function should return a predicted value of the query house. Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses. End of explanation """ predict_price(features_test[2], features_train, 4, train['price']) """ Explanation: QUIZ QUESTION Again taking the query house to be third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above. End of explanation """ features_test.shape[0] def predict_prices(q, x, k, x_output): prices = [] for i in xrange(q.shape[0]): xq = q[i] prices.append(predict_price(xq, x, k, x_output)) return prices """ Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier. Make multiple predictions Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) 
The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters: * the value of k; * the feature matrix for the training houses; * the output values (prices) of the training houses; and * the feature matrix for the query set. The function should return a set of predicted values, one for each house in the query set. Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation. End of explanation """ prices_test_10 = predict_prices(features_test[0:10], features_train, 10, train['price']) prices_test_10 np.array(prices_test_10).argmin() """ Explanation: QUIZ QUESTION Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10. What is the index of the house in this query set that has the lowest predicted value? What is the predicted value of this house? End of explanation """ rss_validation_all = [] for k in xrange(1, 16): prediction = predict_prices(features_valid, features_train, k, train['price']) # compute rss error = np.array(prediction) - np.array(validation['price']) error_squared = error * error rss = error_squared.sum() print rss rss_validation_all.append(rss) """ Explanation: Choosing the best value of k using a validation set There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following: For k in [1, 2, ..., 15]: Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set. Computes the RSS for these predictions on the VALIDATION set Stores the RSS computed above in rss_all Report which k produced the lowest RSS on VALIDATION set. (Depending on your computing environment, this computation may take 10-15 minutes.) 
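On synthetic data, this select-k-by-validation-RSS procedure can be sketched compactly with np.argsort. The sketch below mirrors the loop described above under made-up data and is not the graded solution:

```python
import numpy as np

def knn_predict(query, X_train, y_train, k):
    """Average the outputs of the k training points nearest to query."""
    distances = np.sqrt(np.sum((X_train - query) ** 2, axis=1))
    return y_train[np.argsort(distances)[:k]].mean()

rng = np.random.RandomState(1)
X_train = rng.rand(50, 2)
y_train = X_train.sum(axis=1)      # simple noiseless synthetic target
X_valid = rng.rand(20, 2)
y_valid = X_valid.sum(axis=1)

rss_all = []
for k in range(1, 6):
    preds = np.array([knn_predict(q, X_train, y_train, k) for q in X_valid])
    rss_all.append(((preds - y_valid) ** 2).sum())

best_k = 1 + int(np.argmin(rss_all))  # offset because k starts at 1
```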
End of explanation """ import matplotlib.pyplot as plt %matplotlib inline kvals = range(1, 16) plt.plot(kvals, rss_validation_all,'bo-') """ Explanation: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value: End of explanation """ k_optimal = 8 prediction = predict_prices(features_test, features_train, k_optimal, train['price']) # compute rss error = np.array(prediction) - np.array(test['price']) error_squared = error * error rss = error_squared.sum() print rss """ Explanation: QUIZ QUESTION What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set. End of explanation """
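As the np.argsort hint earlier suggests, the fetch-k-neighbors step can also be written without maintaining a sorted list by hand; selecting indices with a single argsort keeps the distance and index arrays paired automatically. A sketch on a few synthetic 1-D points (not the house data):

```python
import numpy as np

def k_nearest(query, X, k):
    """Return (indices, distances) of the k rows of X closest to query, nearest first."""
    distances = np.sqrt(np.sum((X - query) ** 2, axis=1))
    order = np.argsort(distances)[:k]
    return order, distances[order]

X = np.array([[0.0], [1.0], [3.0], [0.5]])
idx, dist = k_nearest(np.array([0.0]), X, 2)
# Nearest two points to 0.0 are at indices 0 and 3
```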
atulsingh0/MachineLearning
HandsOnML/code/11_deep_learning.ipynb
gpl-3.0
# To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs def reset_graph(seed=42): tf.reset_default_graph() tf.set_random_seed(seed) np.random.seed(seed) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "deep" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) """ Explanation: Chapter 11 – Deep Learning This notebook contains all the sample code and solutions to the exercises in chapter 11. Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: End of explanation """ def logit(z): return 1 / (1 + np.exp(-z)) z = np.linspace(-5, 5, 200) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([-5, 5], [1, 1], 'k--') plt.plot([0, 0], [-0.2, 1.2], 'k-') plt.plot([-5, 5], [-3/4, 7/4], 'g--') plt.plot(z, logit(z), "b-", linewidth=2) props = dict(facecolor='black', shrink=0.1) plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center") plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center") plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center") plt.grid(True) plt.title("Sigmoid activation function", fontsize=14) plt.axis([-5, 5, -0.2, 1.2]) save_fig("sigmoid_saturation_plot") plt.show() """ Explanation: Vanishing/Exploding Gradients Problem End of explanation """ import tensorflow as tf 
reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") he_init = tf.contrib.layers.variance_scaling_initializer() hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, kernel_initializer=he_init, name="hidden1") """ Explanation: Xavier and He Initialization Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function. The main differences relevant to this chapter are: * several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc. * the default activation is now None rather than tf.nn.relu. * it does not support tensorflow.contrib.framework.arg_scope() (introduced later in chapter 11). * it does not support regularizer params (introduced later in chapter 11). 
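The effect of He initialization can be illustrated outside TensorFlow: drawing weights with variance 2/fan_in keeps the scale of ReLU activations roughly stable as signals pass through many layers. This is a NumPy sketch of the idea, not the book's code:

```python
import numpy as np

rng = np.random.RandomState(42)
Z = rng.normal(size=(512, 100))   # a batch of 512 inputs, 100 units wide

for _ in range(20):
    # He initialization for ReLU layers: std = sqrt(2 / fan_in)
    W = rng.normal(scale=np.sqrt(2.0 / 100), size=(100, 100))
    Z = np.maximum(0.0, Z.dot(W))  # linear layer followed by ReLU

overall_std = Z.std()  # stays on the order of 1 rather than vanishing or exploding
```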
End of explanation """ def leaky_relu(z, alpha=0.01): return np.maximum(alpha*z, z) plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([0, 0], [-0.5, 4.2], 'k-') plt.grid(True) props = dict(facecolor='black', shrink=0.1) plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center") plt.title("Leaky ReLU activation function", fontsize=14) plt.axis([-5, 5, -0.5, 4.2]) save_fig("leaky_relu_plot") plt.show() """ Explanation: Nonsaturating Activation Functions Leaky ReLU End of explanation """ reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") def leaky_relu(z, name=None): return tf.maximum(0.01 * z, z, name=name) hidden1 = tf.layers.dense(X, n_hidden1, activation=leaky_relu, name="hidden1") """ Explanation: Implementing Leaky ReLU in TensorFlow: End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 n_hidden2 = 100 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=leaky_relu, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=leaky_relu, name="hidden2") logits = tf.layers.dense(hidden2, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") learning_rate = 0.01 with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: Let's train a neural network on MNIST using the Leaky ReLU. 
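A quick numerical check of the leaky ReLU defined above: negative inputs are scaled by alpha instead of being zeroed out, so some signal (and gradient) always flows. Plain NumPy, mirroring the plotting helper:

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    return np.maximum(alpha * z, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
out = leaky_relu(z)
# Negative inputs are shrunk by alpha = 0.01; positive inputs pass through unchanged
```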
First let's create the graph: End of explanation """ from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/") n_epochs = 40 batch_size = 50 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) if epoch % 5 == 0: acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_test = accuracy.eval(feed_dict={X: mnist.validation.images, y: mnist.validation.labels}) print(epoch, "Batch accuracy:", acc_train, "Validation accuracy:", acc_test) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: Let's load the data: End of explanation """ def elu(z, alpha=1): return np.where(z < 0, alpha * (np.exp(z) - 1), z) plt.plot(z, elu(z), "b-", linewidth=2) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([-5, 5], [-1, -1], 'k--') plt.plot([0, 0], [-2.2, 3.2], 'k-') plt.grid(True) plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14) plt.axis([-5, 5, -2.2, 3.2]) save_fig("elu_plot") plt.show() """ Explanation: ELU End of explanation """ reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.elu, name="hidden1") """ Explanation: Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer: End of explanation """ def selu(z, scale=1.0507009873554804934193349852946, alpha=1.6732632423543772848170429916717): return scale * elu(z, alpha) plt.plot(z, selu(z), "b-", linewidth=2) plt.plot([-5, 5], [0, 0], 'k-') plt.plot([-5, 5], [-1.758, -1.758], 'k--') plt.plot([0, 0], [-2.2, 3.2], 'k-') plt.grid(True) plt.title(r"SELU activation function", fontsize=14) plt.axis([-5, 5, -2.2, 3.2]) save_fig("selu_plot") plt.show() """ Explanation: SELU This activation function was proposed in this great paper by Günter 
Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017 (I will definitely add it to the book). It outperforms the other activation functions very significantly for deep neural networks, so you should really try it out. End of explanation """ np.random.seed(42) Z = np.random.normal(size=(500, 100)) for layer in range(100): W = np.random.normal(size=(100, 100), scale=np.sqrt(1/100)) Z = selu(np.dot(Z, W)) means = np.mean(Z, axis=1) stds = np.std(Z, axis=1) if layer % 10 == 0: print("Layer {}: {:.2f} < mean < {:.2f}, {:.2f} < std deviation < {:.2f}".format( layer, means.min(), means.max(), stds.min(), stds.max())) """ Explanation: With this activation function, even a 100 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem: End of explanation """ def selu(z, scale=1.0507009873554804934193349852946, alpha=1.6732632423543772848170429916717): return scale * tf.where(z >= 0.0, z, alpha * tf.nn.elu(z)) """ Explanation: Here's a TensorFlow implementation (there will almost certainly be a tf.nn.selu() function in future TensorFlow versions): End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 n_hidden2 = 100 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=selu, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=selu, name="hidden2") logits = tf.layers.dense(hidden2, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") learning_rate = 0.01 with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) 
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 40 batch_size = 50 """ Explanation: SELUs can also be combined with dropout, check out this implementation by the Institute of Bioinformatics, Johannes Kepler University Linz. Let's create a neural net for MNIST using the SELU activation function: End of explanation """ means = mnist.train.images.mean(axis=0, keepdims=True) stds = mnist.train.images.std(axis=0, keepdims=True) + 1e-10 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) X_batch_scaled = (X_batch - means) / stds sess.run(training_op, feed_dict={X: X_batch_scaled, y: y_batch}) if epoch % 5 == 0: acc_train = accuracy.eval(feed_dict={X: X_batch_scaled, y: y_batch}) X_val_scaled = (mnist.validation.images - means) / stds acc_test = accuracy.eval(feed_dict={X: X_val_scaled, y: mnist.validation.labels}) print(epoch, "Batch accuracy:", acc_train, "Validation accuracy:", acc_test) save_path = saver.save(sess, "./my_model_final_selu.ckpt") """ Explanation: Now let's train it. 
Do not forget to scale the inputs to mean 0 and standard deviation 1: End of explanation """ reset_graph() import tensorflow as tf n_inputs = 28 * 28 n_hidden1 = 300 n_hidden2 = 100 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") training = tf.placeholder_with_default(False, shape=(), name='training') hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1") bn1 = tf.layers.batch_normalization(hidden1, training=training, momentum=0.9) bn1_act = tf.nn.elu(bn1) hidden2 = tf.layers.dense(bn1_act, n_hidden2, name="hidden2") bn2 = tf.layers.batch_normalization(hidden2, training=training, momentum=0.9) bn2_act = tf.nn.elu(bn2) logits_before_bn = tf.layers.dense(bn2_act, n_outputs, name="outputs") logits = tf.layers.batch_normalization(logits_before_bn, training=training, momentum=0.9) reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") training = tf.placeholder_with_default(False, shape=(), name='training') """ Explanation: Batch Normalization Note: the book uses tensorflow.contrib.layers.batch_norm() rather than tf.layers.batch_normalization() (which did not exist when this chapter was written). It is now preferable to use tf.layers.batch_normalization(), because anything in the contrib module may change or be deleted without notice. Instead of using the batch_norm() function as a regularizer parameter to the fully_connected() function, we now use batch_normalization() and we explicitly create a distinct layer. The parameters are a bit different, in particular: * decay is renamed to momentum, * is_training is renamed to training, * updates_collections is removed: the update operations needed by batch normalization are added to the UPDATE_OPS collection and you need to explicity run these operations during training (see the execution phase below), * we don't need to specify scale=True, as that is the default. 
Also note that in order to run batch norm just before each hidden layer's activation function, we apply the ELU activation function manually, right after the batch norm layer. Note: since the tf.layers.dense() function is incompatible with tf.contrib.layers.arg_scope() (which is used in the book), we now use python's functools.partial() function instead. It makes it easy to create a my_dense_layer() function that just calls tf.layers.dense() with the desired parameters automatically set (unless they are overridden when calling my_dense_layer()). As you can see, the code remains very similar. End of explanation """ from functools import partial my_batch_norm_layer = partial(tf.layers.batch_normalization, training=training, momentum=0.9) hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1") bn1 = my_batch_norm_layer(hidden1) bn1_act = tf.nn.elu(bn1) hidden2 = tf.layers.dense(bn1_act, n_hidden2, name="hidden2") bn2 = my_batch_norm_layer(hidden2) bn2_act = tf.nn.elu(bn2) logits_before_bn = tf.layers.dense(bn2_act, n_outputs, name="outputs") logits = my_batch_norm_layer(logits_before_bn) """ Explanation: To avoid repeating the same parameters over and over again, we can use Python's partial() function: End of explanation """ reset_graph() batch_norm_momentum = 0.9 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") training = tf.placeholder_with_default(False, shape=(), name='training') with tf.name_scope("dnn"): he_init = tf.contrib.layers.variance_scaling_initializer() my_batch_norm_layer = partial( tf.layers.batch_normalization, training=training, momentum=batch_norm_momentum) my_dense_layer = partial( tf.layers.dense, kernel_initializer=he_init) hidden1 = my_dense_layer(X, n_hidden1, name="hidden1") bn1 = tf.nn.elu(my_batch_norm_layer(hidden1)) hidden2 = my_dense_layer(bn1, n_hidden2, name="hidden2") bn2 = tf.nn.elu(my_batch_norm_layer(hidden2)) logits_before_bn = my_dense_layer(bn2, n_outputs, 
name="outputs") logits = my_batch_norm_layer(logits_before_bn) with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: Let's build a neural net for MNIST, using the ELU activation function and Batch Normalization at each layer: End of explanation """ n_epochs = 20 batch_size = 200 extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run([training_op, extra_update_ops], feed_dict={training: True, X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: Note: since we are using tf.layers.batch_normalization() rather than tf.contrib.layers.batch_norm() (as in the book), we need to explicitly run the extra update operations needed by batch normalization (sess.run([training_op, extra_update_ops],...). End of explanation """ [v.name for v in tf.trainable_variables()] [v.name for v in tf.global_variables()] """ Explanation: What!? That's not a great accuracy for MNIST. Of course, if you train for longer it will get much better accuracy, but with such a shallow network, Batch Norm and ELU are unlikely to have very positive impact: they shine mostly for much deeper nets. 
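The extra update operations we ran above maintain exponential moving averages of each batch's mean and variance, and these averages are the statistics used at test time. The update rule itself is tiny; a plain-Python sketch with momentum = 0.9, as in the cells above:

```python
def update_moving_average(moving, batch_stat, momentum=0.9):
    # keep 90% of the running value, blend in 10% of the new batch statistic
    return momentum * moving + (1.0 - momentum) * batch_stat

moving_mean = 0.0
for _ in range(200):  # repeatedly observe a batch mean of 5.0
    moving_mean = update_moving_average(moving_mean, 5.0)
print(moving_mean)  # converges toward 5.0
```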
Note that you could also make the training operation depend on the update operations: python with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(extra_update_ops): training_op = optimizer.minimize(loss) This way, you would just have to evaluate the training_op during training, TensorFlow would automatically run the update operations as well: python sess.run(training_op, feed_dict={training: True, X: X_batch, y: y_batch}) One more thing: notice that the list of trainable variables is shorter than the list of all global variables. This is because the moving averages are non-trainable variables. If you want to reuse a pretrained neural network (see below), you must not forget these non-trainable variables. End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 n_hidden2 = 50 n_hidden3 = 50 n_hidden4 = 50 n_hidden5 = 50 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3") hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4") hidden5 = tf.layers.dense(hidden4, n_hidden5, activation=tf.nn.relu, name="hidden5") logits = tf.layers.dense(hidden5, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") learning_rate = 0.01 """ Explanation: Gradient Clipping Let's create a simple neural net for MNIST and add gradient clipping. 
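Conceptually, the clipping step just caps each gradient component to a [-threshold, threshold] interval. A plain-Python sketch of what elementwise value clipping does:

```python
def clip_by_value(values, low, high):
    # cap every component into the [low, high] interval
    return [min(max(v, low), high) for v in values]

print(clip_by_value([-5.0, 0.5, 3.0], -1.0, 1.0))  # [-1.0, 0.5, 1.0]
```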
The first part is the same as earlier (except we added a few more layers to demonstrate reusing pretrained models, see below): End of explanation """ threshold = 1.0 optimizer = tf.train.GradientDescentOptimizer(learning_rate) grads_and_vars = optimizer.compute_gradients(loss) capped_gvs = [(tf.clip_by_value(grad, -threshold, threshold), var) for grad, var in grads_and_vars] training_op = optimizer.apply_gradients(capped_gvs) """ Explanation: Now we apply gradient clipping. For this, we need to get the gradients, use the clip_by_value() function to clip them, then apply them: End of explanation """ with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 20 batch_size = 200 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: The rest is the same as usual: End of explanation """ reset_graph() saver = tf.train.import_meta_graph("./my_model_final.ckpt.meta") """ Explanation: Reusing Pretrained Layers Reusing a TensorFlow Model First you need to load the graph's structure. The import_meta_graph() function does just that, loading the graph's operations into the default graph, and returning a Saver that you can then use to restore the model's state. 
Note that by default, a Saver saves the structure of the graph into a .meta file, so that's the file you should load: End of explanation """ for op in tf.get_default_graph().get_operations(): print(op.name) """ Explanation: Next you need to get a handle on all the operations you will need for training. If you don't know the graph's structure, you can list all the operations: End of explanation """ from IPython.display import clear_output, Image, display, HTML def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = b"<stripped %d bytes>"%size return strip_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) show_graph(tf.get_default_graph()) """ Explanation: Oops, that's a lot of operations! It's much easier to use TensorBoard to visualize the graph. 
The following hack will allow you to visualize the graph within Jupyter (if it does not work with your browser, you will need to use a FileWriter to save the graph and then visualize it in TensorBoard): End of explanation """ X = tf.get_default_graph().get_tensor_by_name("X:0") y = tf.get_default_graph().get_tensor_by_name("y:0") accuracy = tf.get_default_graph().get_tensor_by_name("eval/accuracy:0") training_op = tf.get_default_graph().get_operation_by_name("GradientDescent") """ Explanation: Once you know which operations you need, you can get a handle on them using the graph's get_operation_by_name() or get_tensor_by_name() methods: End of explanation """ for op in (X, y, accuracy, training_op): tf.add_to_collection("my_important_ops", op) """ Explanation: If you are the author of the original model, you could make things easier for people who will reuse your model by giving operations very clear names and documenting them. Another approach is to create a collection containing all the important operations that people will want to get a handle on: End of explanation """ X, y, accuracy, training_op = tf.get_collection("my_important_ops") """ Explanation: This way people who reuse your model will be able to simply write: End of explanation """ with tf.Session() as sess: saver.restore(sess, "./my_model_final.ckpt") # continue training the model... 
"""
Explanation: Now you can start a session, restore the model's state and continue training on your data:
End of explanation
"""
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt")

    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images,
                                                y: mnist.test.labels})
        print(epoch, "Test accuracy:", accuracy_val)

    save_path = saver.save(sess, "./my_new_model_final.ckpt")

"""
Explanation: Actually, let's test this for real!
End of explanation
"""
reset_graph()

n_inputs = 28 * 28  # MNIST
n_hidden1 = 300
n_hidden2 = 50
n_hidden3 = 50
n_hidden4 = 50
n_hidden5 = 50
n_outputs = 10

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")

with tf.name_scope("dnn"):
    hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")
    hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2")
    hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3")
    hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4")
    hidden5 = tf.layers.dense(hidden4, n_hidden5, activation=tf.nn.relu, name="hidden5")
    logits = tf.layers.dense(hidden5, n_outputs, name="outputs")

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

learning_rate = 0.01
threshold = 1.0

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(loss)
capped_gvs = [(tf.clip_by_value(grad, -threshold, threshold), var)
              for grad, var in grads_and_vars]
training_op = optimizer.apply_gradients(capped_gvs)

init = tf.global_variables_initializer()
saver = tf.train.Saver()

"""
Explanation: Alternatively, if you have access to the Python code that built the original graph, you can use it instead of import_meta_graph():
End of explanation
"""
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt")

    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images,
                                                y: mnist.test.labels})
        print(epoch, "Test accuracy:", accuracy_val)

    save_path = saver.save(sess, "./my_new_model_final.ckpt")

"""
Explanation: And continue training:
End of explanation
"""
reset_graph()

n_hidden4 = 20  # new layer
n_outputs = 10  # new layer

saver = tf.train.import_meta_graph("./my_model_final.ckpt.meta")

X = tf.get_default_graph().get_tensor_by_name("X:0")
y = tf.get_default_graph().get_tensor_by_name("y:0")

hidden3 = tf.get_default_graph().get_tensor_by_name("dnn/hidden3/Relu:0")

new_hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="new_hidden4")
new_logits = tf.layers.dense(new_hidden4, n_outputs, name="new_outputs")

with tf.name_scope("new_loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=new_logits)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("new_eval"):
    correct = tf.nn.in_top_k(new_logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

with tf.name_scope("new_train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
new_saver = tf.train.Saver()

"""
Explanation: In general you will want to reuse only the lower layers. If you are using import_meta_graph() it will load the whole graph, but you can simply ignore the parts you do not need.
In this example, we add a new 4th hidden layer on top of the pretrained 3rd layer (ignoring the old 4th hidden layer). We also build a new output layer, the loss for this new output, and a new optimizer to minimize it. We also need another saver to save the whole graph (containing both the entire old graph plus the new operations), and an initialization operation to initialize all the new variables: End of explanation """ with tf.Session() as sess: init.run() saver.restore(sess, "./my_model_final.ckpt") for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = new_saver.save(sess, "./my_new_model_final.ckpt") """ Explanation: And we can train this new model: End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 # reused n_hidden2 = 50 # reused n_hidden3 = 50 # reused n_hidden4 = 20 # new! n_outputs = 10 # new! X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") # reused hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") # reused hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3") # reused hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4") # new! logits = tf.layers.dense(hidden4, n_outputs, name="outputs") # new! 
with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) """ Explanation: If you have access to the Python code that built the original graph, you can just reuse the parts you need and drop the rest: End of explanation """ reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden[123]") # regular expression reuse_vars_dict = dict([(var.op.name, var) for var in reuse_vars]) restore_saver = tf.train.Saver(reuse_vars_dict) # to restore layers 1-3 init = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_model_final.ckpt") for epoch in range(n_epochs): # not shown in the book for iteration in range(mnist.train.num_examples // batch_size): # not shown X_batch, y_batch = mnist.train.next_batch(batch_size) # not shown sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) # not shown accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, # not shown y: mnist.test.labels}) # not shown print(epoch, "Test accuracy:", accuracy_val) # not shown save_path = saver.save(sess, "./my_new_model_final.ckpt") """ Explanation: However, you must create one Saver to restore the pretrained model (giving it the list of variables to restore, or else it will complain that the graphs don't match), and another Saver to save the new model, once it is trained: End of explanation """ reset_graph() n_inputs = 2 n_hidden1 = 3 original_w = [[1., 2., 3.], [4., 5., 6.]] # Load the weights from the other framework original_b = [7., 8., 9.] 
# Load the biases from the other framework X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") # [...] Build the rest of the model # Get a handle on the assignment nodes for the hidden1 variables graph = tf.get_default_graph() assign_kernel = graph.get_operation_by_name("hidden1/kernel/Assign") assign_bias = graph.get_operation_by_name("hidden1/bias/Assign") init_kernel = assign_kernel.inputs[1] init_bias = assign_bias.inputs[1] init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init, feed_dict={init_kernel: original_w, init_bias: original_b}) # [...] Train the model on your new task print(hidden1.eval(feed_dict={X: [[10.0, 11.0]]})) # not shown in the book """ Explanation: Reusing Models from Other Frameworks In this example, for each variable we want to reuse, we find its initializer's assignment operation, and we get its second input, which corresponds to the initialization value. When we run the initializer, we replace the initialization values with the ones we want, using a feed_dict: End of explanation """ reset_graph() n_inputs = 2 n_hidden1 = 3 original_w = [[1., 2., 3.], [4., 5., 6.]] # Load the weights from the other framework original_b = [7., 8., 9.] # Load the biases from the other framework X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") # [...] 
# Build the rest of the model

# Get a handle on the variables of layer hidden1
with tf.variable_scope("", default_name="", reuse=True):  # root scope
    hidden1_weights = tf.get_variable("hidden1/kernel")
    hidden1_biases = tf.get_variable("hidden1/bias")

# Create dedicated placeholders and assignment nodes
original_weights = tf.placeholder(tf.float32, shape=(n_inputs, n_hidden1))
original_biases = tf.placeholder(tf.float32, shape=n_hidden1)
assign_hidden1_weights = tf.assign(hidden1_weights, original_weights)
assign_hidden1_biases = tf.assign(hidden1_biases, original_biases)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    sess.run(assign_hidden1_weights, feed_dict={original_weights: original_w})
    sess.run(assign_hidden1_biases, feed_dict={original_biases: original_b})
    # [...] Train the model on your new task
    print(hidden1.eval(feed_dict={X: [[10.0, 11.0]]}))

"""
Explanation: Note: the weights variable created by the tf.layers.dense() function is called "kernel" (instead of "weights" when using the tf.contrib.layers.fully_connected(), as in the book), and the biases variable is called bias instead of biases. Another approach (initially used in the book) would be to create dedicated assignment nodes and dedicated placeholders. This is more verbose and less efficient, but you may find this more explicit:
End of explanation
"""
tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden1")

"""
Explanation: Note that we could also get a handle on the variables using get_collection() and specifying the scope:
End of explanation
"""
tf.get_default_graph().get_tensor_by_name("hidden1/kernel:0")

tf.get_default_graph().get_tensor_by_name("hidden1/bias:0")

"""
Explanation: Or we could use the graph's get_tensor_by_name() method:
End of explanation
"""
reset_graph()

n_inputs = 28 * 28 # MNIST
n_hidden1 = 300 # reused
n_hidden2 = 50 # reused
n_hidden3 = 50 # reused
n_hidden4 = 20 # new!
n_outputs = 10 # new!
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") # reused hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") # reused hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3") # reused hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4") # new! logits = tf.layers.dense(hidden4, n_outputs, name="outputs") # new! with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") with tf.name_scope("train"): # not shown in the book optimizer = tf.train.GradientDescentOptimizer(learning_rate) # not shown train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="hidden[34]|outputs") training_op = optimizer.minimize(loss, var_list=train_vars) init = tf.global_variables_initializer() new_saver = tf.train.Saver() reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden[123]") # regular expression reuse_vars_dict = dict([(var.op.name, var) for var in reuse_vars]) restore_saver = tf.train.Saver(reuse_vars_dict) # to restore layers 1-3 init = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_model_final.ckpt") for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, 
"./my_new_model_final.ckpt") reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 # reused n_hidden2 = 50 # reused n_hidden3 = 50 # reused n_hidden4 = 20 # new! n_outputs = 10 # new! X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") # reused frozen hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") # reused frozen hidden2_stop = tf.stop_gradient(hidden2) hidden3 = tf.layers.dense(hidden2_stop, n_hidden3, activation=tf.nn.relu, name="hidden3") # reused, not frozen hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4") # new! logits = tf.layers.dense(hidden4, n_outputs, name="outputs") # new! with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) """ Explanation: Freezing the Lower Layers End of explanation """ reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden[123]") # regular expression reuse_vars_dict = dict([(var.op.name, var) for var in reuse_vars]) restore_saver = tf.train.Saver(reuse_vars_dict) # to restore layers 1-3 init = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_model_final.ckpt") for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, 
y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, "./my_new_model_final.ckpt") """ Explanation: The training code is exactly the same as earlier: End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 # reused n_hidden2 = 50 # reused n_hidden3 = 50 # reused n_hidden4 = 20 # new! n_outputs = 10 # new! X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") # reused frozen hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") # reused frozen & cached hidden2_stop = tf.stop_gradient(hidden2) hidden3 = tf.layers.dense(hidden2_stop, n_hidden3, activation=tf.nn.relu, name="hidden3") # reused, not frozen hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4") # new! logits = tf.layers.dense(hidden4, n_outputs, name="outputs") # new! 
with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden[123]") # regular expression reuse_vars_dict = dict([(var.op.name, var) for var in reuse_vars]) restore_saver = tf.train.Saver(reuse_vars_dict) # to restore layers 1-3 init = tf.global_variables_initializer() saver = tf.train.Saver() import numpy as np n_batches = mnist.train.num_examples // batch_size with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_model_final.ckpt") h2_cache = sess.run(hidden2, feed_dict={X: mnist.train.images}) h2_cache_test = sess.run(hidden2, feed_dict={X: mnist.test.images}) # not shown in the book for epoch in range(n_epochs): shuffled_idx = np.random.permutation(mnist.train.num_examples) hidden2_batches = np.array_split(h2_cache[shuffled_idx], n_batches) y_batches = np.array_split(mnist.train.labels[shuffled_idx], n_batches) for hidden2_batch, y_batch in zip(hidden2_batches, y_batches): sess.run(training_op, feed_dict={hidden2:hidden2_batch, y:y_batch}) accuracy_val = accuracy.eval(feed_dict={hidden2: h2_cache_test, # not shown y: mnist.test.labels}) # not shown print(epoch, "Test accuracy:", accuracy_val) # not shown save_path = saver.save(sess, "./my_new_model_final.ckpt") """ Explanation: Caching the Frozen Layers End of explanation """ optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9) """ Explanation: Faster Optimizers Momentum optimization End of explanation """ optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9, use_nesterov=True) """ Explanation: Nesterov 
Accelerated Gradient End of explanation """ optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate) """ Explanation: AdaGrad End of explanation """ optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate, momentum=0.9, decay=0.9, epsilon=1e-10) """ Explanation: RMSProp End of explanation """ optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) """ Explanation: Adam Optimization End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 n_hidden2 = 50 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") logits = tf.layers.dense(hidden2, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") with tf.name_scope("train"): # not shown in the book initial_learning_rate = 0.1 decay_steps = 10000 decay_rate = 1/10 global_step = tf.Variable(0, trainable=False, name="global_step") learning_rate = tf.train.exponential_decay(initial_learning_rate, global_step, decay_steps, decay_rate) optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9) training_op = optimizer.minimize(loss, global_step=global_step) init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 5 batch_size = 50 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: 
mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: Learning Rate Scheduling End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") logits = tf.layers.dense(hidden1, n_outputs, name="outputs") """ Explanation: Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization Let's implement $\ell_1$ regularization manually. First, we create the model, as usual (with just one hidden layer this time, for simplicity): End of explanation """ W1 = tf.get_default_graph().get_tensor_by_name("hidden1/kernel:0") W2 = tf.get_default_graph().get_tensor_by_name("outputs/kernel:0") scale = 0.001 # l1 regularization hyperparameter with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) base_loss = tf.reduce_mean(xentropy, name="avg_xentropy") reg_losses = tf.reduce_sum(tf.abs(W1)) + tf.reduce_sum(tf.abs(W2)) loss = tf.add(base_loss, scale * reg_losses, name="loss") """ Explanation: Next, we get a handle on the layer weights, and we compute the total loss, which is equal to the sum of the usual cross entropy loss and the $\ell_1$ loss (i.e., the absolute values of the weights): End of explanation """ with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") learning_rate = 0.01 with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 20 batch_size = 200 with tf.Session() as sess: init.run() for epoch in range(n_epochs): 
for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: The rest is just as usual: End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_hidden1 = 300 n_hidden2 = 50 n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") """ Explanation: Alternatively, we can pass a regularization function to the tf.layers.dense() function, which will use it to create operations that will compute the regularization loss, and it adds these operations to the collection of regularization losses. The beginning is the same as above: End of explanation """ scale = 0.001 my_dense_layer = partial( tf.layers.dense, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l1_regularizer(scale)) with tf.name_scope("dnn"): hidden1 = my_dense_layer(X, n_hidden1, name="hidden1") hidden2 = my_dense_layer(hidden1, n_hidden2, name="hidden2") logits = my_dense_layer(hidden2, n_outputs, activation=None, name="outputs") """ Explanation: Next, we will use Python's partial() function to avoid repeating the same arguments over and over again. 
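If you have not seen `functools.partial()` before, here is what it does in isolation (a stdlib-only toy with a stand-in `dense()` function; the real code below binds `tf.layers.dense` instead, and needs `from functools import partial` at the top of the notebook):

```python
from functools import partial

def dense(units, activation=None, kernel_regularizer=None):
    # stand-in for tf.layers.dense: just report the configuration it received
    return (units, activation, kernel_regularizer)

# pre-bind the keyword arguments that every layer shares
my_dense = partial(dense, activation="relu", kernel_regularizer="l1")

print(my_dense(300))                  # (300, 'relu', 'l1')
print(my_dense(10, activation=None))  # keywords passed at call time still win
```

Each call now only has to supply whatever differs between layers.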
Note that we set the kernel_regularizer argument: End of explanation """ with tf.name_scope("loss"): # not shown in the book xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits( # not shown labels=y, logits=logits) # not shown base_loss = tf.reduce_mean(xentropy, name="avg_xentropy") # not shown reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) loss = tf.add_n([base_loss] + reg_losses, name="loss") """ Explanation: Next we must add the regularization losses to the base loss: End of explanation """ with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") learning_rate = 0.01 with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 20 batch_size = 200 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", accuracy_val) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: And the rest is the same as usual: End of explanation """ reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") training = tf.placeholder_with_default(False, shape=(), name='training') dropout_rate = 0.5 # == 1 - keep_prob X_drop = tf.layers.dropout(X, dropout_rate, training=training) with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X_drop, n_hidden1, activation=tf.nn.relu, name="hidden1") hidden1_drop = tf.layers.dropout(hidden1, dropout_rate, training=training) hidden2 = tf.layers.dense(hidden1_drop, n_hidden2, activation=tf.nn.relu, name="hidden2") 
hidden2_drop = tf.layers.dropout(hidden2, dropout_rate, training=training) logits = tf.layers.dense(hidden2_drop, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("train"): optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 20 batch_size = 50 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={training: True, X: X_batch, y: y_batch}) acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels}) print(epoch, "Test accuracy:", acc_test) save_path = saver.save(sess, "./my_model_final.ckpt") """ Explanation: Dropout Note: the book uses tf.contrib.layers.dropout() rather than tf.layers.dropout() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dropout(), because anything in the contrib module may change or be deleted without notice. The tf.layers.dropout() function is almost identical to the tf.contrib.layers.dropout() function, except for a few minor differences. Most importantly: * you must specify the dropout rate (rate) rather than the keep probability (keep_prob), where rate is simply equal to 1 - keep_prob, * the is_training parameter is renamed to training. 
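The `rate`/`keep_prob` relationship, and the scaling dropout applies at training time, can be sketched in plain NumPy (a toy re-implementation of inverted dropout for intuition only, not TensorFlow's actual code):

```python
import numpy as np

def dropout(x, rate, training, rng=np.random.default_rng(0)):
    # inverted dropout: zero a fraction `rate` of the activations and scale
    # the survivors by 1/keep_prob so the expected activation is unchanged
    if not training:
        return x                 # at test time dropout is a no-op
    keep_prob = 1.0 - rate       # rate == 1 - keep_prob
    mask = rng.random(x.shape) >= rate
    return x * mask / keep_prob

x = np.ones(10_000)
dropped = dropout(x, rate=0.5, training=True)
print(dropped.mean())  # close to 1.0, even though about half the units are zeroed
```

This is why the `training` placeholder must be fed as `True` during training and left at its default (`False`) for evaluation.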
End of explanation """ reset_graph() n_inputs = 28 * 28 n_hidden1 = 300 n_hidden2 = 50 n_outputs = 10 learning_rate = 0.01 momentum = 0.9 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2") logits = tf.layers.dense(hidden2, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("train"): optimizer = tf.train.MomentumOptimizer(learning_rate, momentum) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) """ Explanation: Max norm Let's go back to a plain and simple neural net for MNIST with just 2 hidden layers: End of explanation """ threshold = 1.0 weights = tf.get_default_graph().get_tensor_by_name("hidden1/kernel:0") clipped_weights = tf.clip_by_norm(weights, clip_norm=threshold, axes=1) clip_weights = tf.assign(weights, clipped_weights) """ Explanation: Next, let's get a handle on the first hidden layer's weight and create an operation that will compute the clipped weights using the clip_by_norm() function. 
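Before wiring it into the graph, it may help to see what `clip_by_norm(..., axes=1)` computes on a tiny example; the following NumPy sketch reproduces the row-wise rule (an illustration, not TensorFlow's implementation):

```python
import numpy as np

def clip_by_norm(w, clip_norm, axis=1):
    # any row whose L2 norm exceeds clip_norm is rescaled to norm == clip_norm;
    # rows already within the threshold pass through unchanged
    norms = np.linalg.norm(w, axis=axis, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return w * factors

w = np.array([[3.0, 4.0],   # norm 5.0 -> rescaled down to norm 1.0
              [0.3, 0.4]])  # norm 0.5 -> left as is
clipped = clip_by_norm(w, clip_norm=1.0)
print(clipped)                          # [[0.6 0.8] [0.3 0.4]]
print(np.linalg.norm(clipped, axis=1))  # [1.  0.5]
```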
Then we create an assignment operation to assign the clipped weights to the weights variable: End of explanation """ weights2 = tf.get_default_graph().get_tensor_by_name("hidden2/kernel:0") clipped_weights2 = tf.clip_by_norm(weights2, clip_norm=threshold, axes=1) clip_weights2 = tf.assign(weights2, clipped_weights2) """ Explanation: We can do this as well for the second hidden layer: End of explanation """ init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: Let's add an initializer and a saver: End of explanation """ n_epochs = 20 batch_size = 50 with tf.Session() as sess: # not shown in the book init.run() # not shown for epoch in range(n_epochs): # not shown for iteration in range(mnist.train.num_examples // batch_size): # not shown X_batch, y_batch = mnist.train.next_batch(batch_size) # not shown sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) clip_weights.eval() clip_weights2.eval() # not shown acc_test = accuracy.eval(feed_dict={X: mnist.test.images, # not shown y: mnist.test.labels}) # not shown print(epoch, "Test accuracy:", acc_test) # not shown save_path = saver.save(sess, "./my_model_final.ckpt") # not shown """ Explanation: And now we can train the model. It's pretty much as usual, except that right after running the training_op, we run the clip_weights and clip_weights2 operations: End of explanation """ def max_norm_regularizer(threshold, axes=1, name="max_norm", collection="max_norm"): def max_norm(weights): clipped = tf.clip_by_norm(weights, clip_norm=threshold, axes=axes) clip_weights = tf.assign(weights, clipped, name=name) tf.add_to_collection(collection, clip_weights) return None # there is no regularization loss term return max_norm """ Explanation: The implementation above is straightforward and it works fine, but it is a bit messy. 
A better approach is to define a max_norm_regularizer() function: End of explanation """ reset_graph() n_inputs = 28 * 28 n_hidden1 = 300 n_hidden2 = 50 n_outputs = 10 learning_rate = 0.01 momentum = 0.9 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") max_norm_reg = max_norm_regularizer(threshold=1.0) with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, kernel_regularizer=max_norm_reg, name="hidden1") hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, kernel_regularizer=max_norm_reg, name="hidden2") logits = tf.layers.dense(hidden2, n_outputs, name="outputs") with tf.name_scope("loss"): xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") with tf.name_scope("train"): optimizer = tf.train.MomentumOptimizer(learning_rate, momentum) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: Then you can call this function to get a max norm regularizer (with the threshold you want). 
When you create a hidden layer, you can pass this regularizer to the kernel_regularizer argument: End of explanation """ n_epochs = 20 batch_size = 50 clip_all_weights = tf.get_collection("max_norm") with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = mnist.train.next_batch(batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) sess.run(clip_all_weights) acc_test = accuracy.eval(feed_dict={X: mnist.test.images, # not shown in the book y: mnist.test.labels}) # not shown print(epoch, "Test accuracy:", acc_test) # not shown save_path = saver.save(sess, "./my_model_final.ckpt") # not shown """ Explanation: Training is as usual, except you must run the weights clipping operations after each training operation: End of explanation """ he_init = tf.contrib.layers.variance_scaling_initializer() def dnn(inputs, n_hidden_layers=5, n_neurons=100, name=None, activation=tf.nn.elu, initializer=he_init): with tf.variable_scope(name, "dnn"): for layer in range(n_hidden_layers): inputs = tf.layers.dense(inputs, n_neurons, activation=activation, kernel_initializer=initializer, name="hidden%d" % (layer + 1)) return inputs n_inputs = 28 * 28 # MNIST n_outputs = 5 reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int64, shape=(None), name="y") dnn_outputs = dnn(X) logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init, name="logits") Y_proba = tf.nn.softmax(logits, name="Y_proba") """ Explanation: Exercise solutions 1. to 7. See appendix A. 8. Deep Learning 8.1. Exercise: Build a DNN with five hidden layers of 100 neurons each, He initialization, and the ELU activation function. 
We will need similar DNNs in the next exercises, so let's create a function to build this DNN: End of explanation """ learning_rate = 0.01 xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(loss, name="training_op") correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: 8.2. Exercise: Using Adam optimization and early stopping, try training it on MNIST but only on digits 0 to 4, as we will use transfer learning for digits 5 to 9 in the next exercise. You will need a softmax output layer with five neurons, and as always make sure to save checkpoints at regular intervals and save the final model so you can reuse it later. Let's complete the graph with the cost function, the training op, and all the other usual components: End of explanation """ from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/") """ Explanation: Let's fetch the MNIST dataset: End of explanation """ X_train1 = mnist.train.images[mnist.train.labels < 5] y_train1 = mnist.train.labels[mnist.train.labels < 5] X_valid1 = mnist.validation.images[mnist.validation.labels < 5] y_valid1 = mnist.validation.labels[mnist.validation.labels < 5] X_test1 = mnist.test.images[mnist.test.labels < 5] y_test1 = mnist.test.labels[mnist.test.labels < 5] n_epochs = 1000 batch_size = 20 max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty with tf.Session() as sess: init.run() for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train1)) for rnd_indices in np.array_split(rnd_idx, len(X_train1) // batch_size): X_batch, y_batch = X_train1[rnd_indices], y_train1[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) 
loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid1, y: y_valid1}) if loss_val < best_loss: save_path = saver.save(sess, "./my_mnist_model_0_to_4.ckpt") best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 if checks_without_progress > max_checks_without_progress: print("Early stopping!") break print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) with tf.Session() as sess: saver.restore(sess, "./my_mnist_model_0_to_4.ckpt") acc_test = accuracy.eval(feed_dict={X: X_test1, y: y_test1}) print("Final test accuracy: {:.2f}%".format(acc_test * 100)) """ Explanation: Now let's create the training set, validation and test set (we need the validation set to implement early stopping): End of explanation """ from sklearn.base import BaseEstimator, ClassifierMixin from sklearn.exceptions import NotFittedError class DNNClassifier(BaseEstimator, ClassifierMixin): def __init__(self, n_hidden_layers=5, n_neurons=100, optimizer_class=tf.train.AdamOptimizer, learning_rate=0.01, batch_size=20, activation=tf.nn.elu, initializer=he_init, batch_norm_momentum=None, dropout_rate=None, random_state=None): """Initialize the DNNClassifier by simply storing all the hyperparameters.""" self.n_hidden_layers = n_hidden_layers self.n_neurons = n_neurons self.optimizer_class = optimizer_class self.learning_rate = learning_rate self.batch_size = batch_size self.activation = activation self.initializer = initializer self.batch_norm_momentum = batch_norm_momentum self.dropout_rate = dropout_rate self.random_state = random_state self._session = None def _dnn(self, inputs): """Build the hidden layers, with support for batch normalization and dropout.""" for layer in range(self.n_hidden_layers): if self.dropout_rate: inputs = tf.layers.dropout(inputs, self.dropout_rate, training=self._training) inputs = tf.layers.dense(inputs, self.n_neurons, kernel_initializer=self.initializer, 
name="hidden%d" % (layer + 1)) if self.batch_norm_momentum: inputs = tf.layers.batch_normalization(inputs, momentum=self.batch_norm_momentum, training=self._training) inputs = self.activation(inputs, name="hidden%d_out" % (layer + 1)) return inputs def _build_graph(self, n_inputs, n_outputs): """Build the same model as earlier""" if self.random_state is not None: tf.set_random_seed(self.random_state) np.random.seed(self.random_state) X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int32, shape=(None), name="y") if self.batch_norm_momentum or self.dropout_rate: self._training = tf.placeholder_with_default(False, shape=(), name='training') else: self._training = None dnn_outputs = self._dnn(X) logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init, name="logits") Y_proba = tf.nn.softmax(logits, name="Y_proba") xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") optimizer = self.optimizer_class(learning_rate=self.learning_rate) training_op = optimizer.minimize(loss) correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") init = tf.global_variables_initializer() saver = tf.train.Saver() # Make the important operations available easily through instance variables self._X, self._y = X, y self._Y_proba, self._loss = Y_proba, loss self._training_op, self._accuracy = training_op, accuracy self._init, self._saver = init, saver def close_session(self): if self._session: self._session.close() def _get_model_params(self): """Get all variable values (used for early stopping, faster than saving to disk)""" with self._graph.as_default(): gvars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) return {gvar.op.name: value for gvar, value in zip(gvars, self._session.run(gvars))} def _restore_model_params(self, model_params): """Set all variables to the given values (for early stopping, faster 
than loading from disk)""" gvar_names = list(model_params.keys()) assign_ops = {gvar_name: self._graph.get_operation_by_name(gvar_name + "/Assign") for gvar_name in gvar_names} init_values = {gvar_name: assign_op.inputs[1] for gvar_name, assign_op in assign_ops.items()} feed_dict = {init_values[gvar_name]: model_params[gvar_name] for gvar_name in gvar_names} self._session.run(assign_ops, feed_dict=feed_dict) def fit(self, X, y, n_epochs=100, X_valid=None, y_valid=None): """Fit the model to the training set. If X_valid and y_valid are provided, use early stopping.""" self.close_session() # infer n_inputs and n_outputs from the training set. n_inputs = X.shape[1] self.classes_ = np.unique(y) n_outputs = len(self.classes_) # Translate the labels vector to a vector of sorted class indices, containing # integers from 0 to n_outputs - 1. # For example, if y is equal to [8, 8, 9, 5, 7, 6, 6, 6], then the sorted class # labels (self.classes_) will be equal to [5, 6, 7, 8, 9], and the labels vector # will be translated to [3, 3, 4, 0, 2, 1, 1, 1] self.class_to_index_ = {label: index for index, label in enumerate(self.classes_)} y = np.array([self.class_to_index_[label] for label in y], dtype=np.int32) self._graph = tf.Graph() with self._graph.as_default(): self._build_graph(n_inputs, n_outputs) # extra ops for batch normalization extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # needed in case of early stopping max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty best_params = None # Now train the model! 
self._session = tf.Session(graph=self._graph) with self._session.as_default() as sess: self._init.run() for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X)) for rnd_indices in np.array_split(rnd_idx, len(X) // self.batch_size): X_batch, y_batch = X[rnd_indices], y[rnd_indices] feed_dict = {self._X: X_batch, self._y: y_batch} if self._training is not None: feed_dict[self._training] = True sess.run(self._training_op, feed_dict=feed_dict) if extra_update_ops: sess.run(extra_update_ops, feed_dict=feed_dict) if X_valid is not None and y_valid is not None: loss_val, acc_val = sess.run([self._loss, self._accuracy], feed_dict={self._X: X_valid, self._y: y_valid}) if loss_val < best_loss: best_params = self._get_model_params() best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) if checks_without_progress > max_checks_without_progress: print("Early stopping!") break else: loss_train, acc_train = sess.run([self._loss, self._accuracy], feed_dict={self._X: X_batch, self._y: y_batch}) print("{}\tLast training batch loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_train, acc_train * 100)) # If we used early stopping then rollback to the best model found if best_params: self._restore_model_params(best_params) return self def predict_proba(self, X): if not self._session: raise NotFittedError("This %s instance is not fitted yet" % self.__class__.__name__) with self._session.as_default() as sess: return self._Y_proba.eval(feed_dict={self._X: X}) def predict(self, X): class_indices = np.argmax(self.predict_proba(X), axis=1) return np.array([[self.classes_[class_index]] for class_index in class_indices], np.int32) def save(self, path): self._saver.save(self._session, path) """ Explanation: We get 98.05% accuracy on the test set. 
That's not too bad, but let's see if we can do better by tuning the hyperparameters. 8.3. Exercise: Tune the hyperparameters using cross-validation and see what precision you can achieve. Let's create a DNNClassifier class, compatible with Scikit-Learn's RandomizedSearchCV class, to perform hyperparameter tuning. Here are the key points of this implementation: * the __init__() method (constructor) does nothing more than create instance variables for each of the hyperparameters. * the fit() method creates the graph, starts a session, and trains the model: * it calls the _build_graph() method to build the graph (much like the graph we defined earlier). Once this method is done creating the graph, it saves all the important operations as instance variables for easy access by other methods. * the _dnn() method builds the hidden layers, just like the dnn() function above, but also with support for batch normalization and dropout (for the next exercises). * if the fit() method is given a validation set (X_valid and y_valid), then it implements early stopping. This implementation does not save the best model to disk, but rather to memory: it uses the _get_model_params() method to get all the graph's variables and their values, and the _restore_model_params() method to restore the variable values (of the best model found). This trick helps speed up training. * after the fit() method has finished training the model, it keeps the session open so that predictions can be made quickly, without having to save the model to disk and restore it for every prediction. You can close the session by calling the close_session() method. * the predict_proba() method uses the trained model to predict the class probabilities. * the predict() method calls predict_proba() and returns the class with the highest probability for each instance.
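The reason __init__() must do nothing but store its arguments is Scikit-Learn's estimator contract: get_params() reads the constructor arguments back as same-named attributes, and RandomizedSearchCV clones the estimator from them for every parameter combination it tries. A stdlib-only sketch of that mechanism (simplified; the real sklearn.base versions also handle nested parameters, validation, and more):

```python
import inspect

class BaseEstimator:
    # simplified sketch of the relevant part of sklearn.base.BaseEstimator
    def get_params(self):
        # hyperparameters are discovered from the __init__ signature and
        # read back as instance attributes of the same name
        names = [p for p in inspect.signature(self.__init__).parameters
                 if p != "self"]
        return {name: getattr(self, name) for name in names}

def clone(estimator):
    # rebuild a fresh, unfitted estimator from its hyperparameters alone
    return type(estimator)(**estimator.get_params())

class TinyClassifier(BaseEstimator):
    def __init__(self, threshold=0.5):
        self.threshold = threshold  # just store it; no derived state here

est = TinyClassifier(threshold=0.8)
print(est.get_params())      # {'threshold': 0.8}
print(clone(est).threshold)  # 0.8
```

If __init__() computed derived state instead of storing the raw hyperparameters, this round trip, and therefore the whole hyperparameter search, would silently break.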
End of explanation """ dnn_clf = DNNClassifier(random_state=42) dnn_clf.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1) """ Explanation: Let's see if we get the exact same accuracy as earlier using this class (without dropout or batch norm): End of explanation """ from sklearn.metrics import accuracy_score y_pred = dnn_clf.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: The model is trained, let's see if it gets the same accuracy as earlier: End of explanation """ from sklearn.model_selection import RandomizedSearchCV def leaky_relu(alpha=0.01): def parametrized_leaky_relu(z, name=None): return tf.maximum(alpha * z, z, name=name) return parametrized_leaky_relu param_distribs = { "n_neurons": [10, 30, 50, 70, 90, 100, 120, 140, 160], "batch_size": [10, 50, 100, 500], "learning_rate": [0.01, 0.02, 0.05, 0.1], "activation": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)], # you could also try exploring different numbers of hidden layers, different optimizers, etc. #"n_hidden_layers": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], #"optimizer_class": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)], } rnd_search = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50, fit_params={"X_valid": X_valid1, "y_valid": y_valid1, "n_epochs": 1000}, random_state=42, verbose=2) rnd_search.fit(X_train1, y_train1) rnd_search.best_params_ y_pred = rnd_search.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: Yep! Working fine. Now we can use Scikit-Learn's RandomizedSearchCV class to search for better hyperparameters (this may take over an hour, depending on your system): End of explanation """ rnd_search.best_estimator_.save("./my_best_mnist_model_0_to_4") """ Explanation: Wonderful! Tuning the hyperparameters got us up to 99.32% accuracy! 
It may not sound like a great improvement to go from 98.05% to 99.32% accuracy, but consider the error rate: it went from roughly 2% to 0.7%. That's a 65% reduction of the number of errors this model will produce! It's a good idea to save this model: End of explanation """ dnn_clf = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01, n_neurons=140, random_state=42) dnn_clf.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1) """ Explanation: 8.4. Exercise: Now try adding Batch Normalization and compare the learning curves: is it converging faster than before? Does it produce a better model? Let's train the best model found, once again, to see how fast it converges (alternatively, you could tweak the code above to make it write summaries for TensorBoard, so you can visualize the learning curve): End of explanation """ y_pred = dnn_clf.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: The best loss is reached at epoch 19, but it was already within 10% of that result at epoch 9. Let's check that we do indeed get 99.32% accuracy on the test set: End of explanation """ dnn_clf_bn = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01, n_neurons=90, random_state=42, batch_norm_momentum=0.95) dnn_clf_bn.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1) """ Explanation: Good, now let's use the exact same model, but this time with batch normalization: End of explanation """ y_pred = dnn_clf_bn.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: The best params are reached during epoch 48, that's actually a slower convergence than earlier. 
Let's check the accuracy: End of explanation """ from sklearn.model_selection import RandomizedSearchCV param_distribs = { "n_neurons": [10, 30, 50, 70, 90, 100, 120, 140, 160], "batch_size": [10, 50, 100, 500], "learning_rate": [0.01, 0.02, 0.05, 0.1], "activation": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)], # you could also try exploring different numbers of hidden layers, different optimizers, etc. #"n_hidden_layers": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], #"optimizer_class": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)], "batch_norm_momentum": [0.9, 0.95, 0.98, 0.99, 0.999], } rnd_search_bn = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50, fit_params={"X_valid": X_valid1, "y_valid": y_valid1, "n_epochs": 1000}, random_state=42, verbose=2) rnd_search_bn.fit(X_train1, y_train1) rnd_search_bn.best_params_ y_pred = rnd_search_bn.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: Well, batch normalization did not improve accuracy. Let's see if we can find a good set of hyperparameters that will work well with batch normalization: End of explanation """ y_pred = dnn_clf.predict(X_train1) accuracy_score(y_train1, y_pred) """ Explanation: Slightly better than earlier: 99.4% vs 99.3%. Let's see if dropout can do better. 8.5. Exercise: is the model overfitting the training set? Try adding dropout to every layer and try again. Does it help? Let's go back to the best model we trained earlier and see how it performs on the training set: End of explanation """ dnn_clf_dropout = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01, n_neurons=90, random_state=42, dropout_rate=0.5) dnn_clf_dropout.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1) """ Explanation: The model performs significantly better on the training set than on the test set (99.91% vs 99.32%), which means it is overfitting the training set. 
A bit of regularization may help. Let's try adding dropout with a 50% dropout rate: End of explanation """ y_pred = dnn_clf_dropout.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: The best params are reached during epoch 23. Dropout somewhat slowed down convergence. Let's check the accuracy: End of explanation """ from sklearn.model_selection import RandomizedSearchCV param_distribs = { "n_neurons": [10, 30, 50, 70, 90, 100, 120, 140, 160], "batch_size": [10, 50, 100, 500], "learning_rate": [0.01, 0.02, 0.05, 0.1], "activation": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)], # you could also try exploring different numbers of hidden layers, different optimizers, etc. #"n_hidden_layers": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], #"optimizer_class": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)], "dropout_rate": [0.2, 0.3, 0.4, 0.5, 0.6], } rnd_search_dropout = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50, fit_params={"X_valid": X_valid1, "y_valid": y_valid1, "n_epochs": 1000}, random_state=42, verbose=2) rnd_search_dropout.fit(X_train1, y_train1) rnd_search_dropout.best_params_ y_pred = rnd_search_dropout.predict(X_test1) accuracy_score(y_test1, y_pred) """ Explanation: We are out of luck, dropout does not seem to help either. Let's try tuning the hyperparameters, perhaps we can squeeze a bit more performance out of this model: End of explanation """ reset_graph() restore_saver = tf.train.import_meta_graph("./my_best_mnist_model_0_to_4.meta") X = tf.get_default_graph().get_tensor_by_name("X:0") y = tf.get_default_graph().get_tensor_by_name("y:0") loss = tf.get_default_graph().get_tensor_by_name("loss:0") Y_proba = tf.get_default_graph().get_tensor_by_name("Y_proba:0") logits = Y_proba.op.inputs[0] accuracy = tf.get_default_graph().get_tensor_by_name("accuracy:0") """ Explanation: Oh well, dropout did not improve the model. Better luck next time! 
:) But that's okay, we have ourselves a nice DNN that achieves 99.40% accuracy on the test set using Batch Normalization, or 99.32% without BN. Let's see if some of this expertise on digits 0 to 4 can be transferred to the task of classifying digits 5 to 9. For the sake of simplicity we will reuse the DNN without BN, since it is almost as good. 9. Transfer learning 9.1. Exercise: create a new DNN that reuses all the pretrained hidden layers of the previous model, freezes them, and replaces the softmax output layer with a new one. Let's load the best model's graph and get a handle on all the important operations we will need. Note that instead of creating a new softmax output layer, we will just reuse the existing one (since it has the same number of outputs as the existing one). We will reinitialize its parameters before training. End of explanation """ learning_rate = 0.01 output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="logits") optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam2") training_op = optimizer.minimize(loss, var_list=output_layer_vars) correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") init = tf.global_variables_initializer() five_frozen_saver = tf.train.Saver() """ Explanation: To freeze the lower layers, we will exclude their variables from the optimizer's list of trainable variables, keeping only the output layer's trainable variables: End of explanation """ X_train2_full = mnist.train.images[mnist.train.labels >= 5] y_train2_full = mnist.train.labels[mnist.train.labels >= 5] - 5 X_valid2_full = mnist.validation.images[mnist.validation.labels >= 5] y_valid2_full = mnist.validation.labels[mnist.validation.labels >= 5] - 5 X_test2 = mnist.test.images[mnist.test.labels >= 5] y_test2 = mnist.test.labels[mnist.test.labels >= 5] - 5 """ Explanation: 9.2. 
Exercise: train this new DNN on digits 5 to 9, using only 100 images per digit, and time how long it takes. Despite this small number of examples, can you achieve high precision? Let's create the training, validation and test sets. We need to subtract 5 from the labels because TensorFlow expects integers from 0 to n_classes-1. End of explanation """ def sample_n_instances_per_class(X, y, n=100): Xs, ys = [], [] for label in np.unique(y): idx = (y == label) Xc = X[idx][:n] yc = y[idx][:n] Xs.append(Xc) ys.append(yc) return np.concatenate(Xs), np.concatenate(ys) X_train2, y_train2 = sample_n_instances_per_class(X_train2_full, y_train2_full, n=100) X_valid2, y_valid2 = sample_n_instances_per_class(X_valid2_full, y_valid2_full, n=30) """ Explanation: Also, for the purpose of this exercise, we want to keep only 100 instances per class in the training set (and let's keep only 30 instances per class in the validation set). Let's create a small function to do that: End of explanation """ import time n_epochs = 1000 batch_size = 20 max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_best_mnist_model_0_to_4") for var in output_layer_vars: var.initializer.run() t0 = time.time() for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2}) if loss_val < best_loss: save_path = five_frozen_saver.save(sess, "./my_mnist_model_5_to_9_five_frozen") best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 if checks_without_progress > max_checks_without_progress: print("Early stopping!") break print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: 
{:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) t1 = time.time() print("Total training time: {:.1f}s".format(t1 - t0)) with tf.Session() as sess: five_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_five_frozen") acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2}) print("Final test accuracy: {:.2f}%".format(acc_test * 100)) """ Explanation: Now let's train the model. This is the same training code as earlier, using early stopping, except for the initialization: we first initialize all the variables, then we restore the best model trained earlier (on digits 0 to 4), and finally we reinitialize the output layer variables. End of explanation """ hidden5_out = tf.get_default_graph().get_tensor_by_name("hidden5_out:0") """ Explanation: Well that's not a great accuracy, is it? Of course with such a tiny training set, and with only one layer to tweak, we should not expect miracles. 9.3. Exercise: try caching the frozen layers, and train the model again: how much faster is it now? 
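Before wiring this up in TensorFlow, the caching idea is worth seeing in isolation: because the frozen layers never change during training, their output for a fixed input is identical every epoch, so it can be computed once and reused. A minimal sketch in plain Python (the helper below is invented for illustration; it is not part of the notebook):

```python
import numpy as np

calls = {"frozen": 0}

def frozen_layers(X):
    # Stand-in for the expensive forward pass through the frozen layers.
    calls["frozen"] += 1
    return X * 2.0  # deterministic: same input always gives the same output

X_train = np.arange(6, dtype=float)
n_epochs = 50

# Naive loop: the frozen layers are re-evaluated on every epoch.
for _ in range(n_epochs):
    _ = frozen_layers(X_train)
naive_calls = calls["frozen"]

# Cached version: evaluate once, then train the upper layers on the cache.
calls["frozen"] = 0
cached = frozen_layers(X_train)
for _ in range(n_epochs):
    _ = cached + 1.0  # stand-in for training the unfrozen part
cached_calls = calls["frozen"]

print(naive_calls, cached_calls)  # 50 1
```

The speedup from caching comes entirely from replacing those 50 expensive evaluations with a single one.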
Let's start by getting a handle on the output of the last frozen layer: End of explanation """ import time n_epochs = 1000 batch_size = 20 max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_best_mnist_model_0_to_4") for var in output_layer_vars: var.initializer.run() t0 = time.time() hidden5_train = hidden5_out.eval(feed_dict={X: X_train2, y: y_train2}) hidden5_valid = hidden5_out.eval(feed_dict={X: X_valid2, y: y_valid2}) for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): h5_batch, y_batch = hidden5_train[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={hidden5_out: h5_batch, y: y_batch}) loss_val, acc_val = sess.run([loss, accuracy], feed_dict={hidden5_out: hidden5_valid, y: y_valid2}) if loss_val < best_loss: save_path = five_frozen_saver.save(sess, "./my_mnist_model_5_to_9_five_frozen") best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 if checks_without_progress > max_checks_without_progress: print("Early stopping!") break print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) t1 = time.time() print("Total training time: {:.1f}s".format(t1 - t0)) with tf.Session() as sess: five_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_five_frozen") acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2}) print("Final test accuracy: {:.2f}%".format(acc_test * 100)) """ Explanation: Now let's train the model using roughly the same code as earlier. The difference is that we compute the output of the top frozen layer at the beginning (both for the training set and the validation set), and we cache it. 
This makes training roughly 1.5 to 3 times faster in this example (this may vary greatly, depending on your system): End of explanation """ reset_graph() n_outputs = 5 restore_saver = tf.train.import_meta_graph("./my_best_mnist_model_0_to_4.meta") X = tf.get_default_graph().get_tensor_by_name("X:0") y = tf.get_default_graph().get_tensor_by_name("y:0") hidden4_out = tf.get_default_graph().get_tensor_by_name("hidden4_out:0") logits = tf.layers.dense(hidden4_out, n_outputs, kernel_initializer=he_init, name="new_logits") Y_proba = tf.nn.softmax(logits) xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy) correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") """ Explanation: 9.4. Exercise: try again reusing just four hidden layers instead of five. Can you achieve a higher precision? Let's load the best model again, but this time we will create a new softmax output layer on top of the 4th hidden layer: End of explanation """ learning_rate = 0.01 output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="new_logits") optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam2") training_op = optimizer.minimize(loss, var_list=output_layer_vars) init = tf.global_variables_initializer() four_frozen_saver = tf.train.Saver() """ Explanation: And now let's create the training operation. 
We want to freeze all the layers except for the new output layer: End of explanation """ n_epochs = 1000 batch_size = 20 max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_best_mnist_model_0_to_4") for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2}) if loss_val < best_loss: save_path = four_frozen_saver.save(sess, "./my_mnist_model_5_to_9_four_frozen") best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 if checks_without_progress > max_checks_without_progress: print("Early stopping!") break print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) with tf.Session() as sess: four_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_four_frozen") acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2}) print("Final test accuracy: {:.2f}%".format(acc_test * 100)) """ Explanation: And once again we train the model with the same code as earlier. Note: we could of course write a function once and use it multiple times, rather than copying almost the same training code over and over again, but as we keep tweaking the code slightly, the function would need multiple arguments and if statements, and it would have to be at the beginning of the notebook, where it would not make much sense to readers. In short it would be very confusing, so we're better off with copy & paste. 
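The training cells in this section all repeat one early-stopping pattern: track the best validation loss, count checks without progress, and stop once a patience threshold is exceeded. Stripped of the TensorFlow plumbing, the pattern can be sketched like this (the loss values are made up):

```python
def early_stopping_loop(val_losses, max_checks_without_progress=20):
    # Generic sketch of the early-stopping pattern used in the training cells:
    # stop once the validation loss has failed to improve for too many checks.
    best_loss = float("inf")
    checks_without_progress = 0
    epochs_run = 0
    for loss_val in val_losses:
        epochs_run += 1
        if loss_val < best_loss:
            best_loss = loss_val  # this is where the notebook saves the model
            checks_without_progress = 0
        else:
            checks_without_progress += 1
            if checks_without_progress > max_checks_without_progress:
                break  # "Early stopping!"
    return best_loss, epochs_run

# The loss improves for 3 epochs, then plateaus: with patience 2 we stop at epoch 6.
losses = [0.5, 0.4, 0.3] + [0.35] * 10
best, ran = early_stopping_loop(losses, max_checks_without_progress=2)
print(best, ran)  # 0.3 6
```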
End of explanation """ learning_rate = 0.01 unfrozen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="hidden[34]|new_logits") optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam3") training_op = optimizer.minimize(loss, var_list=unfrozen_vars) init = tf.global_variables_initializer() two_frozen_saver = tf.train.Saver() n_epochs = 1000 batch_size = 20 max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty with tf.Session() as sess: init.run() four_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_four_frozen") for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2}) if loss_val < best_loss: save_path = two_frozen_saver.save(sess, "./my_mnist_model_5_to_9_two_frozen") best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 if checks_without_progress > max_checks_without_progress: print("Early stopping!") break print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) with tf.Session() as sess: two_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_two_frozen") acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2}) print("Final test accuracy: {:.2f}%".format(acc_test * 100)) """ Explanation: Still not fantastic, but much better. 9.5. Exercise: now unfreeze the top two hidden layers and continue training: can you get the model to perform even better? 
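One detail worth noting in the cell above: the scope argument of tf.get_collection() is treated as a regular expression (matched with re.match), which is why "hidden[34]|new_logits" selects the variables of hidden3, hidden4 and new_logits at once. A quick check with plain re, using invented variable names:

```python
import re

scope = "hidden[34]|new_logits"
var_names = [
    "hidden1/kernel", "hidden2/kernel", "hidden3/kernel",
    "hidden4/kernel", "hidden5/kernel", "new_logits/kernel",
]

# re.match anchors at the start of each name, like tf.get_collection does.
unfrozen = [name for name in var_names if re.match(scope, name)]
print(unfrozen)  # ['hidden3/kernel', 'hidden4/kernel', 'new_logits/kernel']
```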
End of explanation """ learning_rate = 0.01 optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam4") training_op = optimizer.minimize(loss) init = tf.global_variables_initializer() no_frozen_saver = tf.train.Saver() n_epochs = 1000 batch_size = 20 max_checks_without_progress = 20 checks_without_progress = 0 best_loss = np.infty with tf.Session() as sess: init.run() two_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_two_frozen") for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2}) if loss_val < best_loss: save_path = no_frozen_saver.save(sess, "./my_mnist_model_5_to_9_no_frozen") best_loss = loss_val checks_without_progress = 0 else: checks_without_progress += 1 if checks_without_progress > max_checks_without_progress: print("Early stopping!") break print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format( epoch, loss_val, best_loss, acc_val * 100)) with tf.Session() as sess: no_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_no_frozen") acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2}) print("Final test accuracy: {:.2f}%".format(acc_test * 100)) """ Explanation: Let's check what accuracy we can get by unfreezing all layers: End of explanation """ dnn_clf_5_to_9 = DNNClassifier(n_hidden_layers=4, random_state=42) dnn_clf_5_to_9.fit(X_train2, y_train2, n_epochs=1000, X_valid=X_valid2, y_valid=y_valid2) y_pred = dnn_clf_5_to_9.predict(X_test2) accuracy_score(y_test2, y_pred) """ Explanation: Let's compare that to a DNN trained from scratch: End of explanation """ n_inputs = 28 * 28 # MNIST reset_graph() X = tf.placeholder(tf.float32, shape=(None, 2, n_inputs), name="X") X1, X2 = tf.unstack(X, axis=1) """ Explanation: 
Meh. How disappointing! ;) Transfer learning did not help much (if at all) in this task. At least we tried... Fortunately, the next exercise will get better results. 10. Pretraining on an auxiliary task In this exercise you will build a DNN that compares two MNIST digit images and predicts whether they represent the same digit or not. Then you will reuse the lower layers of this network to train an MNIST classifier using very little training data. 10.1. Exercise: Start by building two DNNs (let's call them DNN A and B), both similar to the one you built earlier but without the output layer: each DNN should have five hidden layers of 100 neurons each, He initialization, and ELU activation. Next, add one more hidden layer with 10 units on top of both DNNs. You should use TensorFlow's concat() function with axis=1 to concatenate the outputs of both DNNs along the horizontal axis, then feed the result to the hidden layer. Finally, add an output layer with a single neuron using the logistic activation function. Warning! There was an error in the book for this exercise: there was no instruction to add a top hidden layer. Without it, the neural network generally fails to start learning. If you have the latest version of the book, this error has been fixed. You could have two input placeholders, X1 and X2, one for the images that should be fed to the first DNN, and the other for the images that should be fed to the second DNN. It would work fine. However, another option is to have a single input placeholder to hold both sets of images (each row will hold a pair of images), and use tf.unstack() to split this tensor into two separate tensors, like this: End of explanation """ y = tf.placeholder(tf.int32, shape=[None, 1]) """ Explanation: We also need the labels placeholder. 
Each label will be 0 if the images represent different digits, or 1 if they represent the same digit: End of explanation """ dnn1 = dnn(X1, name="DNN_A") dnn2 = dnn(X2, name="DNN_B") """ Explanation: Now let's feed these inputs through two separate DNNs: End of explanation """ dnn_outputs = tf.concat([dnn1, dnn2], axis=1) """ Explanation: And let's concatenate their outputs: End of explanation """ dnn1.shape dnn2.shape """ Explanation: Each DNN outputs 100 activations (per instance), so the shape is [None, 100]: End of explanation """ dnn_outputs.shape """ Explanation: And of course the concatenated outputs have a shape of [None, 200]: End of explanation """ hidden = tf.layers.dense(dnn_outputs, units=10, activation=tf.nn.elu, kernel_initializer=he_init) logits = tf.layers.dense(hidden, units=1, kernel_initializer=he_init) y_proba = tf.nn.sigmoid(logits) """ Explanation: Now let's add an extra hidden layer with just 10 neurons, and the output layer, with a single neuron: End of explanation """ y_pred = tf.cast(tf.greater_equal(logits, 0), tf.int32) """ Explanation: The whole network predicts 1 if y_proba >= 0.5 (i.e. the network predicts that the images represent the same digit), or 0 otherwise. We compute instead logits >= 0, which is equivalent but faster to compute: End of explanation """ y_as_float = tf.cast(y, tf.float32) xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_as_float, logits=logits) loss = tf.reduce_mean(xentropy) """ Explanation: Now let's add the cost function: End of explanation """ learning_rate = 0.01 momentum = 0.95 optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True) training_op = optimizer.minimize(loss) """ Explanation: And we can now create the training operation using an optimizer: End of explanation """ y_pred_correct = tf.equal(y_pred, y) accuracy = tf.reduce_mean(tf.cast(y_pred_correct, tf.float32)) """ Explanation: We will want to measure our classifier's accuracy.
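The shortcut used a few cells up (thresholding logits at 0 instead of y_proba at 0.5) works because the sigmoid is monotonic and equals exactly 0.5 at 0, and the accuracy op is then just the mean of the element-wise matches. Both claims are easy to check with NumPy on toy numbers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-3.0, -0.1, 0.0, 0.2, 5.0])
pred_from_proba = (sigmoid(logits) >= 0.5).astype(int)
pred_from_logits = (logits >= 0).astype(int)
print(pred_from_logits)  # [0 0 1 1 1]

# Accuracy is just the mean of the element-wise matches, as in the graph above.
labels = np.array([0, 0, 1, 1, 1])
accuracy = float(np.mean(pred_from_logits == labels))
print(accuracy)  # 1.0
```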
End of explanation """ init = tf.global_variables_initializer() saver = tf.train.Saver() """ Explanation: And the usual init and saver: End of explanation """ X_train1 = mnist.train.images y_train1 = mnist.train.labels X_train2 = mnist.validation.images y_train2 = mnist.validation.labels X_test = mnist.test.images y_test = mnist.test.labels """ Explanation: 10.2. Exercise: split the MNIST training set into two sets: split #1 should contain 55,000 images, and split #2 should contain 5,000 images. Create a function that generates a training batch where each instance is a pair of MNIST images picked from split #1. Half of the training instances should be pairs of images that belong to the same class, while the other half should be images from different classes. For each pair, the training label should be 1 if the images are from the same class, or 0 if they are from different classes (this is the convention used in the code below). The MNIST dataset returned by TensorFlow's input_data() function is already split into 3 parts: a training set (55,000 instances), a validation set (5,000 instances) and a test set (10,000 instances). Let's use the first set to generate the training set composed of image pairs, and we will use the second set for the second phase of the exercise (to train a regular MNIST classifier). We will use the third set as the test set for both phases.
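Before looking at the notebook's generate_batch() below, the core idea can be sketched on toy labels: draw random index pairs, keep half that share a class (target 1) and half that do not (target 0). This simplified version ignores the odd-batch-size handling that the real function adds:

```python
import numpy as np

def toy_pair_batch(labels, batch_size, rng):
    # Simplified sketch: half same-class pairs (target 1), half different (0).
    pairs, targets = [], []
    while len(pairs) < batch_size // 2:  # same-class pairs
        i, j = rng.integers(0, len(labels), 2)
        if i != j and labels[i] == labels[j]:
            pairs.append((i, j))
            targets.append(1)
    while len(pairs) < batch_size:  # different-class pairs
        i, j = rng.integers(0, len(labels), 2)
        if labels[i] != labels[j]:
            pairs.append((i, j))
            targets.append(0)
    return pairs, targets

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 2, 2])
pairs, targets = toy_pair_batch(labels, 8, rng)
print(sum(targets))  # 4, i.e. exactly half of the pairs are "same"
```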
End of explanation """ def generate_batch(images, labels, batch_size): size1 = batch_size // 2 size2 = batch_size - size1 if size1 != size2 and np.random.rand() > 0.5: size1, size2 = size2, size1 X = [] y = [] while len(X) < size1: rnd_idx1, rnd_idx2 = np.random.randint(0, len(images), 2) if rnd_idx1 != rnd_idx2 and labels[rnd_idx1] == labels[rnd_idx2]: X.append(np.array([images[rnd_idx1], images[rnd_idx2]])) y.append([1]) while len(X) < batch_size: rnd_idx1, rnd_idx2 = np.random.randint(0, len(images), 2) if labels[rnd_idx1] != labels[rnd_idx2]: X.append(np.array([images[rnd_idx1], images[rnd_idx2]])) y.append([0]) rnd_indices = np.random.permutation(batch_size) return np.array(X)[rnd_indices], np.array(y)[rnd_indices] """ Explanation: Let's write a function that generates pairs of images: 50% representing the same digit, and 50% representing different digits. There are many ways to implement this. In this implementation, we first decide how many "same" pairs (i.e. pairs of images representing the same digit) we will generate, and how many "different" pairs (i.e. pairs of images representing different digits). We could just use batch_size // 2 but we want to handle the case where it is odd (granted, that might be overkill!). Then we generate random pairs and we pick the right number of "same" pairs, then we generate the right number of "different" pairs. 
Finally we shuffle the batch and return it: End of explanation """ batch_size = 5 X_batch, y_batch = generate_batch(X_train1, y_train1, batch_size) """ Explanation: Let's test it to generate a small batch of 5 image pairs: End of explanation """ X_batch.shape, X_batch.dtype """ Explanation: Each row in X_batch contains a pair of images: End of explanation """ plt.figure(figsize=(3, 3 * batch_size)) plt.subplot(121) plt.imshow(X_batch[:,0].reshape(28 * batch_size, 28), cmap="binary", interpolation="nearest") plt.axis('off') plt.subplot(122) plt.imshow(X_batch[:,1].reshape(28 * batch_size, 28), cmap="binary", interpolation="nearest") plt.axis('off') plt.show() """ Explanation: Let's look at these pairs: End of explanation """ y_batch """ Explanation: And let's look at the labels (0 means "different", 1 means "same"): End of explanation """ X_test1, y_test1 = generate_batch(X_test, y_test, batch_size=len(X_test)) """ Explanation: Perfect! 10.3. Exercise: train the DNN on this training set. For each image pair, you can simultaneously feed the first image to DNN A and the second image to DNN B. The whole network will gradually learn to tell whether two images belong to the same class or not. Let's generate a test set composed of many pairs of images pulled from the MNIST test set: End of explanation """ n_epochs = 100 batch_size = 500 with tf.Session() as sess: init.run() for epoch in range(n_epochs): for iteration in range(mnist.train.num_examples // batch_size): X_batch, y_batch = generate_batch(X_train1, y_train1, batch_size) loss_val, _ = sess.run([loss, training_op], feed_dict={X: X_batch, y: y_batch}) print(epoch, "Train loss:", loss_val) if epoch % 5 == 0: acc_test = accuracy.eval(feed_dict={X: X_test1, y: y_test1}) print(epoch, "Test accuracy:", acc_test) save_path = saver.save(sess, "./my_digit_comparison_model.ckpt") """ Explanation: And now, let's train the model. 
There's really nothing special about this step, except for the fact that we need a fairly large batch_size, otherwise the model fails to learn anything and ends up with an accuracy of 50%: End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int32, shape=(None), name="y") dnn_outputs = dnn(X, name="DNN_A") frozen_outputs = tf.stop_gradient(dnn_outputs) logits = tf.layers.dense(frozen_outputs, n_outputs, kernel_initializer=he_init) Y_proba = tf.nn.softmax(logits) xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True) training_op = optimizer.minimize(loss) correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() dnn_A_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="DNN_A") restore_saver = tf.train.Saver(var_list={var.op.name: var for var in dnn_A_vars}) saver = tf.train.Saver() """ Explanation: All right, we reach 97.6% accuracy on this digit comparison task. That's not too bad, this model knows a thing or two about comparing handwritten digits! Let's see if some of that knowledge can be useful for the regular MNIST classification task. 10.4. Exercise: now create a new DNN by reusing and freezing the hidden layers of DNN A and adding a softmax output layer on top with 10 neurons. Train this network on split #2 and see if you can achieve high performance despite having only 500 images per class. Let's create the model; it is pretty straightforward. There are many ways to freeze the lower layers, as explained in the book. In this example, we chose to use the tf.stop_gradient() function (note that the new logits layer must be fed frozen_outputs, not dnn_outputs, otherwise nothing is actually frozen).
Note that we need one Saver to restore the pretrained DNN A, and another Saver to save the final model: End of explanation """ n_epochs = 100 batch_size = 50 with tf.Session() as sess: init.run() restore_saver.restore(sess, "./my_digit_comparison_model.ckpt") for epoch in range(n_epochs): rnd_idx = np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) if epoch % 10 == 0: acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test}) print(epoch, "Test accuracy:", acc_test) save_path = saver.save(sess, "./my_mnist_model_final.ckpt") """ Explanation: Now on to training! We first initialize all variables (including the variables in the new output layer), then we restore the pretrained DNN A. Next, we just train the model on the small MNIST dataset (containing just 5,000 images): End of explanation """ reset_graph() n_inputs = 28 * 28 # MNIST n_outputs = 10 X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int32, shape=(None), name="y") dnn_outputs = dnn(X, name="DNN_A") logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init) Y_proba = tf.nn.softmax(logits) xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) loss = tf.reduce_mean(xentropy, name="loss") optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True) training_op = optimizer.minimize(loss) correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() dnn_A_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="DNN_A") restore_saver = tf.train.Saver(var_list={var.op.name: var for var in dnn_A_vars}) saver = tf.train.Saver() n_epochs = 150 batch_size = 50 with tf.Session() as sess: init.run() for epoch in range(n_epochs): rnd_idx = 
np.random.permutation(len(X_train2)) for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size): X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices] sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) if epoch % 10 == 0: acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test}) print(epoch, "Test accuracy:", acc_test) save_path = saver.save(sess, "./my_mnist_model_final.ckpt") """ Explanation: Well, 96.7% accuracy, that's not the best MNIST model we have trained so far, but recall that we are only using a small training set (just 500 images per digit). Let's compare this result with the same DNN trained from scratch, without using transfer learning: End of explanation """
librosa/tutorial
Librosa tutorial.ipynb
cc0-1.0
import librosa print(librosa.__version__) y, sr = librosa.load(librosa.util.example_audio_file()) print(len(y), sr) """ Explanation: Librosa tutorial Version: 0.4.3 Tutorial home: https://github.com/librosa/tutorial Librosa home: http://librosa.github.io/ User forum: https://groups.google.com/forum/#!forum/librosa Environments We assume that you have already installed Anaconda. If you don't have an environment, create one with the following command: bash conda create --name YOURNAME scipy jupyter ipython (Replace YOURNAME by whatever you want to call the new environment.) Then, activate the new environment: bash source activate YOURNAME Installing librosa Librosa can then be installed with the following command: bash conda install -c conda-forge librosa NOTE: Windows users need to install audio decoding libraries separately. We recommend ffmpeg. Test drive Start Jupyter: bash jupyter notebook and open a new notebook. Then, run the following: End of explanation """ y_orig, sr_orig = librosa.load(librosa.util.example_audio_file(), sr=None) print(len(y_orig), sr_orig) """ Explanation: Documentation! Librosa has extensive documentation with examples. When in doubt, go to http://librosa.github.io/librosa/ Conventions All data are basic numpy types Audio buffers are called y Sampling rate is called sr The last axis is time-like: y[1000] is the 1001st sample S[:, 100] is the 101st frame of S Defaults sr=22050, hop_length=512 Roadmap for today librosa.core librosa.feature librosa.display librosa.beat librosa.segment librosa.decompose librosa.core Low-level audio processes Unit conversion Time-frequency representations To load a signal at its native sampling rate, use sr=None End of explanation """ sr = 22050 y = librosa.resample(y_orig, sr_orig, sr) print(len(y), sr) """ Explanation: Resampling is easy End of explanation """ print(librosa.samples_to_time(len(y), sr)) """ Explanation: But what's that in seconds?
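The conversion that librosa.samples_to_time performs is nothing more than dividing the sample count by the sampling rate, which is easy to verify by hand:

```python
# Duration in seconds is simply n_samples / sr.
sr = 22050
n_samples = sr * 61  # a 61-second buffer at 22050 Hz
duration = n_samples / sr
print(duration)  # 61.0
```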
End of explanation """ D = librosa.stft(y) print(D.shape, D.dtype) """ Explanation: Spectral representations Short-time Fourier transform underlies most analysis. librosa.stft returns a complex matrix D. D[f, t] is the FFT value at frequency f, time (frame) t. End of explanation """ import numpy as np S, phase = librosa.magphase(D) print(S.dtype, phase.dtype, np.allclose(D, S * phase)) """ Explanation: Often, we only care about the magnitude. D contains both magnitude S and phase $\phi$. $$ D_{ft} = S_{ft} \exp\left(j \phi_{ft}\right) $$ End of explanation """ C = librosa.cqt(y, sr=sr) print(C.shape, C.dtype) """ Explanation: Constant-Q transforms The CQT gives a logarithmically spaced frequency basis. This representation is more natural for many analysis tasks. End of explanation """ # Exercise 0 solution y2, sr2 = librosa.load( ) D = librosa.stft(y2, hop_length= ) """ Explanation: Exercise 0 Load a different audio file Compute its STFT with a different hop length End of explanation """ melspec = librosa.feature.melspectrogram(y=y, sr=sr) # Melspec assumes power, not energy as input melspec_stft = librosa.feature.melspectrogram(S=S**2, sr=sr) print(np.allclose(melspec, melspec_stft)) """ Explanation: librosa.feature Standard features: librosa.feature.melspectrogram librosa.feature.mfcc librosa.feature.chroma Lots more... 
Feature manipulation: librosa.feature.stack_memory librosa.feature.delta Most features work either with audio or STFT input End of explanation """ # Displays are built with matplotlib import matplotlib.pyplot as plt # Let's make plots pretty import matplotlib.style as ms ms.use('seaborn-muted') # Render figures interactively in the notebook %matplotlib nbagg # IPython gives us an audio widget for playback from IPython.display import Audio """ Explanation: librosa.display Plotting routines for spectra and waveforms Note: major overhaul coming in 0.5 End of explanation """ plt.figure() librosa.display.waveplot(y=y, sr=sr) """ Explanation: Waveform display End of explanation """ plt.figure() librosa.display.specshow(melspec, y_axis='mel', x_axis='time') plt.colorbar() """ Explanation: A basic spectrogram display End of explanation """ # Exercise 1 solution X = librosa.feature.XX() plt.figure() librosa.display.specshow( ) """ Explanation: Exercise 1 Pick a feature extractor from the librosa.feature submodule and plot the output with librosa.display.specshow Bonus: Customize the plot using either specshow arguments or pyplot functions End of explanation """ tempo, beats = librosa.beat.beat_track(y=y, sr=sr) print(tempo) print(beats) """ Explanation: librosa.beat Beat tracking and tempo estimation The beat tracker returns the estimated tempo and beat positions (measured in frames) End of explanation """ clicks = librosa.clicks(frames=beats, sr=sr, length=len(y)) Audio(data=y + clicks, rate=sr) """ Explanation: Let's sonify it! 
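A note on the beat tracker used in this section: the beat positions it returns are frame indices, and with the default hop_length=512 converting them to sample positions (what librosa.frames_to_samples does) is a plain multiplication, with a further division by sr to get seconds:

```python
import numpy as np

hop_length = 512
sr = 22050
beat_frames = np.array([0, 43, 86])

beat_samples = beat_frames * hop_length     # what librosa.frames_to_samples does
beat_times = beat_frames * hop_length / sr  # what librosa.frames_to_time does
print(beat_samples)  # [    0 22016 44032]
print(beat_times)
```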
End of explanation """ chroma = librosa.feature.chroma_cqt(y=y, sr=sr) chroma_sync = librosa.feature.sync(chroma, beats) plt.figure(figsize=(6, 3)) plt.subplot(2, 1, 1) librosa.display.specshow(chroma, y_axis='chroma') plt.ylabel('Full resolution') plt.subplot(2, 1, 2) librosa.display.specshow(chroma_sync, y_axis='chroma') plt.ylabel('Beat sync') """ Explanation: Beats can be used to downsample features End of explanation """ R = librosa.segment.recurrence_matrix(chroma_sync) plt.figure(figsize=(4, 4)) librosa.display.specshow(R) """ Explanation: librosa.segment Self-similarity / recurrence Segmentation Recurrence matrices encode self-similarity R[i, j] = similarity between frames (i, j) Librosa computes recurrence between k-nearest neighbors. End of explanation """ R2 = librosa.segment.recurrence_matrix(chroma_sync, mode='affinity', sym=True) plt.figure(figsize=(5, 4)) librosa.display.specshow(R2) plt.colorbar() """ Explanation: We can include affinity weights for each link as well. End of explanation """ # Exercise 2 solution """ Explanation: Exercise 2 Plot a recurrence matrix using different features Bonus: Use a custom distance metric End of explanation """ D_harm, D_perc = librosa.decompose.hpss(D) y_harm = librosa.istft(D_harm) y_perc = librosa.istft(D_perc) Audio(data=y_harm, rate=sr) Audio(data=y_perc, rate=sr) """ Explanation: librosa.decompose hpss: Harmonic-percussive source separation nn_filter: Nearest-neighbor filtering, non-local means, Repet-SIM decompose: NMF, PCA and friends Separating harmonics from percussives is easy End of explanation """ # Fit the model W, H = librosa.decompose.decompose(S, n_components=16, sort=True) plt.figure(figsize=(6, 3)) plt.subplot(1, 2, 1), plt.title('W') librosa.display.specshow(librosa.logamplitude(W**2), y_axis='log') plt.subplot(1, 2, 2), plt.title('H') librosa.display.specshow(H, x_axis='time') # Reconstruct the signal using only the first component S_rec = W[:, :1].dot(H[:1, :]) y_rec = librosa.istft(S_rec * 
phase) Audio(data=y_rec, rate=sr) """ Explanation: NMF is pretty easy also! End of explanation """
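The rank-1 reconstruction above, W[:, :1].dot(H[:1, :]), generalizes: with W of shape (n_freq, k) and H of shape (k, n_frames), keeping the first r components yields an (n_freq, n_frames) approximation, and keeping all k components reproduces the full product. A NumPy sketch on random data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_freq, n_frames, k = 6, 8, 4
W = rng.random((n_freq, k))
H = rng.random((k, n_frames))

# Keep only the first component, as in the notebook cell above.
S_rank1 = W[:, :1].dot(H[:1, :])
print(S_rank1.shape)  # (6, 8)

# Using all k components reproduces the full product exactly.
S_full = W.dot(H)
assert np.allclose(W[:, :k].dot(H[:k, :]), S_full)
```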
Alenwang1/Python_Practice
Practice_03/Python与择天记.ipynb
gpl-3.0
# Read the Ze Tian Ji novel with open('64565.txt',encoding='utf-8') as f: cont = [line.strip() for line in f.readlines() if line.strip()] # Try printing a short passage to the console for line in cont[105:107]: print(line) """ Explanation: Ze Tian Ji (《择天记》) and Natural Language Processing This article was inspired by a piece by Li Jin on applying natural language processing to Jin Yong's wuxia novels; if you are interested, you can follow him on GitHub. Link 02 Foreword In the previous article we used Python to collect data from the web. That reusable little Python program can grab the poster and stills of any movie. When you want a poster of Pirates of the Caribbean 5 for your desktop wallpaper, it can easily help you out! Link to the previous article Perhaps you still haven't made up your mind to learn Python, so in today's tutorial we will do something fun with Python together. I believe this small project will let you experience how powerful Python is! 03 Python and Ze Tian Ji Today we will use Python to analyze a popular web novel, Ze Tian Ji. In this project we can do several interesting things: use Python to work out the system of cultivation realms (power levels) in Ze Tian Ji; use Python to find the couples in Ze Tian Ji; use Python to identify the big shots in Ze Tian Ji; use Python to analyze the writing style of Mao Ni (猫腻, the author of Ze Tian Ji); ..... 04 Starting the analysis 04.01 Reading the novel text First download a txt version of Ze Tian Ji from the web and save it in the project directory. My downloaded copy is saved as 64565.txt. Use Python to read the whole text and build a list. End of explanation """ # Read the character names with open('names.txt') as f: names = [name.strip() for name in f.readlines()] print(names) # Count how many times each character appears def find_main_charecters(num = 10): novel = ''.join(cont) count = [] for name in names: count.append([name,novel.count(name)]) count.sort(key = lambda v : v[1],reverse=True) return count[:num] find_main_charecters() """ Explanation: 04.02 Counting character appearances in Ze Tian Ji We need to collect the names of the characters that appear in Ze Tian Ji so that we can count their appearances. The main characters' names were scraped from the character list in the Baidu Baike entry for Ze Tian Ji and saved as a TXT file named names.txt. End of explanation """ from IPython.display import HTML chart_header_html = """ <div id="main_charecters" style="width: 800px;height: 600px;" class="chart"></div> <script> require.config({ paths:{ echarts: '//cdn.bootcss.com/echarts/3.2.3/echarts.min', } }); require(['echarts'],function(ec){ var charectersNames = ['陈长生', '唐三十六', '徐有容', '苏离', '落落', '折袖', '苟寒食', '轩辕破', '魔君', '王破', '商行舟', '周通', '南客', '莫雨', '秋山君', '黑袍', '天海圣后', '王之策', '朱洛', '肖张'] var charectersCount = [15673, 3609, 3262, 1889, 1877, 1656, 1183, 1158, 1146, 1134, 1110, 945, 874, 746, 624, 589, 554, 532, 513, 476] var charectersCountChart = ec.init(document.getElementById('main_charecters')) charectersCountChart.setOption({ title:{ text: '择天记人物出场次数统计图' }, tooltip:{}, xAxis:{ type: 'value', data: ['出场次数'] }, yAxis:{ type: 'category', data:
charectersNames.reverse() }, series:{ name: '主要人物', type: 'bar', data: charectersCount.reverse() } }) }); </script> """ # 我本地的浏览器可以渲染出来,放到git上就不行了...所以拿图片代替了 """ Explanation: 我们使用echarts做数据可视化的工作 End of explanation """ import jieba # 读取门派、境界、招式等名词 with open('novelitems.txt') as f: items = [item.strip() for item in f.readlines()] for i in items[10:20]: print(i) """ Explanation: 我们可以清楚的看到,《择天记》的主角为陈长生共出场接近16000次,紧随其后的是唐三十六(男性)和徐有容(有容奶大是女性)。仅从这个简单的数据,我们就可以推测唐三十六是主角陈长生的好基友,徐有容很有可能和陈长生是恋人关系。 另外,我们看到其他出场率比较相似的人物中,女性角色明显不多。我们可以大致推断《择天记》这本小说是一个单女主的小说。更进一步的说,徐有容和陈长生在整部小说中可能都很专情。 出场次数前20的人物中,可以看出一个明显的规律——主要人物的人名都非常奇葩,一看就不是普通人能叫的名字!在现实生活中不可能有人叫唐三十六,折袖,苟寒食,商行舟,南客这样的名字。所以,《择天记》的作者多半是一个很中二的人。 另外还有一个有趣的事情,就是这些人物的姓氏明显都不相同,只有王之策和王破都姓王。这一点我们可以推断,《择天记》中的故事很有可能不是关于家族之间的纷争,或者说着墨不多。我们看到魔君的出场次数达到700多次,这种名字一看就是反派,无一例外。 第一部分的结论 《择天记》男主陈长生,男配唐三十六,女主徐有容 《择天记》是单女主的小说,男主女主在整部小说中可能都很专情。 《择天记》的作者大概率是一个很中二的人 《择天记》中的故事很有可能不是关于家族之间的纷争 《择天记》中的反派BOSS可能叫做魔君 04.03 《择天记》与自然语言处理 统计人物出场次数显然是Python中最最基础的操作,那我们现在来使用更高级的算法来尝试分析《择天记》这本小说。 我们使用gensim库对《择天记》进行 Word2Vec 的操作,这种操作可以将小说中的词映射到向量空间中,从而分析出不同词汇之间的关系。另外值得一提的是,由于中文和英文不同,中文词语之间没有空格,所以我们需要使用Python第三方库结巴分词对文本进行分词。我们为了提高分词的准确性,我们需要将小说中一些专属名词添加到词库中。 中文分词 End of explanation """ for name in names: jieba.add_word(name) for item in items: jieba.add_word(item) """ Explanation: 我们需要将这些名词添加到结巴分词的词库中。 End of explanation """ novel_sentences = [] # 对小说进行分词,这里只是任选了一句 # for line in cont: for line in cont[:6]: words = list(jieba.cut(line)) novel_sentences.append(words) novel_sentences[4] """ Explanation: 我们现在就可以开始使用机器学习来训练模型了。 End of explanation """ # 按照默认参数进行训练 model = gensim.models.Word2Vec(sentences, size=100, window=5, min_count=5, workers=4) # 把训练好的模型存下来 model.save("zetianjied.model") # 训练模型需要大概20分钟左右的时间,因性能而异。由于模型太大了就不放在github上面了。 import gensim # 读取模型 model = gensim.models.Word2Vec.load("zetianjied.model") """ Explanation: 训练模型 End of explanation """ # 寻找相似境界 for s in model.most_similar(positive=["坐照"]): print(s) """ Explanation: 寻找境界体系 
首先,让我们看看《择天记》中实力境界的划分。作者在一开始告诉我们有一种境界叫做坐照境。那么,我们就通过上文中用Word2Vec 训练出来的模型找到与坐照类似的词汇。 End of explanation """ for s in model.most_similar(positive=["魔君"])[:7]: print(s) """ Explanation: 择天记中的大人物 找到择天记中和反派魔君实力水平相似的人物 End of explanation """ d = model.most_similar(positive=['无穷碧', '陈长生'], negative=['徐有容'])[0] d """ Explanation: 我们可以看到结果中出现了与魔君实力水平相似的前七个人物,这些人物可以与反派BOSS相提并论,肯定是站在《择天记》实力巅峰的大人物。事实上这些人物在原著中都是从圣境。 择天记中的情侣 训练出来的模型还可以找到具有相似联系的词汇,比如给定情侣关系的两个人物,模型会找到小说中的情侣关系。 我们先来测试一下模型,因为我们知道别样红和无穷碧是在小说中直接描写的情侣,所以我们给定陈长生和徐有容之间的关系,看看程序能否找出和无穷碧有情侣关系的人物。 End of explanation """ d = model.most_similar(positive=['折袖', '无穷碧'], negative=['别样红'])[0] d """ Explanation: 我们随便找一个人物,比如:折袖。运行程序,看一看机器眼中折袖与谁是情侣? End of explanation """
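The most_similar calls above with positive and negative lists implement vector-arithmetic analogies: the query vector is the sum of the positive embeddings minus the negative ones, and the answer is the word whose embedding has the highest cosine similarity to that query. A toy sketch of the idea with made-up 2-D vectors — the names and numbers here are purely illustrative, not the trained gensim model:

```python
import math

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(vectors, positive, negative):
    # query = sum(positive) - sum(negative); answer = nearest other word by cosine
    target = [sum(vectors[w][i] for w in positive) - sum(vectors[w][i] for w in negative)
              for i in range(2)]
    used = set(positive) | set(negative)
    return max((w for w in vectors if w not in used),
               key=lambda w: cosine(vectors[w], target))

# Made-up 2-D "embeddings" for illustration only
vectors = {'chen': (1.0, 0.2), 'xu': (1.0, 0.8),
           'wuqiongbi': (0.2, 0.8), 'bieyanghong': (0.2, 0.2),
           'mojun': (-1.0, 0.5)}
partner = most_similar(vectors, positive=['wuqiongbi', 'chen'], negative=['xu'])
```

With these toy vectors, the "partner" relation is the same offset between chen/xu and bieyanghong/wuqiongbi, so the analogy query recovers bieyanghong.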
roebius/deeplearning1_keras2
nbs/statefarm.ipynb
apache-2.0
from __future__ import division, print_function %matplotlib inline #path = "data/state/" path = "data/state/sample/" from importlib import reload # Python 3 import utils; reload(utils) from utils import * from IPython.display import FileLink batch_size=64 """ Explanation: Enter State Farm End of explanation """ batches = get_batches(path+'train', batch_size=batch_size) val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/(batch_size*2))) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) """ Explanation: Setup batches End of explanation """ trn = get_data(path+'train') val = get_data(path+'valid') save_array(path+'results/val.dat', val) save_array(path+'results/trn.dat', trn) val = load_array(path+'results/val.dat') trn = load_array(path+'results/trn.dat') """ Explanation: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.) 
End of explanation """ def conv1(batches): model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Conv2D(32,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Conv2D(64,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dense(10, activation='softmax') ]) model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches, validation_steps=validation_steps) model.optimizer.lr = 0.001 model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) return model model = conv1(batches) """ Explanation: Re-run sample experiments on full dataset We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models. Single conv layer End of explanation """ gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path+'train', gen_t, batch_size=batch_size) model = conv1(batches) model.optimizer.lr = 0.0001 model.fit_generator(batches, steps_per_epoch, epochs=15, validation_data=val_batches, validation_steps=validation_steps) """ Explanation: Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results. 
Data augmentation End of explanation """ gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path+'train', gen_t, batch_size=batch_size) model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Conv2D(32,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Conv2D(64,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Conv2D(128,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(10, activation='softmax') ]) model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches, validation_steps=validation_steps) model.optimizer.lr=0.001 model.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches, validation_steps=validation_steps) model.optimizer.lr=0.00001 model.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches, validation_steps=validation_steps) """ Explanation: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation. Four conv/pooling pairs + dropout Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help. 
End of explanation """ vgg = Vgg16() model=vgg.model last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1] conv_layers = model.layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) test_batches = get_batches(path+'test', batch_size=batch_size*2, shuffle=False) conv_feat = conv_model.predict_generator(batches, int(np.ceil(batches.samples/batch_size))) conv_val_feat = conv_model.predict_generator(val_batches, int(np.ceil(val_batches.samples/(batch_size*2)))) conv_test_feat = conv_model.predict_generator(test_batches, int(np.ceil(test_batches.samples/(batch_size*2)))) save_array(path+'results/conv_val_feat.dat', conv_val_feat) save_array(path+'results/conv_test_feat.dat', conv_test_feat) save_array(path+'results/conv_feat.dat', conv_feat) conv_feat = load_array(path+'results/conv_feat.dat') conv_val_feat = load_array(path+'results/conv_val_feat.dat') conv_val_feat.shape """ Explanation: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however... Imagenet conv features Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.) 
End of explanation """ def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=2, validation_data=(conv_val_feat, val_labels)) bn_model.save_weights(path+'models/conv8.h5') """ Explanation: Batchnorm dense layers on pretrained conv layers Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers. End of explanation """ gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False) """ Explanation: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model. Pre-computed data augmentation + dropout We'll use our usual data augmentation parameters: End of explanation """ da_conv_feat = conv_model.predict_generator(da_batches, 5*int(np.ceil((da_batches.samples)/(batch_size))), workers=3) save_array(path+'results/da_conv_feat2.dat', da_conv_feat) da_conv_feat = load_array(path+'results/da_conv_feat2.dat') """ Explanation: We use those to create a dataset of convolutional features 5x bigger than the training set. 
End of explanation """ da_conv_feat = np.concatenate([da_conv_feat, conv_feat]) """ Explanation: Let's include the real training data as well in its non-augmented form. End of explanation """ da_trn_labels = np.concatenate([trn_labels]*6) """ Explanation: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too. End of explanation """ def get_bn_da_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_da_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) """ Explanation: Based on some experiments the previous model works well, with bigger dense layers. End of explanation """ bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.0001 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) """ Explanation: Now we can train the model as usual, with pre-computed augmented data. End of explanation """ bn_model.save_weights(path+'models/da_conv8_1.h5') """ Explanation: Looks good - let's save those weights. End of explanation """ val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size) """ Explanation: Pseudo labeling We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. 
At a later date we'll try using the test set. To do this, we simply calculate the predictions of our model... End of explanation """ comb_pseudo = np.concatenate([da_trn_labels, val_pseudo]) comb_feat = np.concatenate([da_conv_feat, conv_val_feat]) """ Explanation: ...concatenate them with our training labels... End of explanation """ bn_model.load_weights(path+'models/da_conv8_1.h5') bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=1, validation_data=(conv_val_feat, val_labels)) bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.00001 bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) """ Explanation: ...and fine-tune our model using that data. End of explanation """ bn_model.save_weights(path+'models/bn-ps8.h5') """ Explanation: That's a distinct improvement - even although the validation set isn't very big. This looks encouraging for when we try this on the test set. End of explanation """ def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx) val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size*2) np.mean(keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()) conv_test_feat = load_array(path+'results/conv_test_feat.dat') preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2) subm = do_clip(preds,0.93) subm_name = path+'results/subm.gz' classes = sorted(batches.class_indices, key=batches.class_indices.get) submission = pd.DataFrame(subm, columns=classes) submission.insert(0, 'img', [a[4:] for a in test_filenames]) submission.head() submission.to_csv(subm_name, index=False, compression='gzip') FileLink(subm_name) """ Explanation: Submit We'll find a good clipping amount using the validation set, prior to submitting. 
End of explanation """ #for l in get_bn_layers(p): conv_model.add(l) # this choice would give a weight shape error for l in get_bn_da_layers(p): conv_model.add(l) # ... so probably this is the right one for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]): l2.set_weights(l1.get_weights()) for l in conv_model.layers: l.trainable =False for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True comb = np.concatenate([trn, val]) # not knowing what the experiment was about, added this to avoid a shape match error with comb using gen_t.flow comb_pseudo = np.concatenate([trn_labels, val_pseudo]) gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04, shear_range=0.03, channel_shift_range=10, width_shift_range=0.08) batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size) val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False) conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, steps_per_epoch, epochs=1, validation_data=val_batches, validation_steps=validation_steps) conv_model.optimizer.lr = 0.0001 conv_model.fit_generator(batches, steps_per_epoch, epochs=3, validation_data=val_batches, validation_steps=validation_steps) for l in conv_model.layers[16:]: l.trainable =True #- added compile instruction in order to avoid Keras 2.1 warning message conv_model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.optimizer.lr = 0.00001 conv_model.fit_generator(batches, steps_per_epoch, epochs=8, validation_data=val_batches, validation_steps=validation_steps) conv_model.save_weights(path+'models/conv8_ps.h5') #conv_model.load_weights(path+'models/conv8_da.h5') # conv8_da.h5 was not saved in this notebook val_pseudo = conv_model.predict(val, batch_size=batch_size*2) save_array(path+'models/pseudo8_da.dat', val_pseudo) """ Explanation: This gets 0.534 on the leaderboard. 
The "things that didn't really work" section You can safely ignore everything from here on, because they didn't really help. Finetune some conv layers too End of explanation """ drivers_ds = pd.read_csv(path+'driver_imgs_list.csv') drivers_ds.head() img2driver = drivers_ds.set_index('img')['subject'].to_dict() driver2imgs = {k: g["img"].tolist() for k,g in drivers_ds[['subject', 'img']].groupby("subject")} # It seems this function is not used in this notebook def get_idx(driver_list): return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list] # drivers = driver2imgs.keys() # Python 2 drivers = list(driver2imgs) # Python 3 rnd_drivers = np.random.permutation(drivers) ds1 = rnd_drivers[:len(rnd_drivers)//2] ds2 = rnd_drivers[len(rnd_drivers)//2:] # The following cells seem to require some preparation code not included in this notebook models=[fit_conv([d]) for d in drivers] models=[m for m in models if m is not None] all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models]) avg_preds = all_preds.mean(axis=0) avg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1) keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval() keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval() """ Explanation: Ensembling End of explanation """
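As an aside, the do_clip step used before submitting can be motivated in isolation: log loss is unbounded for a confidently wrong prediction, so flooring the probabilities caps the worst-case penalty. A minimal numpy illustration with a made-up probability (not the competition data):

```python
import numpy as np

def do_clip(arr, mx):
    # same rule as in the notebook: the floor spreads the
    # remaining (1 - mx) mass over the 9 other classes
    return np.clip(arr, (1 - mx) / 9, mx)

wrong = np.array([1e-9])              # true class assigned ~zero probability
loss_raw = float(-np.log(wrong)[0])
loss_clipped = float(-np.log(do_clip(wrong, 0.93))[0])
```

Clipping turns an essentially unbounded penalty into a fixed, modest one, which is why it reliably improves the leaderboard log-loss score.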
hashiprobr/redes-sociais
encontro02/5-kruskal.ipynb
gpl-3.0
import sys sys.path.append('..') import socnet as sn """ Explanation: Meeting 02, Part 5: Kruskal's Algorithm This guide was written to help you reach the following goals: implement Kruskal's algorithm; practice using the course library. First, let's import the library: End of explanation """ sn.graph_width = 320 sn.graph_height = 180 """ Explanation: Next, let's configure the visual properties: End of explanation """ g = sn.load_graph('5-kruskal.gml', has_pos=True) for e in g.edges_iter(): g.edge[e[0]][e[1]]['label'] = g.edge[e[0]][e[1]]['c'] sn.show_graph(g, elab=True) """ Explanation: Finally, let's load and visualize a graph: End of explanation """ class Forest(object): def __init__(self, g): self.g = g self.f = set() for n in g.nodes(): self._make_set(n) def _make_set(self, x): g.node[x]['p'] = x g.node[x]['rank'] = 0 def _union(self, x, y): self._link(self._find_set(x), self._find_set(y)) def _link(self, x, y): if g.node[x]['rank'] > g.node[y]['rank']: g.node[y]['p'] = x else: g.node[x]['p'] = y if g.node[x]['rank'] == g.node[y]['rank']: g.node[y]['rank'] = g.node[y]['rank'] + 1 def _find_set(self, x): if x != g.node[x]['p']: g.node[x]['p'] = self._find_set(g.node[x]['p']) return g.node[x]['p'] def adding_does_not_form_circuit(self, n, m): return self._find_set(n) != self._find_set(m) def add(self, n, m): self.f.add((n, m)) self._union(n, m) """ Explanation: Minimum spanning trees We say that: * a walk $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ is a circuit if $\langle n_0, n_1, \ldots, n_{k-2} \rangle$ is a path and $n_0 = n_{k-1}$; * a set of edges $F$ is a forest if there are no circuits in the graph $(N, F)$; * a graph is connected if for any nodes $s$ and $t$ there is a path from $s$ to $t$; * a forest $T$ is a spanning tree if the graph $(N, T)$ is connected. The cost of a spanning tree $T$ is $\sum_{{n, m} \in T} c(n, m)$.
A spanning tree is minimum if no other spanning tree has a lower cost. Note that multiple minimum spanning trees may exist. Kruskal's algorithm We can efficiently obtain a minimum spanning tree using Kruskal's algorithm. The idea of this algorithm is simple: we initialize a forest $F$ as the empty set and examine all edges in non-decreasing order of cost. For each edge, we add it to $F$ if this addition does not form a circuit in the graph $(N, F)$. Let's specify a class that represents the forest. It is not necessary to understand all of its details — just that the attribute f is the set of edges and the last two methods are self-explanatory. End of explanation """ # your answer """ Explanation: Exercise Build a visualization of Kruskal's algorithm. Use the Forest class. End of explanation """
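For reference, the core loop of Kruskal's algorithm can also be sketched independently of the course library. This is a minimal, self-contained version: a plain union-find with path compression stands in for the Forest class above, and the 4-node graph and its costs are made up for illustration (it is not the 5-kruskal.gml graph):

```python
def kruskal(nodes, edges):
    """Return a minimum spanning tree as a set of edges.

    edges is a list of (cost, n, m) tuples.
    """
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    tree = set()
    for cost, n, m in sorted(edges):       # non-decreasing order of cost
        rn, rm = find(n), find(m)
        if rn != rm:                       # adding (n, m) forms no circuit
            parent[rn] = rm
            tree.add((n, m))
    return tree

# Hypothetical example graph
example_edges = [(1, 'a', 'b'), (4, 'b', 'c'), (3, 'a', 'c'), (2, 'c', 'd')]
mst = kruskal('abcd', example_edges)
```

On this example the algorithm picks the edges of cost 1, 2, and 3, and rejects the cost-4 edge because it would close a circuit.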
nicoguaro/notebooks_examples
Manufactured solutions.ipynb
mit
from __future__ import division from sympy import * x, y, z, t = symbols('x y z t') f, g, h = symbols('f g h', cls=Function) init_printing() L = symbols('L') a1, a2, a3, b1, b2, b3, c1, c2, c3 = symbols('a1 a2 a3 b1 b2 b3 c1 c2 c3') u0, ux, uy, uz, v0, vx, vy, vz, w0, wx, wy, wz = symbols('u0 u_x u_y u_z\ v0 v_x v_y v_z\ w0 w_x w_y w_z') lamda, mu, rho = symbols('lamda mu rho') u = u0 + ux*sin(a1*pi*x/L) + uy*sin(a2*pi*y/L) + uz*sin(a3*pi*z/L) v = v0 + vx*sin(b1*pi*x/L) + vy*sin(b2*pi*y/L) + vz*sin(b3*pi*z/L) w = w0 + wx*sin(c1*pi*x/L) + wy*sin(c2*pi*y/L) + wz*sin(c3*pi*z/L) def laplacian(U, X=[x, y, z]): lap_U = Matrix([sum([diff(U[k], X[j], 2) for j in range(3)]) for k in range(3)]) return lap_U def div(U, X=[x, y, z]): return sum([diff(U[k], X[k]) for k in range(3)]) def grad(f, X=[x, y, z]): return Matrix([diff(f, X[k]) for k in range(3)]) def rot(U, X=[x, y, z]): return Matrix([diff(U[2], X[1]) - diff(U[1], X[2]), diff(U[0], X[2]) - diff(U[2], X[0]), diff(U[1], X[0]) - diff(U[0], X[1])]) def navier(U, X=[x,y,z], lamda=lamda, mu=mu): return mu*laplacian(U, X) + (lamda + mu)*grad(div(U, X), X) def dt(U, t=t): return Matrix([diff(U[k], t) for k in range(3)]) U = [u,v,w] F = rho*dt(U) - navier(U) simplify(F) """ Explanation: Manufactured solutions for Elasticity This is a notebook to play with manufactured solutions for Elasticity. 
End of explanation """ L = symbols('L') h0, hx, hy = symbols('h0 hx hy') lamda, mu, rho = symbols('lamda mu rho') alpha, mu_c, xi, gamma, J = symbols('alpha mu_c xi gamma J') u = u0 + ux*sin(a1*pi*x/L) + uy*sin(a2*pi*y/L) v = v0 + vx*sin(b1*pi*x/L) + vy*sin(b2*pi*y/L) w = 0 h1 = 0 h2 = 0 h3 = h0 + hx*sin(c1*pi*x/L) + hy*sin(c2*pi*y/L) def coss(U, H, X=[x,y,z], lamda=lamda, mu=mu, mu_c=mu_c, xi=xi, gamma=gamma): f_u = (lamda + 2*mu)*grad(div(U)) - (mu + mu_c)*rot(rot(U)) + 2*mu_c*rot(H) f_h = (alpha + 2*gamma + 2*xi)*grad(div(H)) - 2*xi*rot(rot(H)) + 2*mu_c*rot(U) - 4*mu_c*Matrix(H) return f_u, f_h U = [u, v, w] H = [h1, h2, h3] f_u, f_h = coss(U, H) simplify(f_u) simplify(f_h) """ Explanation: Cosserat solid Let's start with the 2D case End of explanation """ from IPython.core.display import HTML def css_styling(): styles = open('./styles/custom_barba.css', 'r').read() return HTML(styles) css_styling() """ Explanation: References Malaya, Nicholas, et al. "MASA: a library for verification using manufactured and analytical solutions." Engineering with Computers 29.4 (2013): 487-496. End of explanation """
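A useful sanity check on operators like the ones defined above is to verify the vector identity $\nabla \times (\nabla \times \mathbf{U}) = \nabla(\nabla \cdot \mathbf{U}) - \nabla^2 \mathbf{U}$ symbolically. The sketch below assumes SymPy is available; the helper definitions mirror, but are independent of, the ones in this notebook:

```python
from sympy import symbols, diff, Matrix, simplify, sin, cos

x, y, z = symbols('x y z')
X = [x, y, z]

def grad(f):
    return Matrix([diff(f, v) for v in X])

def div(U):
    return sum(diff(U[k], X[k]) for k in range(3))

def rot(U):
    return Matrix([diff(U[2], y) - diff(U[1], z),
                   diff(U[0], z) - diff(U[2], x),
                   diff(U[1], x) - diff(U[0], y)])

def laplacian(U):
    return Matrix([sum(diff(U[k], v, 2) for v in X) for k in range(3)])

# An arbitrary smooth test field; the identity must vanish for any choice
U = Matrix([sin(y) * z, x * cos(z), x * y * z])
identity = simplify(rot(rot(U)) - (grad(div(U)) - laplacian(U)))
```

If the operators are implemented consistently, identity simplifies to the zero vector, which is also why the Navier operator can be written either with the curl-curl form or with the Laplacian form.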
metpy/MetPy
v0.10/_downloads/e62e0f98c4e8c49126bfa0b8b589a902/Parse_Angles.ipynb
bsd-3-clause
import metpy.calc as mpcalc """ Explanation: Parse angles Demonstrate how to convert direction strings to angles. The code below shows how to parse directional text into angles. It also demonstrates the function's flexibility in handling various string formats. End of explanation """ dir_str = 'SOUTH SOUTH EAST' print(dir_str) """ Explanation: Create a test value of directional text End of explanation """ angle_deg = mpcalc.parse_angle(dir_str) print(angle_deg) """ Explanation: Now throw that string into the function to calculate the corresponding angle End of explanation """ dir_str_list = ['ne', 'NE', 'NORTHEAST', 'NORTH_EAST', 'NORTH east'] angle_deg_list = mpcalc.parse_angle(dir_str_list) print(angle_deg_list) """ Explanation: The function can also handle arrays of strings in many different abbreviations and capitalizations End of explanation """
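Under the hood, parsing a compass point boils down to resolving each extra letter against the rest of the string — 'SSE' is halfway between 'S' and 'SE'. The following is a rough, self-contained sketch of that idea (not MetPy's actual implementation, and it handles letter abbreviations only, not full words like 'SOUTH SOUTH EAST'):

```python
import cmath
import math

BASE = {'N': 0.0, 'E': 90.0, 'S': 180.0, 'W': 270.0}

def compass_to_degrees(abbrev):
    """Rough stand-in for parse_angle: letter abbreviation -> degrees."""
    def angle(ls):
        if len(ls) == 1:
            return BASE[ls[0]]
        # circular mean of the first letter and the remainder,
        # e.g. SSE = mean(S, SE); vector averaging handles the N/W wrap
        a, b = angle(ls[:1]), angle(ls[1:])
        vec = cmath.exp(1j * math.radians(a)) + cmath.exp(1j * math.radians(b))
        return math.degrees(cmath.phase(vec)) % 360
    return angle(list(abbrev.upper()))
```

For example, compass_to_degrees('SSE') gives approximately 157.5 and compass_to_degrees('NW') gives 315, matching the convention of degrees clockwise from north that parse_angle uses.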
zrhans/python
exemplos/dapp-bc/Leitura14.ipynb
gpl-2.0
import sys import numpy as np print(sys.version) # Python version - optional print(np.__version__) # numpy module version - optional # Create a standard vector with 25 values npa = np.arange(25) npa # Turn the vector npa into a multidimensional array using the reshape method npa.reshape(5,5) # We can create a multidimensional array of zeros using the zeros method np.zeros([5,5]) # Some useful functions npa2 = np.zeros([5,5]) # Size (number of elements) npa2.size # Shape (number of rows and columns) npa2.shape # Dimensions npa2.ndim # Arrays can be created with as many dimensions as you like # E.g. an array with 3 dimensions (2 elements in each) np.arange(8).reshape(2,2,2) # 4 dimensions with 4 elements in each (zeros as elements) # np.zeros([4,4,4,4]) # 4 dimensions with 2 elements in each np.arange(16).reshape(2,2,2,2) """ Explanation: Reading 14 - Multidimensional Arrays (Matrices) By Hans. Original: Bill Chambers End of explanation """ # Set a seed for generating random numbers np.random.seed(10) # Generate a list of 25 random integers between 1 and 10 np.random.random_integers(1,10,25) # Note that we always get the same random sequence for the same seed # otherwise, if we do not reset the seed, a different sequence is produced every time np.random.seed(10) np.random.random_integers(1,10,25) np.random.seed(10) npa2 = np.random.random_integers(1,10,25).reshape(5,5) npa2 npa3 = np.random.random_integers(1,10,25).reshape(5,5) npa3 # Applying operations # comparisons npa2 > npa3 # Count how many values satisfy npa2 > npa3 (in Python, True is treated as 1) (npa2 > npa3).sum() # We can apply this sum by column [first dimension] (axis=0) (npa2 > npa3).sum(axis=0) # We can apply this sum by row [second dimension] (axis=1) (npa2 > npa3).sum(axis=1) npa2 # OPERATIONS VALID FOR BOTH THE MAXIMUM AND THE MINIMUM # Find the maximum value over the whole matrix npa2.max() # Find the maximum value by column npa2.max(axis=0) # Find the maximum value by row npa2.max(axis=1) # Transpose the matrix using the transpose method npa2.transpose() # Transpose the matrix using the T property npa2.T # Multiply this transpose by itself npa2.T * npa2.T """ Explanation: One of the biggest advantages of vectorization is the possibility of applying countless operations directly to each element of the object. Such operations include arithmetic, logic, the application of specific functions, etc. End of explanation """ # Flatten the matrix npa2 using the flatten method ("flattening") npa2.flatten() # Flatten the matrix npa2 using the ravel method ("raveling") #npa2.flatten() r = npa2.ravel() r npa2 # Flatten and try to change the first element to 25 npa2.flatten()[0] = 25 npa2 # nothing should happen; the multidimensional array should remain unchanged # Change the first element of the "raveled" array r[0] = 25 # This should change the value in the original array npa2 # Show the values of the array npa, to compare with the next functions npa # Cumulative sum of the array elements: cumsum npa.cumsum() # Cumulative product of the array elements: cumprod npa.cumprod() # This results in zeros because the first element is zero """ Explanation: Methods: flatten and ravel These two methods are quite useful when working with multidimensional arrays. * The flatten method flattens the multidimensional array into a copy, so the original stays unchanged. * The ravel method flattens it into a view, so changes propagate back to the original. The advantage of ravel is clear when we want to modify some value of the array. End of explanation """
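The copy-versus-view distinction demonstrated with npa2 above can be shown in a few lines on a fresh toy array:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
flat = a.flatten()   # always returns a copy
rav = a.ravel()      # returns a view when possible

flat[0] = 99         # the original array is untouched
rav[1] = 77          # the original array changes
```

After running this, a[0, 0] is still 0 but a[0, 1] has become 77 — exactly the behaviour seen with npa2.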
EoinTravers/QuickstartMousetracking
results/SqueakIntro.ipynb
gpl-2.0
# For reading data files import os import glob import numpy as np # Numeric calculation import pandas as pd # General purpose data analysis library import squeak # For mouse data # For plotting import matplotlib.pyplot as plt %matplotlib inline # Prettier default settings for plots (optional) import seaborn seaborn.set_style('darkgrid') from pylab import rcParams rcParams['figure.figsize'] = 8, 5 """ Explanation: Update: I've moved the code from this post, along with resources for designing mouse tracking experiments, and some example data, to a GitHub repository. The best way to learn how to use squeak is to play around with this repository, which also includes the content of this post. A while ago, I gathered up the python code I've been using to process mouse trajectory data into a package and gave it the jaunty title squeak. However, as this was mostly for my own use, I never got around to properly documenting it. Recently, a few people have asked me for advice on analysing mouse data not collected using MouseTracker - for instance, data generated using my OpenSesame implementation. In response, I've gone through a full example for this post, and written a script that should be able to preprocess any data collected using my OpenSesame implementation. To use any of this, you'll need to have the python language installed, along with some specific scientific packages, and of course squeak itself, which is available using the pip command: pip install squeak In this post, I go through the code bit by bit, explaining what specifically is going on. If you're not used to using python, you don't have to worry too much about understanding all of the syntax, although python is relatively easy to read as if it were plain English. The full, downloadable script is included at the bottom of the page.
Data Processing End of explanation """ results = [] for datafile in glob.glob('data/*.csv'): this_data = pd.read_csv(datafile) results.append(this_data) data = pd.concat(results) """ Explanation: First, we need to load our data. I'll show how to do this using the .csv files saved by OpenSesame, as this gives us a chance to see how you can use squeak to handle trajectory data that's been saved in this exchangeable format. We can combine all of our files into a single data structure by reading them one at a time, using pd.read_csv, storing them in a list, and then merging this list using pd.concat. End of explanation """ data = pd.concat( [pd.DataFrame(pd.read_csv(datafile)) for datafile in glob.glob('data/*.csv')]) """ Explanation: A faster and more concise alternative, using python's list comprehension abilities, would look like this instead: End of explanation """ print data.head() """ Explanation: Either way, we end up with data in the form shown below. End of explanation """ data['t'] = data.tTrajectory.map(squeak.list_from_string) data['x'] = data.xTrajectory.map(squeak.list_from_string) data['y'] = data.yTrajectory.map(squeak.list_from_string) """ Explanation: As you can see, there's one row per trial, and each of the coding variables we recorded in OpenSesame occupies a single column. The trajectory data, though, is stored in three columns, "tTrajectory", "xTrajectory", and "yTrajectory", corresponding to time elapsed, x-axis position, and y-axis position, respectively. Each cell here actually contains a string representation of the list of values in each case, in the form "[time1, time2, time3, ..., timeN]" We can parse these using squeak's list_from_string function. End of explanation """ for i in range(len(data)): x = data.x.iloc[i] y = data.y.iloc[i] plt.plot(x, y, color='blue', alpha=.5) # alpha controls the transparency plt.show() """ Explanation: At this stage, we have our data in a format python can understand, and it looks like this. End of explanation """ data['y'] = data.y * -1 # Reverse y axis data['x'] = data.x.map(squeak.remap_right) # Flip the leftward responses data['x'] = data.x.map(squeak.normalize_space) data['y'] = data.y.map(squeak.normalize_space) * 1.5 for i in range(len(data)): x = data.x.iloc[i] y = data.y.iloc[i] plt.plot(x, y, color='blue', alpha=.5) plt.text(0, 0, 'START', horizontalalignment='center') plt.text(1, 1.5, 'END', horizontalalignment='center') plt.show() """ Explanation: We still need to do some preprocessing of the trajectories - OpenSesame logs y-axis coordinates upside down from what we would want, and more importantly, it's conventional to standardise trajectories so they start at [0,0] and end at [1,1.5], and to flip the trials where the left hand side response was chosen the other way around for comparison. Let's do that now. End of explanation """ for i in range(len(data)): x = data.x.iloc[i] t = data.t.iloc[i] plt.plot(t, x, color='blue', alpha=.3) plt.xlabel('Time (msec)') plt.ylabel('x axis position') plt.show() """ Explanation: Our next problem is that all of our trials last for different amounts of time. End of explanation """
End of explanation """ data['y'] = data.y * -1 # Reverse y axis data['x'] = data.x.map(squeak.remap_right) # Flip the leftward responses data['x'] = data.x.map(squeak.normalize_space) data['y'] = data.y.map(squeak.normalize_space) * 1.5 for i in range(len(data)): x = data.x.iloc[i] y = data.y.iloc[i] plt.plot(x, y, color='blue', alpha=.5) plt.text(0, 0, 'START', horizontalalignment='center') plt.text(1, 1.5, 'END', horizontalalignment='center') plt.show() """ Explanation: We still need to do some preprocessing of the trajectories - OpenSesame logs y-axis coordinates upside down from what we would want, and more importantly, it's conventional to standardise trajectories so they start at [0,0] and end at [1,1.5], and to flip the trials where the left hand side response was chosen the other way around for comparison. Let's do that now. End of explanation """ for i in range(len(data)): x = data.x.iloc[i] t = data.t.iloc[i] plt.plot(t, x, color='blue', alpha=.3) plt.xlabel('Time (msec)') plt.ylabel('x axis position') plt.show() """ Explanation: Our next problem is that all of our trials last for different amounts of time. End of explanation """ data['nx'], data['ny'] = zip(*[squeak.even_time_steps(x, y, t) for x, y, t in zip(data.x, data.y, data.t)]) for i, x in data.nx.iteritems(): plt.plot(x, color='blue', alpha=.3) plt.xlabel('Normalized time step') plt.ylabel('x axis position') plt.show() """ Explanation: We can deal with this in one of two ways, both of which I'll demonstrate. Most analyses standardize the trajectories into 101 time slices, for comparison, meaning that for every trajectory, sample 50 is halfway through, regardless of how long that actually takes. (the code looks a little intimidating, and future versions of squeak should include a more concise way of doing this. You don't need to worry too much about what's happening here). 
End of explanation """ max_time = 5000 # Alternatively, max_time = data.rt.max() data['rx'] = [squeak.uniform_time(x, t, max_duration=max_time) for x, t in zip(data.x, data.t)] data['ry'] = [squeak.uniform_time(y, t, max_duration=max_time) for y, t in zip(data.y, data.t)] for i in range(len(data)): x = data.rx.iloc[i] plt.plot(x.index, x, color='blue', alpha=.3) plt.xlabel('Time (msec)') plt.ylabel('x axis position') plt.show() """ Explanation: An alternative approach is to keep the actual timestamp for each sample, so you can analyse the development of the trajectories in real time. To do this, you need to "extend" the data for all of the trials so that they all last for the same amount of time. In this example, we'll extend every trial to 5 seconds (5000 milliseconds). This can be done by treating all of the time after the participant has clicked on their response as if they instead just kept the cursor right on top of the response until they reach 5 seconds. Again, you can copy this code literally, so don't worry about the details of the syntax here. End of explanation """ # Mouse Stats data['md'] = data.apply(lambda trial: squeak.max_deviation(trial['nx'], trial['ny']), axis=1) data['auc'] = data.apply(lambda trial: squeak.auc(trial['nx'], trial['ny']), axis=1) data['xflips'] = data.nx.map(squeak.count_x_flips) data['init_time'] = data.ry.map(lambda y: y.index[np.where(y > .05)][0]) # Taking a look at condition means print data.groupby('condition')['md', 'auc', 'xflips', 'init_time', 'rt'].mean() """ Explanation: With all of this done, you're ready to calculate the statistics you'll be using in your analyses. Again, don't worry too much about the syntax here. The most popular measures, calculated here, are: Maximum Deviation (MD): The size of the largest distance achieved between the actual trajectory and what it would have looked like if it was perfectly straight. 
Area Under the Curve (AUC): The area bounded between the trajectory and the ideal straight line path X-flips: changes of direction on the x axis Initiation time: The time taken for the participant to start moving the cursor. End of explanation """ nx = pd.concat(list(data.nx), axis=1).T ny = pd.concat(list(data.ny), axis=1).T rx = pd.concat(list(data.rx), axis=1).T ry = pd.concat(list(data.ry), axis=1).T """ Explanation: Finally, we'll save our processed data. First, we split off our processed mouse trajectory columns into separate data structures, which I'll explore a little more below. The normalized time data are labelled nx and ny, and are formatted so that each row corresponds to a single trial, and each column is a time point, from 0 to 100. The real time data, rx and ry, are structured analogously, with each column corresponding to a timestamp. By default, these are broken up into 20 msec intervals, and the column headings (20, 40, 60, etc) reflect the actual timestamp. End of explanation """ redundant = ['xTrajectory', 'yTrajectory', 'tTrajectory', 'x', 'y', 't', 'nx', 'ny', 'rx', 'ry'] data = data.drop(redundant, axis=1) data.head() # Save data data.to_csv('processed.csv', index=False) nx.to_csv('nx.csv', index=False) ny.to_csv('ny.csv', index=False) rx.to_csv('rx.csv', index=False) ry.to_csv('ry.csv', index=False) """ Explanation: With that done, we can delete this information from our main data frame, so that it's compact enough to use easily in your data analysis package of choice, before finally saving everything as csv files. End of explanation """
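As an aside, the maximum deviation measure used above is easy to sketch in plain Python: it is the largest perpendicular distance between the trajectory and the straight line from the start point (0, 0) to the end point (1, 1.5). This is an illustration of the idea only, not squeak's actual implementation:

```python
import math

def max_deviation_sketch(xs, ys, end=(1.0, 1.5)):
    # Largest perpendicular distance between the trajectory points and the
    # straight line joining the start (0, 0) to the end point.
    ex, ey = end
    length = math.hypot(ex, ey)
    return max(abs(ex * y - ey * x) / length for x, y in zip(xs, ys))

# A perfectly straight trajectory deviates by (almost) nothing:
print(max_deviation_sketch([0.0, 0.5, 1.0], [0.0, 0.75, 1.5]))
```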
geoneill12/phys202-2015-work
assignments/assignment10/ODEsEx03.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns from scipy.integrate import odeint from IPython.html.widgets import interact, fixed """ Explanation: Ordinary Differential Equations Exercise 3 Imports End of explanation """ g = 9.81 # m/s^2 l = 0.5 # length of pendulum, in meters tmax = 50. # seconds t = np.linspace(0, tmax, int(100*tmax)) """ Explanation: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta $$ When a damping and periodic driving force are added the resulting system has much richer and more interesting dynamics: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t) $$ In this equation: $a$ governs the strength of the damping. $b$ governs the strength of the driving force. $\omega_0$ is the angular frequency of the driving force. When $a=0$ and $b=0$, the energy/mass is conserved: $$E/m = g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$ Basic setup Here are the basic parameters we are going to use for this exercise: End of explanation """ def derivs(y, t, a, b, omega0): """Compute the derivatives of the damped, driven pendulum. Parameters ---------- y : ndarray The solution vector at the current time t[i]: [theta[i],omega[i]]. t : float The current time t[i]. a, b, omega0: float The parameters in the differential equation. Returns ------- dy : ndarray The vector of derivatives at t[i]: [dtheta[i],domega[i]]. """ # YOUR CODE HERE raise NotImplementedError() assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.]) def energy(y): """Compute the energy for the state array y. The state array y can have two forms: 1. It could be an ndim=1 array of np.array([theta,omega]) at a single time. 2. It could be an ndim=2 array where each row is the [theta,omega] at a single time. 
Parameters ---------- y : ndarray, list, tuple A solution vector Returns ------- E/m : float (ndim=1) or ndarray (ndim=2) The energy per mass. """ # YOUR CODE HERE raise NotImplementedError() assert np.allclose(energy(np.array([np.pi,0])),g) assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1]))) """ Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. End of explanation """ # YOUR CODE HERE raise NotImplementedError() # YOUR CODE HERE raise NotImplementedError() # YOUR CODE HERE raise NotImplementedError() assert True # leave this to grade the two plots and their tuning of atol, rtol. """ Explanation: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. 
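If you get stuck on the right-hand side itself, here is a plain-Python sketch of the two derivatives from the equations above (illustrative only — your graded derivs should accept and return numpy arrays and use the notebook's g and l):

```python
import math

def pendulum_rhs(theta, omega, t, a, b, omega0, g=9.81, l=0.5):
    # dtheta/dt = omega
    # domega/dt = -(g/l) * sin(theta) - a*omega - b*sin(omega0*t)
    dtheta = omega
    domega = -(g / l) * math.sin(theta) - a * omega - b * math.sin(omega0 * t)
    return dtheta, domega
```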
End of explanation """ def plot_pendulum(a=0.0, b=0.0, omega0=0.0): """Integrate the damped, driven pendulum and make a phase plot of the solution.""" # YOUR CODE HERE raise NotImplementedError() """ Explanation: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even further and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$. Label your axes and customize your plot to make it beautiful and effective. End of explanation """ plot_pendulum(0.5, 0.0, 0.0) """ Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. End of explanation """ # YOUR CODE HERE raise NotImplementedError() """ Explanation: Use interact to explore the plot_pendulum function with: a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$. b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. End of explanation """
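For reference, the conserved quantity itself is cheap to check by hand; a scalar sketch of E/m from the formula above (your notebook version should be vectorised with numpy) is:

```python
import math

def energy_per_mass(theta, omega, g=9.81, l=0.5):
    # E/m = g*l*(1 - cos(theta)) + (1/2) * l**2 * omega**2
    return g * l * (1.0 - math.cos(theta)) + 0.5 * l ** 2 * omega ** 2

# Pointing straight up at rest: E/m = 2*g*l, which equals g for l = 0.5.
print(energy_per_mass(math.pi, 0.0))
```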
pagutierrez/tutorial-sklearn
notebooks-spanish/11-extraccion_caracteristicas_texto.ipynb
cc0-1.0
X = ["Algunos dicen que el mundo terminará siendo fuego,", "Otros dicen que terminará siendo hielo."] len(X) from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() vectorizer.fit(X) vectorizer.vocabulary_ X_bag_of_words = vectorizer.transform(X) X_bag_of_words.shape X_bag_of_words X_bag_of_words.toarray() vectorizer.get_feature_names() # Back to the original text (we lose the word order and the capitalization) vectorizer.inverse_transform(X_bag_of_words) """ Explanation: Feature extraction from text using Bag-of-Words In many tasks, such as spam detection, the input data are text strings. Free text of variable length is very far from what we need in order to do machine learning with scikit-learn (fixed-size numeric representations). However, there is an easy and effective way to transform textual data into a numeric representation, using what is known as bag-of-words, which provides a data structure that is compatible with scikit-learn's machine learning algorithms. <img src="figures/bag_of_words.svg" width="100%"> We will assume that each text in the dataset is a string, which can be a sentence, an email, a book or a complete news article. To represent a sample, we first split the string into a set of tokens, which correspond to (somehow normalized) words. A simple way to do this is to split the sentence on whitespace and then lowercase every word. We then build a vocabulary from all the tokens (lowercased words) found in the complete dataset. This usually results in a very large vocabulary. Now we would check whether each word of the vocabulary appears in our sample or not. 
We represent each sample (string) with a vector, where each entry tells us how many times a word of the vocabulary appears in the sample (in the simplest version a binary value: 1 if it appears at least once, 0 otherwise). Since each example will only contain a few words and the vocabulary is usually very large, most entries are zeros, which leads to a high-dimensional but sparse representation. This method is called bag-of-words because the order of the words is lost completely (we only know which ones appear). End of explanation """ from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer() tfidf_vectorizer.fit(X) import numpy as np np.set_printoptions(precision=2) print(tfidf_vectorizer.transform(X).toarray()) """ Explanation: tf-idf encoding A rather useful transformation that is often applied to the bag-of-words encoding is the term-frequency inverse-document-frequency (tf-idf) scaling, which takes into account how frequently each term occurs across the collection of documents. It is a non-linear transformation of the word counts. It is a numeric measure that expresses how relevant a word is to a document within a collection. This measure is often used as a weighting factor in information retrieval and text mining. The tf-idf value increases proportionally to the number of times a word appears in the document, but it is offset by the frequency of the word in the overall collection of documents, which makes it possible to handle the fact that some words are generally more common than others. 
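For intuition, the textbook formula can be computed by hand on a toy corpus. Note this is a sketch of the classic definition only — scikit-learn's TfidfVectorizer uses a smoothed idf and normalises each row, so its numbers differ:

```python
import math

def tfidf(term, doc, docs):
    # tf: how often the term occurs in this document (a list of tokens).
    tf = doc.count(term)
    # df: in how many documents of the collection the term occurs.
    df = sum(1 for d in docs if term in d)
    # Classic tf-idf: tf * log(N / df).
    return tf * math.log(len(docs) / float(df))

docs = [["algunos", "dicen", "hielo"], ["otros", "dicen", "fuego"]]
print(tfidf("dicen", docs[0], docs))  # appears in every document -> weight 0.0
print(tfidf("hielo", docs[0], docs))  # distinctive word -> positive weight
```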
The tf-idf encoding rescales words that are common so that they carry less weight: End of explanation """ # Use token sequences of minimum length 2 and maximum length 2 bigram_vectorizer = CountVectorizer(ngram_range=(2, 2)) bigram_vectorizer.fit(X) bigram_vectorizer.get_feature_names() bigram_vectorizer.transform(X).toarray() """ Explanation: tf-idfs are a way of representing documents as feature vectors. They can be understood as a modification of the raw term frequency (tf): tf gives us an idea of how many times a term appears in the document (or sample). The idea of tf-idf is to down-weight terms proportionally to the number of documents in which they appear. Thus, if a term appears in many documents, it may in principle be unimportant, or at least not contribute much information for natural language processing tasks (for example, the word "que" is very common and does not allow us to make any useful discrimination). This external IPython notebook provides much more information on the equations and the computation of the tf-idf representation. Bigrams and n-grams In the figure example at the beginning of this notebook, we used tokenization based on 1-grams (unigrams): each token represents a single element with respect to the splitting criterion. It may not always be a good idea to discard the word order completely, since compound phrases often have specific meanings and some modifiers (such as the word "no") can invert the meaning of a word. A simple way to include this order is to use n-grams, which look not at a single token, but at all pairs of neighbouring tokens. For example, with tokenization based on 2-grams (bigrams), we would group together words with an overlap of one word; with 3-grams (trigrams), we would work with an overlap of two words... 
Original text: "Así es como consigues hormigas" 1-grams: "así", "es", "como", "consigues", "hormigas" 2-grams: "así es", "es como", "como consigues", "consigues hormigas" 3-grams: "así es como", "es como consigues", "como consigues hormigas" The value of $n$ for the n-grams that yields the optimal performance for our predictive model depends entirely on the learning algorithm, the dataset and the task. Or, in other words, we have to consider $n$ as a tuning parameter (in later notebooks we will see how to deal with these tuning parameters). Now let's create a bag-of-words model of bigrams using scikit-learn's CountVectorizer class: End of explanation """ gram_vectorizer = CountVectorizer(ngram_range=(1, 2)) gram_vectorizer.fit(X) gram_vectorizer.get_feature_names() transformada = gram_vectorizer.transform(X).toarray() """ Explanation: It is common to want to include both unigrams (individual tokens) and bigrams, which we can do by passing the following tuple as the argument to the ngram_range parameter of the CountVectorizer constructor: End of explanation """ X char_vectorizer = CountVectorizer(ngram_range=(2, 2), analyzer="char") char_vectorizer.fit(X) print(char_vectorizer.get_feature_names()) """ Explanation: Character n-grams Sometimes it is also interesting to analyse individual characters, in addition to words. This is particularly useful if we have very noisy data and want to identify the language, or if we want to predict something about a single word. To analyse characters instead of words we use the parameter analyzer="char". Analysing isolated characters does not usually provide much information, but considering longer character n-grams can indeed be useful: End of explanation """ zen = """Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. 
Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!""" """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Extract the n-grams of the "zen of python" shown below (you can display it by typing ``import this``), and find the most common n-gram. Consider values of $n\in\{2,3,4\}$. We want to treat each line as an individual document; you can achieve this by splitting the text on the newline character (``\n``). Obtain the tf-idf encoding of the data. Which words have the highest tf-idf score? Why? What changes if you use ``TfidfVectorizer(norm="none")``? </li> </ul> </div> End of explanation """
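As a starting point for the exercise, word n-grams can be extracted by hand like this sketch (CountVectorizer does the same job internally, with its own tokenisation rules that also strip punctuation):

```python
from collections import Counter

def word_ngrams(line, n):
    # All contiguous n-word sequences in a line, lowercased.
    words = line.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

lines = ["Beautiful is better than ugly.", "Explicit is better than implicit."]
counts = Counter(ng for line in lines for ng in word_ngrams(line, 2))
print(counts.most_common(2))
```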
vinecopulib/pyvinecopulib
examples/vine_copulas.ipynb
mit
import pyvinecopulib as pv import numpy as np # Specify pair-copulas bicop = pv.Bicop(pv.BicopFamily.bb1, 90, [3, 2]) pcs = [[bicop, bicop], [bicop]] # Specify R-vine matrix mat = np.array([[1, 1, 1], [2, 2, 0], [3, 0, 0]]) # Set-up a vine copula cop = pv.Vinecop(mat, pcs) print(cop) """ Explanation: Import the library End of explanation """ u = cop.simulate(n=10, seeds=[1, 2, 3]) fcts = [cop.pdf, cop.rosenblatt, cop.inverse_rosenblatt, cop.loglik, cop.aic, cop.bic] [f(u) for f in fcts] """ Explanation: Showcase some methods End of explanation """ u = cop.simulate(n=1000, seeds=[1, 2, 3]) # Define first an object to control the fits: # - pv.FitControlsVinecop objects store the controls # - here, we only restrict the parametric family # - see help(pv.FitControlsVinecop) for more details controls = pv.FitControlsVinecop(family_set=[pv.BicopFamily.bb1]) print(controls) # Create a new object and select family and parameters by fitting to data cop2 = pv.Vinecop(mat, pcs) cop2.select(data=u, controls=controls) print(cop2) # Otherwise, create directly from data cop2 = pv.Vinecop(data=u, matrix=mat, controls=controls) print(cop2) """ Explanation: Create a vine copula model Different ways to fit a copula (when the families and structure are known)... End of explanation """ # Create a new object and select structure, family, and parameters cop3 = pv.Vinecop(d=3) cop3.select(data=u) print(cop3) # Otherwise, create directly from data cop3 = pv.Vinecop(data=u) print(cop3) # create a C-vine structure with root node 1 in first tree, 2 in second, ... 
cvine = pv.CVineStructure([4, 3, 2, 1]) # specify pair-copulas in every tree tree1 = [pv.Bicop(pv.BicopFamily.gaussian, 0, [0.5]), pv.Bicop(pv.BicopFamily.clayton, 0, [3]), pv.Bicop(pv.BicopFamily.student, 0, [0.4, 4])] tree2 = [pv.Bicop(pv.BicopFamily.indep), pv.Bicop(pv.BicopFamily.gaussian, 0, [0.5])] tree3 = [pv.Bicop(pv.BicopFamily.gaussian)] # instantiate C-vine copula model cop = pv.Vinecop(cvine, [tree1, tree2, tree3]) print(cop) """ Explanation: When nothing is known, there are also two ways to fit a copula... End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/sandbox-3/seaice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: NCC Source ID: SANDBOX-3 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:25 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. 
Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. 
Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. 
Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specified for the following parameters if used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional parameterised values that you have used (e.g. 
minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involve flux correction? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5.
Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontally discretised? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4.
Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. 
Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (i.e. there is no explicit ITD), but where a distribution is assumed and fluxes are computed accordingly. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. 
Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. 
Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. 
Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. 
Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. 
Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
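All of the cells above follow one pattern: `DOC.set_id(...)` selects a property, then one or more `DOC.set_value(...)` calls fill it in (several calls for a property whose cardinality is 1.N or 0.N). As a self-contained illustration, here is what a completed ENUM cell and a completed BOOLEAN cell might look like. Note that `MiniDoc` is a made-up stand-in recorder, not the real ES-DOC `DOC` object created earlier in the full notebook (which presumably also validates ids and values against the CMIP6 seaice specialization).

```python
# MiniDoc is a hypothetical stand-in for the ES-DOC notebook's DOC object;
# it only records the (id, values) pairs, with no validation.
class MiniDoc:
    def __init__(self):
        self.records = {}
        self._current = None

    def set_id(self, property_id):
        self._current = property_id

    def set_value(self, value):
        self.records.setdefault(self._current, []).append(value)

DOC = MiniDoc()

# A completed ENUM cell with cardinality 1.N (property 8.2 above):
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
DOC.set_value("Energy")
DOC.set_value("Mass")

# A completed BOOLEAN cell (property 8.4 above):
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
DOC.set_value(False)

print(DOC.records)
```

For ENUM properties the strings passed to `set_value` must come from the listed Valid Choices, exactly as written in the template comments.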
repo_name: bmorris3/gsoc2015
path: constraints-demo.ipynb
license: mit
############################################################## # Import my dev version of astroplan: import os; astroplan_dev = os.environ['GITASTROPLANPATH'] import sys; sys.path.insert(0, astroplan_dev) ############################################################## from astroplan import Observer, FixedTarget from astroplan.constraints import (is_observable, is_always_observable, AltitudeConstraint, AirmassConstraint, AtNight) from astropy.time import Time import astropy.units as u # Are these targets visible on Aug 1 from 06:00-12:00 UTC, at Subaru? subaru = Observer.at_site("Subaru") time_range = Time(["2015-08-01 06:00", "2015-08-01 12:00"]) target_names = ["Polaris", "Vega", "Albireo", "Algol", "Rigel", "Regulus"] targets = [FixedTarget.from_name(name) for name in target_names] constraint_list = [AltitudeConstraint(10*u.deg, 80*u.deg), AirmassConstraint(5), AtNight.twilight_civil()] # Are targets *ever* observable in the time range? ever = is_observable(constraint_list, time_range, targets, subaru) # Are targets *always* observable in the time range?
always = is_always_observable(constraint_list, time_range, targets, subaru) """ Explanation: Constraints Demo: Constraints to apply: 10 deg < target altitude < 80 deg target airmass < 5 can be observed between civil twilights (solar altitude < -6 deg) Targets: "Polaris", "Vega", "Albireo", "Algol", "Rigel", "Regulus" Observer: Subaru on the night of 2015-08-01 UTC End of explanation """ observability_report = [] justify = 15 def prep_line(*data): # For printing data in neat columns return ''.join([str(i).ljust(justify) for i in data]) headers = ['Star', 'ever obs-able', 'always obs-able'] observability_report.append(prep_line(*headers)) observability_report.append(prep_line(*['-'*len(header) for header in headers])) for target, e, a in zip(targets, ever, always): observability_report.append(prep_line(target.name, e, a)) print("\n".join(observability_report)) """ Explanation: Print results in table: End of explanation """ %matplotlib inline from astroplan.plots import plot_sky import matplotlib.pyplot as plt import matplotlib.cm as cm cmap = cm.Set1 from astroplan.constraints import time_grid_from_range time_grid = time_grid_from_range(time_range) [plot_sky(target, subaru, time_grid, style_kwargs={'color':cmap(float(i)/len(targets)), 'label':target.name}) for i, target in enumerate(targets)]; legend = plt.gca().legend(loc='lower center', fontsize=10) legend.get_frame().set_facecolor('w') """ Explanation: Sanity check with sky plots End of explanation """
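Stripped of the astronomy dependencies, the report-building step above reduces to fixed-width column formatting over the boolean results. Here is a standalone sketch with mock observability flags (the booleans and the shortened target list are invented for illustration; the real values come from `is_observable` and `is_always_observable`):

```python
justify = 15

def prep_line(*data):
    # Left-justify every field into a fixed-width column
    return ''.join(str(i).ljust(justify) for i in data)

# Mock flags for illustration only.
names = ["Polaris", "Vega", "Albireo"]
ever = [True, True, True]
always = [True, False, False]

headers = ['Star', 'ever obs-able', 'always obs-able']
report = [prep_line(*headers), prep_line(*['-' * len(h) for h in headers])]
for name, e, a in zip(names, ever, always):
    report.append(prep_line(name, e, a))
print("\n".join(report))

# Targets satisfying every constraint for the whole window:
always_names = [n for n, a in zip(names, always) if a]
print(always_names)   # → ['Polaris']
```

The same comprehension over the real `targets`/`always` arrays gives the list of targets that are safe to schedule for the entire time range.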
csaladenes/csaladenes.github.io
present/mcc2/PythonDataScienceHandbook/05.10-Manifold-Learning.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
"""
Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < In Depth: Principal Component Analysis | Contents | In Depth: k-Means Clustering > <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.10-Manifold-Learning.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> In-Depth: Manifold Learning We have seen how principal component analysis (PCA) can be used in the dimensionality reduction task—reducing the number of features of a dataset while maintaining the essential relationships between the points. While PCA is flexible, fast, and easily interpretable, it does not perform so well when there are nonlinear relationships within the data; we will see some examples of these below. To address this deficiency, we can turn to a class of methods known as manifold learning—a class of unsupervised estimators that seeks to describe datasets as low-dimensional manifolds embedded in high-dimensional spaces. When you think of a manifold, I'd suggest imagining a sheet of paper: this is a two-dimensional object that lives in our familiar three-dimensional world, and can be bent or rolled in those two dimensions. In the parlance of manifold learning, we can think of this sheet as a two-dimensional manifold embedded in three-dimensional space.
Rotating, re-orienting, or stretching the piece of paper in three-dimensional space doesn't change the flat geometry of the paper: such operations are akin to linear embeddings. If you bend, curl, or crumple the paper, it is still a two-dimensional manifold, but the embedding into the three-dimensional space is no longer linear. Manifold learning algorithms would seek to learn about the fundamental two-dimensional nature of the paper, even as it is contorted to fill the three-dimensional space. Here we will demonstrate a number of manifold methods, going most deeply into a couple techniques: multidimensional scaling (MDS), locally linear embedding (LLE), and isometric mapping (IsoMap). We begin with the standard imports: End of explanation """ def make_hello(N=1000, rseed=42): # Make a plot with "HELLO" text; save as PNG fig, ax = plt.subplots(figsize=(4, 1)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1) ax.axis('off') ax.text(0.5, 0.4, 'HELLO', va='center', ha='center', weight='bold', size=85) fig.savefig('hello.png') plt.close(fig) # Open this PNG and draw random points from it from matplotlib.image import imread data = imread('hello.png')[::-1, :, 0].T rng = np.random.RandomState(rseed) X = rng.rand(4 * N, 2) i, j = (X * data.shape).astype(int).T mask = (data[i, j] < 1) X = X[mask] X[:, 0] *= (data.shape[0] / data.shape[1]) X = X[:N] return X[np.argsort(X[:, 0])] """ Explanation: Manifold Learning: "HELLO" To make these concepts more clear, let's start by generating some two-dimensional data that we can use to define a manifold. 
Here is a function that will create data in the shape of the word "HELLO": End of explanation """ X = make_hello(1000) colorize = dict(c=X[:, 0], cmap=plt.cm.get_cmap('rainbow', 5)) plt.scatter(X[:, 0], X[:, 1], **colorize) plt.axis('equal'); """ Explanation: Let's call the function and visualize the resulting data: End of explanation """ def rotate(X, angle): theta = np.deg2rad(angle) R = [[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]] return np.dot(X, R) X2 = rotate(X, 20) + 5 plt.scatter(X2[:, 0], X2[:, 1], **colorize) plt.axis('equal'); """ Explanation: The output is two dimensional, and consists of points drawn in the shape of the word, "HELLO". This data form will help us to see visually what these algorithms are doing. Multidimensional Scaling (MDS) Looking at data like this, we can see that the particular choice of x and y values of the dataset are not the most fundamental description of the data: we can scale, shrink, or rotate the data, and the "HELLO" will still be apparent. For example, if we use a rotation matrix to rotate the data, the x and y values change, but the data is still fundamentally the same: End of explanation """ from sklearn.metrics import pairwise_distances D = pairwise_distances(X) D.shape """ Explanation: This tells us that the x and y values are not necessarily fundamental to the relationships in the data. What is fundamental, in this case, is the distance between each point and the other points in the dataset. A common way to represent this is to use a distance matrix: for $N$ points, we construct an $N \times N$ array such that entry $(i, j)$ contains the distance between point $i$ and point $j$. 
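Such a distance matrix can be built directly with NumPy broadcasting; a tiny sketch on three toy points (not the HELLO data):

```python
import numpy as np

pts = np.array([[0.0, 0.0],
                [3.0, 4.0],
                [0.0, 1.0]])

# diff[i, j] holds the vector from point j to point i
diff = pts[:, np.newaxis, :] - pts[np.newaxis, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))   # entry (i, j): Euclidean distance
```

The result is symmetric with a zero diagonal, e.g. D[0, 1] is the 3-4-5 distance of 5.0.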
Let's use Scikit-Learn's efficient pairwise_distances function to do this for our original data: End of explanation """ plt.imshow(D, zorder=2, cmap='Blues', interpolation='nearest') plt.colorbar(); """ Explanation: As promised, for our N=1,000 points, we obtain a 1000×1000 matrix, which can be visualized as shown here: End of explanation """ D2 = pairwise_distances(X2) np.allclose(D, D2) """ Explanation: If we similarly construct a distance matrix for our rotated and translated data, we see that it is the same: End of explanation """ from sklearn.manifold import MDS model = MDS(n_components=2, dissimilarity='precomputed', random_state=1) out = model.fit_transform(D) plt.scatter(out[:, 0], out[:, 1], **colorize) plt.axis('equal'); """ Explanation: This distance matrix gives us a representation of our data that is invariant to rotations and translations, but the visualization of the matrix above is not entirely intuitive. In the representation shown in this figure, we have lost any visible sign of the interesting structure in the data: the "HELLO" that we saw before. Further, while computing this distance matrix from the (x, y) coordinates is straightforward, transforming the distances back into x and y coordinates is rather difficult. This is exactly what the multidimensional scaling algorithm aims to do: given a distance matrix between points, it recovers a $D$-dimensional coordinate representation of the data. 
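One classical way to make this recovery concrete is metric MDS by double-centering: center the squared distance matrix, then take the top eigenvectors of the result. This sketch is the textbook eigendecomposition route, assuming Euclidean distances; it is not the iterative SMACOF solver that Scikit-Learn's MDS estimator uses:

```python
import numpy as np

def classical_mds(D, k=2):
    """Recover a k-dimensional embedding from a Euclidean distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The embedding is unique only up to rotation and reflection, which is why MDS output can look flipped or rotated relative to the input.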
Let's see how it works for our distance matrix, using the precomputed dissimilarity to specify that we are passing a distance matrix: End of explanation """ def random_projection(X, dimension=3, rseed=42): assert dimension >= X.shape[1] rng = np.random.RandomState(rseed) C = rng.randn(dimension, dimension) e, V = np.linalg.eigh(np.dot(C, C.T)) return np.dot(X, V[:X.shape[1]]) X3 = random_projection(X, 3) X3.shape """ Explanation: The MDS algorithm recovers one of the possible two-dimensional coordinate representations of our data, using only the $N\times N$ distance matrix describing the relationship between the data points. MDS as Manifold Learning The usefulness of this becomes more apparent when we consider the fact that distance matrices can be computed from data in any dimension. So, for example, instead of simply rotating the data in the two-dimensional plane, we can project it into three dimensions using the following function (essentially a three-dimensional generalization of the rotation matrix used earlier): End of explanation """ from mpl_toolkits import mplot3d ax = plt.axes(projection='3d') ax.scatter3D(X3[:, 0], X3[:, 1], X3[:, 2], **colorize) ax.view_init(azim=70, elev=50) """ Explanation: Let's visualize these points to see what we're working with: End of explanation """ model = MDS(n_components=2, random_state=1) out3 = model.fit_transform(X3) plt.scatter(out3[:, 0], out3[:, 1], **colorize) plt.axis('equal'); """ Explanation: We can now ask the MDS estimator to input this three-dimensional data, compute the distance matrix, and then determine the optimal two-dimensional embedding for this distance matrix. 
The result recovers a representation of the original data: End of explanation """ def make_hello_s_curve(X): t = (X[:, 0] - 2) * 0.75 * np.pi x = np.sin(t) y = X[:, 1] z = np.sign(t) * (np.cos(t) - 1) return np.vstack((x, y, z)).T XS = make_hello_s_curve(X) """ Explanation: This is essentially the goal of a manifold learning estimator: given high-dimensional embedded data, it seeks a low-dimensional representation of the data that preserves certain relationships within the data. In the case of MDS, the quantity preserved is the distance between every pair of points. Nonlinear Embeddings: Where MDS Fails Our discussion thus far has considered linear embeddings, which essentially consist of rotations, translations, and scalings of data into higher-dimensional spaces. Where MDS breaks down is when the embedding is nonlinear—that is, when it goes beyond this simple set of operations. Consider the following embedding, which takes the input and contorts it into an "S" shape in three dimensions: End of explanation """ from mpl_toolkits import mplot3d ax = plt.axes(projection='3d') ax.scatter3D(XS[:, 0], XS[:, 1], XS[:, 2], **colorize); """ Explanation: This is again three-dimensional data, but we can see that the embedding is much more complicated: End of explanation """ from sklearn.manifold import MDS model = MDS(n_components=2, random_state=2) outS = model.fit_transform(XS) plt.scatter(outS[:, 0], outS[:, 1], **colorize) plt.axis('equal'); """ Explanation: The fundamental relationships between the data points are still there, but this time the data has been transformed in a nonlinear way: it has been wrapped-up into the shape of an "S." 
If we try a simple MDS algorithm on this data, it is not able to "unwrap" this nonlinear embedding, and we lose track of the fundamental relationships in the embedded manifold: End of explanation
"""
from sklearn.manifold import MDS
model = MDS(n_components=2, random_state=2)
outS = model.fit_transform(XS)
plt.scatter(outS[:, 0], outS[:, 1], **colorize)
plt.axis('equal');
"""
Explanation: The best two-dimensional linear embedding does not unwrap the S-curve, but instead throws out the original y-axis. Nonlinear Manifolds: Locally Linear Embedding How can we move forward here? Stepping back, we can see that the source of the problem is that MDS tries to preserve distances between faraway points when constructing the embedding. But what if we instead modified the algorithm such that it only preserves distances between nearby points? The resulting embedding would be closer to what we want. Visually, we can think of it as illustrated in this figure: figure source in Appendix Here each faint line represents a distance that should be preserved in the embedding. On the left is a representation of the model used by MDS: it tries to preserve the distances between each pair of points in the dataset. On the right is a representation of the model used by a manifold learning algorithm called locally linear embedding (LLE): rather than preserving all distances, it instead tries to preserve only the distances between neighboring points: in this case, the nearest 100 neighbors of each point. Thinking about the left panel, we can see why MDS fails: there is no way to flatten this data while adequately preserving the length of every line drawn between the two points. For the right panel, on the other hand, things look a bit more optimistic. We could imagine unrolling the data in a way that keeps the lengths of the lines approximately the same.
This is precisely what LLE does, through a global optimization of a cost function reflecting this logic. LLE comes in a number of flavors; here we will use the modified LLE algorithm to recover the embedded two-dimensional manifold. In general, modified LLE does better than other flavors of the algorithm at recovering well-defined manifolds with very little distortion: End of explanation """ from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=30) faces.data.shape """ Explanation: The result remains somewhat distorted compared to our original manifold, but captures the essential relationships in the data! Some Thoughts on Manifold Methods Though this story and motivation is compelling, in practice manifold learning techniques tend to be finicky enough that they are rarely used for anything more than simple qualitative visualization of high-dimensional data. The following are some of the particular challenges of manifold learning, which all contrast poorly with PCA: In manifold learning, there is no good framework for handling missing data. In contrast, there are straightforward iterative approaches for missing data in PCA. In manifold learning, the presence of noise in the data can "short-circuit" the manifold and drastically change the embedding. In contrast, PCA naturally filters noise from the most important components. The manifold embedding result is generally highly dependent on the number of neighbors chosen, and there is generally no solid quantitative way to choose an optimal number of neighbors. In contrast, PCA does not involve such a choice. In manifold learning, the globally optimal number of output dimensions is difficult to determine. In contrast, PCA lets you find the output dimension based on the explained variance. In manifold learning, the meaning of the embedded dimensions is not always clear. In PCA, the principal components have a very clear meaning. 
In manifold learning, the computational expense of manifold methods scales as $O[N^2]$ or $O[N^3]$. For PCA, there exist randomized approaches that are generally much faster (though see the megaman package for some more scalable implementations of manifold learning). With all that on the table, the only clear advantage of manifold learning methods over PCA is their ability to preserve nonlinear relationships in the data; for that reason I tend to explore data with manifold methods only after first exploring them with PCA. Scikit-Learn implements several common variants of manifold learning beyond Isomap and LLE: the Scikit-Learn documentation has a nice discussion and comparison of them. Based on my own experience, I would give the following recommendations: For toy problems such as the S-curve we saw before, locally linear embedding (LLE) and its variants (especially modified LLE) perform very well. This is implemented in sklearn.manifold.LocallyLinearEmbedding. For high-dimensional data from real-world sources, LLE often produces poor results, and isometric mapping (IsoMap) seems to generally lead to more meaningful embeddings. This is implemented in sklearn.manifold.Isomap. For data that is highly clustered, t-distributed stochastic neighbor embedding (t-SNE) seems to work very well, though can be very slow compared to other methods. This is implemented in sklearn.manifold.TSNE. If you're interested in getting a feel for how these work, I'd suggest running each of the methods on the data in this section. Example: Isomap on Faces One place manifold learning is often used is in understanding the relationship between high-dimensional data points. A common case of high-dimensional data is images: for example, a set of images with 1,000 pixels each can be thought of as a collection of points in 1,000 dimensions – the brightness of each pixel in each image defines the coordinate in that dimension. Here let's apply Isomap on some faces data.
We will use the Labeled Faces in the Wild dataset, which we previously saw in In-Depth: Support Vector Machines and In Depth: Principal Component Analysis. Running this command will download the data and cache it in your home directory for later use: End of explanation """ fig, ax = plt.subplots(4, 8, subplot_kw=dict(xticks=[], yticks=[])) for i, axi in enumerate(ax.flat): axi.imshow(faces.images[i], cmap='gray') """ Explanation: We have 2,370 images, each with 2,914 pixels. In other words, the images can be thought of as data points in a 2,914-dimensional space! Let's quickly visualize several of these images to see what we're working with: End of explanation """ # from sklearn.decomposition import RandomizedPCA from sklearn.decomposition import PCA as RandomizedPCA model = RandomizedPCA(100).fit(faces.data) plt.plot(np.cumsum(model.explained_variance_ratio_)) plt.xlabel('n components') plt.ylabel('cumulative variance'); """ Explanation: We would like to plot a low-dimensional embedding of the 2,914-dimensional data to learn the fundamental relationships between the images. One useful way to start is to compute a PCA, and examine the explained variance ratio, which will give us an idea of how many linear features are required to describe the data: End of explanation """ from sklearn.manifold import Isomap model = Isomap(n_components=2) proj = model.fit_transform(faces.data) proj.shape """ Explanation: We see that for this data, nearly 100 components are required to preserve 90% of the variance: this tells us that the data is intrinsically very high dimensional—it can't be described linearly with just a few components. When this is the case, nonlinear manifold embeddings like LLE and Isomap can be helpful. 
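The cumulative-variance curve plotted above is just a normalized running sum of the eigenvalues of the data covariance matrix; a plain-NumPy sketch on synthetic data (illustrative only, not the faces):

```python
import numpy as np

rng = np.random.RandomState(42)
X = rng.randn(200, 10) * np.arange(1, 11)   # columns with growing variance

Xc = X - X.mean(axis=0)                     # center the features
cov = Xc.T @ Xc / (len(Xc) - 1)             # sample covariance matrix
eigvals = np.linalg.eigvalsh(cov)[::-1]     # eigenvalues, descending
explained = eigvals / eigvals.sum()         # explained variance ratio
cumulative = np.cumsum(explained)           # the curve PCA plots for us
```

Reading off how many components reach, say, 90% of the variance is then just a search along `cumulative`.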
We can compute an Isomap embedding on these faces using the same pattern shown before: End of explanation """ from matplotlib import offsetbox def plot_components(data, model, images=None, ax=None, thumb_frac=0.05, cmap='gray'): ax = ax or plt.gca() proj = model.fit_transform(data) ax.plot(proj[:, 0], proj[:, 1], '.k') if images is not None: min_dist_2 = (thumb_frac * max(proj.max(0) - proj.min(0))) ** 2 shown_images = np.array([2 * proj.max(0)]) for i in range(data.shape[0]): dist = np.sum((proj[i] - shown_images) ** 2, 1) if np.min(dist) < min_dist_2: # don't show points that are too close continue shown_images = np.vstack([shown_images, proj[i]]) imagebox = offsetbox.AnnotationBbox( offsetbox.OffsetImage(images[i], cmap=cmap), proj[i]) ax.add_artist(imagebox) """ Explanation: The output is a two-dimensional projection of all the input images. To get a better idea of what the projection tells us, let's define a function that will output image thumbnails at the locations of the projections: End of explanation """ fig, ax = plt.subplots(figsize=(10, 10)) plot_components(faces.data, model=Isomap(n_components=2), images=faces.images[:, ::2, ::2]) """ Explanation: Calling this function now, we see the result: End of explanation """ # DEPRECATED # from sklearn.datasets import fetch_mldata # mnist = fetch_mldata('MNIST original') # MLDATA SERVER IS DOWN # mldata.org seems to still be down # DOES NOT WORK SOMETIMES # this step might fail based on permssions and network access # if in Docker, specify --network=host # if in docker-compose specify version 3.4 and build -> network: host from sklearn.datasets import fetch_openml mnist = fetch_openml('mnist_784') mdata=mnist.data.astype(int) mtarget=mnist.target.astype(int) # ALTERNATIVE SOLUTION # https://stackoverflow.com/a/54986237/2080425 # from scipy.io import loadmat # mnist = loadmat('data/mnist-original.mat') # mdata=mnist['data'].T # mtarget=mnist['label'].T # mdata.shape """ Explanation: The result is interesting: 
the first two Isomap dimensions seem to describe global image features: the overall darkness or lightness of the image from left to right, and the general orientation of the face from bottom to top. This gives us a nice visual indication of some of the fundamental features in our data. We could then go on to classify this data (perhaps using manifold features as inputs to the classification algorithm) as we did in In-Depth: Support Vector Machines. Example: Visualizing Structure in Digits As another example of using manifold learning for visualization, let's take a look at the MNIST handwritten digits set. This data is similar to the digits we saw in In-Depth: Decision Trees and Random Forests, but with many more pixels per image. It was originally hosted at http://mldata.org/; since that server is no longer available, we fetch it from OpenML with Scikit-Learn's fetch_openml utility: End of explanation
"""
# DEPRECATED
# from sklearn.datasets import fetch_mldata
# mnist = fetch_mldata('MNIST original')  # MLDATA SERVER IS DOWN
# mldata.org seems to still be down

# DOES NOT WORK SOMETIMES
# this step might fail based on permissions and network access
# if in Docker, specify --network=host
# if in docker-compose specify version 3.4 and build -> network: host
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', as_frame=False)  # as_frame=False keeps NumPy arrays
mdata = mnist.data.astype(int)
mtarget = mnist.target.astype(int)

# ALTERNATIVE SOLUTION
# https://stackoverflow.com/a/54986237/2080425
# from scipy.io import loadmat
# mnist = loadmat('data/mnist-original.mat')
# mdata = mnist['data'].T
# mtarget = mnist['label'].T
# mdata.shape
"""
Explanation: This consists of 70,000 images, each with 784 pixels (i.e. the images are 28×28). As before, we can take a look at the first few images: End of explanation
"""
fig, ax = plt.subplots(6, 8, subplot_kw=dict(xticks=[], yticks=[]))
for i, axi in enumerate(ax.flat):
    axi.imshow(mdata[1250 * i].reshape(28, 28), cmap='gray_r')
"""
Explanation: This gives us an idea of the variety of handwriting styles in the dataset. Let's compute a manifold learning projection across the data.
For speed here, we'll only use 1/30 of the data, which is about ~2000 points (because of the relatively poor scaling of manifold learning, I find that a few thousand samples is a good number to start with for relatively quick exploration before moving to a full calculation): End of explanation """ from sklearn.manifold import Isomap # Choose 1/4 of the "1" digits to project # data = mdata[[mtarget == 1]][::4] data = mdata[np.array([mtarget == 1]).flatten()][::4] fig, ax = plt.subplots(figsize=(10, 10)) model = Isomap(n_neighbors=5, n_components=2, eigen_solver='dense') plot_components(data, model, images=data.reshape((-1, 28, 28)), ax=ax, thumb_frac=0.05, cmap='gray_r') """ Explanation: The resulting scatter plot shows some of the relationships between the data points, but is a bit crowded. We can gain more insight by looking at just a single number at a time: End of explanation """
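The digit-selection line above is ordinary NumPy boolean masking followed by strided slicing; a toy sketch of the same idiom (made-up arrays, not the MNIST variables):

```python
import numpy as np

labels = np.array([0, 1, 1, 2, 1, 0, 1])
samples = np.arange(len(labels)) * 10   # stand-in rows: 0, 10, ..., 60

ones = samples[labels == 1]             # keep only the "1" examples
ones_subset = ones[::2]                 # then take every 2nd of those
```

The same two steps (mask by class, then subsample by stride) are what the Isomap-on-ones cell performs on the full digit arrays.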
DigitalSlideArchive/HistomicsTK
docs/examples/semantic_segmentation_superpixel_approach.ipynb
apache-2.0
import tempfile import girder_client import numpy as np from histomicstk.annotations_and_masks.annotation_and_mask_utils import ( delete_annotations_in_slide) from histomicstk.saliency.cellularity_detection_superpixels import ( Cellularity_detector_superpixels) import matplotlib.pylab as plt from matplotlib.colors import ListedColormap %matplotlib inline # color map vals = np.random.rand(256,3) vals[0, ...] = [0.9, 0.9, 0.9] cMap = ListedColormap(1 - vals) """ Explanation: Finding cellular regions with superpixel analysis Overview: Whole-slide images often contain artifacts like marker or acellular regions that need to be avoided during analysis. In this example we show how HistomicsTK can be used to develop saliency detection algorithms that segment the slide at low magnification to generate a map to guide higher magnification analyses. Here we show how superpixel analysis can be used to locate hypercellular regions that correspond to tumor-rich content. This uses Simple Linear Iterative Clustering (SLIC) to get superpixels at a low slide magnification to detect cellular regions. The first step of this pipeline detects tissue regions (i.e. individual tissue pieces) using the get_tissue_mask method of the histomicstk.saliency module. Then, each tissue piece is processed separately for accuracy and disk space efficiency. It is important to keep in mind that this does NOT rely on a tile iterator, but loads the entire tissue region (but NOT the whole slide) in memory and passes it on to skimage.segmentation.slic method. Not using a tile iterator helps keep the superpixel sizes large enough to correspond to tissue boundaries. Once superpixels are segmented, the image is deconvolved and features are extracted from the hematoxylin channel. Features include intensity and possibly also texture features. 
Then, a mixed component Gaussian mixture model is fit to the features, and median intensity is used to rank superpixel clusters by 'cellularity' (since we are working with the hematoxylin channel). Note that the decison to fit a gaussian mixture model instead of using K-means clustering is a design choice. If you'd like to experiment, feel free to try other methods of classifying superpixels into clusters using other approaches. Additional functionality includes contour extraction to get the final segmentation boundaries of cellular regions and to visualize them in HistomicsUI using one's preferred colormap. Here are some sample results: From left to right: Slide thumbnail, superpixel classifications, contiguous cellular/acellular regions Where to look? |_ histomicstk/ |_saliency/ |_cellularity_detection.py |_tests/ |_test_saliency.py End of explanation """ APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/' SAMPLE_SLIDE_ID = "5d586d76bd4404c6b1f286ae" # SAMPLE_SLIDE_ID = "5d8c296cbd4404c6b1fa5572" gc = girder_client.GirderClient(apiUrl=APIURL) gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb') # This is where the run logs will be saved logging_savepath = tempfile.mkdtemp() # color normalization values from TCGA-A2-A3XS-DX1 cnorm_thumbnail = { 'mu': np.array([9.24496373, -0.00966569, 0.01757247]), 'sigma': np.array([0.35686209, 0.02566772, 0.02500282]), } # from the ROI in Amgad et al, 2019 cnorm_main = { 'mu': np.array([8.74108109, -0.12440419, 0.0444982]), 'sigma': np.array([0.6135447, 0.10989545, 0.0286032]), } # deleting existing annotations in target slide (if any) delete_annotations_in_slide(gc, SAMPLE_SLIDE_ID) """ Explanation: Prepwork End of explanation """ print(Cellularity_detector_superpixels.__init__.__doc__) """ Explanation: Initialize the cellularity detector End of explanation """ # init cellularity detector cds = Cellularity_detector_superpixels( gc, slide_id=SAMPLE_SLIDE_ID, MAG=3.0, compactness=0.1, 
spixel_size_baseMag=256 * 256,
    max_cellularity=40,
    visualize_spixels=True, visualize_contiguous=True,
    get_tissue_mask_kwargs={
        'deconvolve_first': False,
        'n_thresholding_steps': 2,
        'sigma': 1.5,
        'min_size': 500, },
    verbose=2, monitorPrefix='test',
    logging_savepath=logging_savepath)
"""
Explanation: In this example, and as the default behavior, we use a handful of informative intensity features extracted from the hematoxylin channel after color deconvolution to fit a gaussian mixture model. Empirically (on a few test slides), this seems to give better results than using the full suite of intensity and texture features available. Feel free to experiment with this and find the optimum combination of features for your application. End of explanation
"""
# set color normalization for thumbnail
# cds.set_color_normalization_values(
#     mu=cnorm_thumbnail['mu'],
#     sigma=cnorm_thumbnail['sigma'], what='thumbnail')

# set color normalization values for main tissue
cds.set_color_normalization_values(
    mu=cnorm_main['mu'], sigma=cnorm_main['sigma'], what='main')
"""
Explanation: Set the color normalization values You can choose to reinhard color normalize the slide thumbnail and/or the tissue image at target magnification. You can either provide the mu and sigma values directly or provide the path to an image from which to infer these values. Please refer to the color_normalization module for reinhard normalization implementation details. In this example, we use a "high-sensitivity, low-specificity" strategy to detect tissue, followed by the more specific cellularity detection module. In other words, the tissue_detection module is used to detect all tissue, and only exclude whitespace and marker. Here we do NOT perform color normalization before tissue detection (empirically gives worse results), but we do normalize when detecting the cellular regions within the tissue.
End of explanation
"""
print(cds.run.__doc__)

tissue_pieces = cds.run()
"""
Explanation: Run the detector End of explanation
"""
plt.imshow(tissue_pieces[0].tissue_mask, cmap=cMap)

plt.imshow(tissue_pieces[0].spixel_mask, cmap=cMap)

tissue_pieces[0].fdata.head()

tissue_pieces[0].cluster_props
"""
Explanation: Check the results The resultant list of objects corresponds to the results for each "tissue piece" detected in the slide. You may explore various attributes like the offset coordinates, tissue mask, superpixel labeled mask, superpixel feature data, and superpixel cluster properties. End of explanation
"""
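The clustering step described in the overview (fit a Gaussian mixture to superpixel features, then rank components by median intensity as a cellularity proxy) can be sketched in a few lines. This is an illustrative toy reimplementation on fake 1-D intensities, assuming scikit-learn is available; it is not the Cellularity_detector_superpixels internals:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# Fake per-superpixel hematoxylin intensities: a dim and a dense population
intensity = np.concatenate([rng.normal(0.2, 0.03, 300),
                            rng.normal(0.8, 0.03, 300)])[:, None]

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensity)
labels = gmm.predict(intensity)

# Rank clusters by median intensity: higher median means more "cellular"
medians = [np.median(intensity[labels == k]) for k in range(2)]
ranked = np.argsort(medians)[::-1]   # most cellular cluster first
```

Swapping the mixture model for k-means is the design alternative the overview mentions; the ranking step stays the same either way.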
NelisW/ComputationalRadiometry
12d-SpectralTemperatureEstimation.ipynb
mpl-2.0
from IPython.display import display from IPython.display import Image from IPython.display import HTML %matplotlib inline import numpy as np from scipy.optimize import curve_fit import pyradi.ryutils as ryutils import pyradi.ryplot as ryplot import pyradi.ryplanck as ryplanck #make pngs at required dpi import matplotlib as mpl mpl.rc("savefig", dpi=75) mpl.rc('figure', figsize=(10,8)) """ Explanation: Spectral Temperature Estimation Problem Statement A spectral radiometer is used to determine the surface temperature of a hot object exposed to sunlight. The surface normal vector is pointing directly towards the sun. The temperature and emissivity of the object surface, as well as the temperature of the sun are unknown. The object is considerably hotter than the environment, hence atmospheric path radiance and reflected ambient flux can be ignored. Ignore atmospheric transmittance between the sun and the object and between the object and the sensor. The measurement results are corrupted by noise. The sun and object both radiate as Planck radiators. Emissivity is constant at all wavelengths. Three different objects were characterised, each with different sun temperature, object temperature and object surface emissivity values. In other words, these are three completely independent cases. The spectral measured radiance for all three cases are contained in the file https://raw.githubusercontent.com/NelisW/pyradi/master/pyradi/data/EOSystemAnalysisDesign-Data/twoSourceSpectrum.txt The first column in the file is wavelength in $\mu$m, and the remaining three columns are spectral radiance data for the three cases in W/(m$^2$.sr.$\mu$m). Develop a model for the measurement setup; complete with mathematical description and a diagram of the setup. [5] Develop two different techniques to solve the model parameters (sun temperature, object temperature and object emissivity) for the given spectral data. 
[13]
Evaluate the two techniques in terms of accuracy and risk in finding a stable and true solution. [2]
[20]
End of explanation
"""
filename = 'twoSourceSpectrum.txt'

def bbfunc(wl, tsun, tobj, emis):
    """Given the two temperatures and emissivity, calculate the spectrum.
    """
    lSun = 2.175e-5 * ryplanck.planck(wl, tsun, 'el') / np.pi  # W/(m2.sr.um)
    lObj = ryplanck.planck(wl, tobj, 'el') / np.pi  # W/(m2.sr.um)
    return lSun * (1 - emis) + lObj * emis

def percent(val1, val2):
    return 100. * (val1 - val2) / val2
"""
Explanation: Prepare the data file
This section is not part of the solution; it is used to prepare the data for the problem statement.
Calculate the spectral radiance curves from first principles.
End of explanation
"""
wl = np.linspace(0.2, 14, 1000).reshape(-1,1)  # wavelength
tempsuns = [5715, 5855, 6179]
tempobjs = [503, 998, 1420]
emiss = [0.89, 0.51, 0.2]
plotmins = [1e-0, 1e1, 1e1]
plotmaxs = [1e3, 1e4, 1e5]
noise = [.1, .5, 10]
radiancesun = np.zeros((wl.shape[0], len(emiss)))
radianceobj = np.zeros((wl.shape[0], len(emiss)))
radiancesum = np.zeros((wl.shape[0], len(emiss)))
outarr = wl
p = ryplot.Plotter(2,3,1,figsize=(12,10))
for i,(tempsun, tempobj, emis, plotmin, plotmax) in enumerate(zip(tempsuns, tempobjs, emiss, plotmins, plotmaxs)):
    radiancesum[:,i] = bbfunc(wl, tempsun, tempobj, emis) + noise[i] * np.random.normal(size=wl.shape[0])
    radiancesun[:,i] = bbfunc(wl, tempsun, tempobj, 0.0)
    radianceobj[:,i] = bbfunc(wl, tempsun, tempobj, 1.0)
    outarr = np.hstack((outarr, radiancesum[:,i].reshape(-1,1)))
    p.logLog(1+i, wl, radiancesun[:,i], 'Self exitance plus sun reflection, case {}'.format(i+1),
        'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)', maxNX=4, pltaxis=[0.2, 10, plotmin, plotmax])
    p.logLog(1+i, wl, radianceobj[:,i], 'Self exitance plus sun reflection, case {}'.format(i+1),
        'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)', maxNX=4, pltaxis=[0.2, 10, plotmin, plotmax])
    p.logLog(1+i, wl, radiancesum[:,i], 'Self exitance plus sun reflection, case {}'.format(i+1),
        'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)', maxNX=4, pltaxis=[0.2, 10, plotmin, plotmax])

# data not to be written out again, already committed to the pyradi web site
if False:
    with open(filename, 'wt') as fout:
        fout.write(('{:25s}' * 4 + '\n').format('wavelength-um',
            'Radiance-case1', 'Radiance-case2', 'Radiance-case3'))
        np.savetxt(fout, outarr)
"""
Explanation: Plot the data for visual check. Plot on log scale to capture the wide range of values.
End of explanation
"""
radIn = np.loadtxt(filename, skiprows=1)
wl = radIn[:,0]
q = ryplot.Plotter(1,2,2,figsize=(12,8))
for i in range(0,len(emiss)):
    tPeak = 2898. / wl
    q.logLog(1, wl, radIn[:,i+1], 'Input data', 'Wavelength $\mu$m',
        'Radiance W/(m$^2$.sr.$\mu$m)', label=['Case {}'.format(i+1)],
        maxNX=4, pltaxis=[0.2, 10, 1e0, 1e4])
    q.semilogY(2, wl, radIn[:,i+1], 'Input data', 'Wavelength $\mu$m',
        'Radiance W/(m$^2$.sr.$\mu$m)', label=['Case {}'.format(i+1)],
        maxNX=4, pltaxis=[0.2, 10, 1e0, 1e4])
    q.semilogY(3, tPeak, radIn[:,i+1], 'Input data', 'Temperature K',
        'Radiance W/(m$^2$.sr.$\mu$m)', label=['Case {}'.format(i+1)],
        maxNX=4, pltaxis=[0., 2000., 1e0, 1e4])
    q.semilogY(4, tPeak, radIn[:,i+1], 'Input data', 'Temperature K',
        'Radiance W/(m$^2$.sr.$\mu$m)', label=['Case {}'.format(i+1)],
        maxNX=4, pltaxis=[4000., 8000., 4e1, 1e3])
"""
Explanation: Solution
Recognise that the signature comprises two components: reflected sunlight and self emittance. The total signature is given by
$L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\, \psi L_{\rm bb}(T_{\rm sun})$,
where $\psi=A_{\rm sun}/(\pi R_{\rm sun}^2) =2.1757\times10^{-5}$ [sr/sr] follows from the geometry between the earth and the sun, $\epsilon$ is the object surface emissivity, $T_{\rm obj}$ the object temperature, and $T_{\rm sun}$ is the sun surface temperature. The three variables to be solved for are $\epsilon$, $T_{\rm obj}$, and $T_{\rm sun}$.
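The value $\psi \approx 2.1757\times10^{-5}$ sr/sr can be sanity-checked from the geometry alone; a quick sketch, using standard values for the solar radius and the mean earth-sun distance (these constants are assumptions, not given in the problem):

```python
import numpy as np

# assumed astronomical constants (not part of the problem data)
r_sun = 6.963e8    # solar radius [m]
d_es = 1.496e11    # mean earth-sun distance [m]

# solid angle subtended by the solar disc, as seen from earth
omega_sun = np.pi * r_sun**2 / d_es**2    # [sr]

# normalise by pi sr to obtain the sr/sr ratio used in the model
psi = omega_sun / np.pi                   # equals (r_sun / d_es)**2
print(psi)
```

The result is about $2.17\times10^{-5}$, in agreement with the value used in the model.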
The problem is coded in mathematical form as
def bbfunc(wl, tsun, tobj, emis):
    """Given the two temperatures and emissivity, calculate the spectrum.
    """
    lSun = 2.175e-5 * ryplanck.planck(wl, tsun, 'el') / np.pi  # W/(m2.sr.um)
    lObj = ryplanck.planck(wl, tobj, 'el') / np.pi  # W/(m2.sr.um)
    return lSun * (1 - emis) + lObj * emis

The first step in the analysis is to plot the supplied data. Sometimes it helps to plot the data on different plotting scales. In this case the sun and object are Planck radiators (given in the problem statement), where the wavelength of peak radiance is related to the temperature by Wien's displacement law, $T = 2898/\lambda_{p}$. Exploiting this fact, the radiance curves are plotted on a Wien-law scale, where wavelength is converted to temperature. For an ideal Planck-law radiator the radiance peak can be used to read off the temperature on the temperature scale.
End of explanation
"""
# temperatures read off from the above graphs
tLo = np.asarray([[400, 600],[900,1100],[1400,1600]])
tHi = np.asarray([[5000, 6500],[5000,6500],[5000,6500]])
# get peak wavelengths associated with these temperatures
wLo = 2897.77 / tLo
wHi = 2897.77 / tHi
print(tLo)
print(tHi)
print(wLo)
print(wHi)
"""
Explanation: Analysing the graphs above, rough estimates of the temperatures can be formed. The object temperatures can be estimated relatively easily from the bottom-left curve: approximately 500 K, 1000 K and 1500 K. The sun temperature is not so easily estimated: 5500 K, 5900 K and 6000 K. These estimates are in themselves too inaccurate for final results, but can guide the subsequent analysis.
Method 1: Differentiating Spectral Radiance
The first method is a refinement of the peak-detection method used above for the first-order estimates. It relies on the mathematical fact that the locations of peaks and minima can be determined from the zero crossings of the first derivative. The derivative is calculated using the Numpy diff method.
The problem with this technique is that the measurement is somewhat noisy, and the differentiation operator amplifies the noise. So we are making a noisy signal noisier and then looking for zero crossings - a method that holds tremendous promise to fail. The noisy zero crossings make it very difficult to determine the true zero crossing accurately. We will use the estimates determined above to limit the search range.
The data is noisy, so a filter is used to improve the signal-to-noise ratio somewhat. There is, however, the risk that the filtering process may interfere with the true nature of the data, so the degree of filtering is limited. The Savitzky-Golay filter is used here; this filter is commonly used in the spectroscopy community.
In the code below the first derivative is calculated using numpy.diff and then normalised (because the absolute scale is not important). The differentiated signal is plotted for inspection. Note the noise in the signal - filtering seems to help quite a lot (but not sufficiently to count zero crossings naively).
The analysis below uses privileged problem data, but only to determine the error between the answers and the true temperatures.
First determine the approximate wavelength ranges where the estimated zero crossings occur, based on temperatures read off from the above graphs.
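The code below uses pyradi's savitzkyGolay1D; the same smoothing (and even the smoothed derivative in one step) is available in scipy.signal.savgol_filter. A sketch on a synthetic noisy line - the signal shape and noise level here are illustrative assumptions, not the problem data:

```python
import numpy as np
from scipy.signal import savgol_filter

np.random.seed(0)
# synthetic noisy spectrum with a single peak at x = 3.0 (illustrative only)
x = np.linspace(0.2, 10.0, 500)
truth = np.exp(-(x - 3.0)**2)
y = truth + 0.02 * np.random.normal(size=x.size)

# smooth with the same window length and polynomial order as used below
y_smooth = savgol_filter(y, window_length=11, polyorder=3)

# savgol_filter can also return the smoothed first derivative directly,
# avoiding a separate (noise-amplifying) np.diff step
dy = savgol_filter(y, window_length=11, polyorder=3, deriv=1, delta=x[1] - x[0])
```

The zero crossing of dy then marks the peak of the smoothed signal.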
End of explanation
"""
def processTempEmis(i, data, datadiff, dataName, tempsuns, tempobjs, emiss, wldcen, wLo, wHi):
    """
    data is the radiance spectrum
    datadiff is the radiance spectrum differential
    dataName is the text name to be used in printout
    tempsuns, tempobjs, emiss are the reference data to determine errors
    wldcen is wavelength values at the differentiation samples
    wLo is the low temperature spectral range containing zero crossing
    wHi is the high temperature spectral range containing zero crossing
    """
    # find all zero crossings (all wavelengths) to determine inflection wavelengths
    zero_crossings = np.where(np.diff(np.sign(datadiff)), 1, 0)
    # set up spectral filters according to the estimated temperature ranges
    wLofilter = np.where((wldcen >= wLo[i,1]) & (wldcen <= wLo[i,0]), 1., 0.)[1:]
    wHifilter = np.where((wldcen >= wHi[i,1]) & (wldcen <= wHi[i,0]), 1., 0.)[1:]
    # find indexes of zero crossings within the spectral filters
    zcHi = np.where(zero_crossings * wHifilter)
    zcLo = np.where(zero_crossings * wLofilter)
    # use Wien's law to determine temperature,
    # but this could be many samples if more than one zero crossing: take the mean
    bbtempHi = np.mean(2897.77 / wldcen[zcHi])
    bbtempLo = np.mean(2897.77 / wldcen[zcLo])
    # calculate emissivity, given the two above temperatures
    # (2.175e-5 is psi, consistent with bbfunc above)
    lsun = 2.175e-5 * ryplanck.planck(wldcen, bbtempHi, 'el') / np.pi  # W/(m2.sr.um)
    lobj = ryplanck.planck(wldcen, bbtempLo, 'el') / np.pi  # W/(m2.sr.um)
    estEmis = np.average((data[1:] - lsun) / (lobj - lsun))
    print('\nCase {} {}:'.format(i+1, dataName))
    print('Estimated Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(bbtempHi, bbtempLo, estEmis))
    print('True Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(tempsuns[i], tempobjs[i], emiss[i]))
    print('Error Tsun={:.2f}% Tobj={:.2f}% emis={:.4f}%'.format(
        percent(bbtempHi, tempsuns[i]), percent(bbtempLo, tempobjs[i]), percent(estEmis, emiss[i])))
    return bbtempHi, bbtempLo, estEmis

radiancesumdif = np.zeros((wl.shape[0]-1, len(emiss)))
wldiff = np.diff(radIn[:,0], axis=0).reshape(-1,1)
wldcen = (radIn[:-1,0] + radIn[1:,0]) / 2.
p = ryplot.Plotter(2,3,1,figsize=(10,10))
for i in range(0,len(emiss)):
    # take derivative and normalise because scale is not important
    radiancesumdif[:,i] = (np.diff(radIn[:,i+1], axis=0).reshape(-1,1) / wldiff).reshape(-1,)
    radiancesumdif[:,i] /= np.max(radiancesumdif[:,i], axis=0)
    # the derivative is too noisy, so filter the signal to suppress noise
    sgfiltered = ryutils.savitzkyGolay1D(radIn[:,i+1], window_size=11, order=3, deriv=0, rate=1)
    sgfiltereddif = (np.diff(sgfiltered, axis=0).reshape(-1,1) / wldiff).reshape(-1,)
    sgfiltereddif /= np.max(sgfiltereddif, axis=0)
    # plot
    p.semilogX(1+i, wldcen, radiancesumdif[:,i], 'Differentiated signal case {}'.format(i+1),
        'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)', plotCol=['r'], maxNX=4,
        pltaxis=[0.2, 10, -0.5, 1.0], label=['Raw'])
    p.semilogX(1+i, wldcen, sgfiltereddif, 'Differentiated signal case {}'.format(i+1),
        'Wavelength $\mu$m', 'Radiance W/(m$^2$.sr.$\mu$m)', plotCol=['b'], maxNX=4,
        pltaxis=[0.2, 10, -0.5, 1.0], label=['Filtered'])
    # process to find the zero crossings
    processTempEmis(i, radIn[:,i+1], radiancesumdif[:,i], 'Raw',
        tempsuns, tempobjs, emiss, wldcen, wLo, wHi)
    processTempEmis(i, radIn[:,i+1], sgfiltereddif, 'Filtered',
        tempsuns, tempobjs, emiss, wldcen, wLo, wHi)
    print(30*'-')
"""
Explanation: The following function does the hard work to find the maxima.
First find all the zero crossings in the data, but there are many such crossings (the three actual crossings, plus spurious ones) because of the noise in the signal. Near each actual zero crossing there is a multitude of noisy crossings. We are unable to tell the actual crossing from the noise-caused ones.
The true crossings are therefore isolated by (1) selecting a spectral range according to the expected temperature (by Wien's law), (2) finding all the zero crossings in that spectral range, (3) using the wavelengths where the zero crossings occur to determine the temperatures of those zero crossings, and (4) taking the mean value of the temperatures associated with all the zero crossings in the filter spectral range.
The emissivity is then determined by solving $L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\, \psi L_{\rm bb}(T_{\rm sun})$ for $\epsilon$, having determined the sun and object temperatures.
Finally, print the estimated temperatures and the errors relative to the original values used to construct the problem.
End of explanation
"""
radIn = np.loadtxt(filename, skiprows=1)
for i in range(0,len(emiss)):
    popt, pcov = curve_fit(bbfunc, radIn[:,0], radIn[:,i+1], p0=(6000., 1000., .5))
    print('\nCase {}:'.format(i+1))
    print('Estimated Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(popt[0], popt[1], popt[2]))
    print('True Tsun={:.2f} Tobj={:.2f} emis={:.4f}'.format(tempsuns[i], tempobjs[i], emiss[i]))
    print('Error Tsun={:.2f}% Tobj={:.2f}% emis={:.4f}%'.format(
        percent(popt[0], tempsuns[i]), percent(popt[1], tempobjs[i]), percent(popt[2], emiss[i])))
"""
Explanation: From the above results, it is evident that the filtered data set did not really provide a better answer: proof that filtering affects the data and the subsequent results. It can be argued that the use of a spectral filtering/selection range for each of the three cases already predetermines the final solution. Note however that the ranges were selected on the basis of the peaks evident in the input data. Taking the average of the calculated zero-crossing temperatures is somewhat brute force, but appears to work reasonably well.
The method is complex and requires some fine tuning and careful algorithmic design. It is however a 'general' solution in the sense that it will work for any scenario.
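Before moving on, the sign-change test at the heart of Method 1 can be seen in isolation on a tiny synthetic signal (the values are illustrative only):

```python
import numpy as np

# synthetic derivative signal with sign changes between samples 2-3 and 4-5
sig = np.array([3.0, 2.0, 1.0, -1.0, -2.0, 1.0])

# a sign change between neighbouring samples marks a zero crossing
crossings = np.where(np.diff(np.sign(sig)), 1, 0)
idx = np.nonzero(crossings)[0]
print(idx)   # -> [2 4]
```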
Method 2: Curve fitting to a known equation
The second method is far simpler and yields better answers. The problem can be stated in the following mathematical form:
$L_{\rm tot} = \epsilon \,L_{\rm bb}(T_{\rm obj}) + (1\,-\,\epsilon)\, \psi L_{\rm bb}(T_{\rm sun})$.
This function is implemented in the function bbfunc(). Finding the solution is then simply a matter of using the scipy.optimize.curve_fit function to find the sun temperature, object temperature and emissivity that provide the best fit. This approach is motivated by the premise that the problem statement supports a direct and accurate mathematical formulation, which can then be used to curve-fit the given data.
End of explanation
"""
# you only need to do this once
#!pip install --upgrade version_information
%load_ext version_information
%version_information numpy, scipy, matplotlib, pyradi
"""
Explanation: This method is simple and very accurate, but it does require confirmation that the measured scenarios correspond with the model used in the data analysis. It is not a general solution, but is specifically tuned to the equation used.
Python and module versions, and dates
End of explanation
"""
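As a closing note on Method 2: curve_fit also returns a covariance matrix (the pcov value, unused above); the square roots of its diagonal give one-sigma parameter uncertainties, a useful sanity check on any such fit. A sketch on synthetic data - the model and values here are illustrative assumptions, not the measurement above:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # simple two-parameter model, for illustration only
    return a * np.exp(-b * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 200)
y = model(x, 2.5, 1.3) + 0.02 * rng.normal(size=x.size)

popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))

# one-sigma uncertainties from the covariance diagonal
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(('a', 'b'), popt, perr):
    print('{} = {:.3f} +/- {:.3f}'.format(name, val, err))
```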
mldbai/mldb
container_files/tutorials/Querying Data Tutorial.ipynb
apache-2.0
from pymldb import Connection
mldb = Connection()
"""
Explanation: Querying Data Tutorial
MLDB comes with a powerful SQL-like Select Query implementation accessible via its REST API. This tutorial will show a few different ways to query data. The notebook cells below use pymldb; you can check out the Using pymldb Tutorial for more details.
End of explanation
"""
ex = mldb.put('/v1/datasets/example', {"type":"sparse.mutable"})

mldb.post('/v1/datasets/example/rows', {
    "rowName": "r1",
    "columns": [
        ["a", 1, 0],
        ["b", 2, 0]
    ]
})
mldb.post('/v1/datasets/example/rows', {
    "rowName": "r2",
    "columns": [
        ["a", 3, 0],
        ["b", 4, 0]
    ]
})
mldb.post('/v1/datasets/example/rows', {
    "rowName": "r3",
    "columns": [
        ["a", 5, 0],
        ["b", 6, 0]
    ]
})
mldb.post('/v1/datasets/example/rows', {
    "rowName": "r4",
    "columns": [
        ["a", 7, 0],
        ["b", 8, 0]
    ]
})

mldb.post('/v1/datasets/example/commit')
"""
Explanation: Creating a sample dataset
First we will create a sample dataset, much like in the Loading Data Tutorial:
End of explanation
"""
df = mldb.query("select * from example")
print(type(df))
df
"""
Explanation: Querying into a DataFrame
We can use the query() shortcut function to run queries and get the results as Pandas DataFrames.
End of explanation
"""
mldb.get('/v1/query', q="select * from example where a > 4", format="table")
"""
Explanation: Querying via REST
We can also make lower-level REST API calls to the /v1/query endpoint in the Query API for full SQL queries.
End of explanation
"""
mldb.get('/v1/query', q="select * from example where a > 4", format="aos")
"""
Explanation: We can control the format of the output JSON using the format attribute:
End of explanation
"""
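For quick client-side work, the DataFrame returned by query() can also be filtered with ordinary pandas operations; a sketch using a hand-built frame standing in for the query result (the values mirror the example dataset above):

```python
import pandas as pd

# stand-in for the result of mldb.query("select * from example")
df = pd.DataFrame({'a': [1, 3, 5, 7], 'b': [2, 4, 6, 8]},
                  index=['r1', 'r2', 'r3', 'r4'])

# pandas equivalent of: select * from example where a > 4
subset = df[df['a'] > 4]
print(subset)   # rows r3 and r4
```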