GoogleCloudPlatform/training-data-analyst | courses/machine_learning/tensorflow/c_batched.ipynb | apache-2.0

import tensorflow.compat.v1 as tf
import numpy as np
import shutil
print(tf.__version__)
"""
Explanation: <h1> 2c. Refactoring to add batching and feature-creation </h1>
In this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:
<ol>
<li> Refactor the input to read data in batches.
<li> Refactor the feature creation so that it is not one-to-one with inputs.
</ol>
The Pandas function in the previous notebook also batched, but only after it had read the whole dataset into memory -- on a large dataset, this won't be an option.
End of explanation
"""
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
  def _input_fn():
    def decode_csv(value_column):
      columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
      features = dict(zip(CSV_COLUMNS, columns))
      label = features.pop(LABEL_COLUMN)
      return features, label

    # Create list of files that match pattern
    file_list = tf.gfile.Glob(filename)

    # Create dataset from file list
    dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
    if mode == tf.estimator.ModeKeys.TRAIN:
      num_epochs = None  # repeat indefinitely
      dataset = dataset.shuffle(buffer_size = 10 * batch_size)
    else:
      num_epochs = 1  # end-of-input after this
    dataset = dataset.repeat(num_epochs).batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()
  return _input_fn

def get_train():
  return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)

def get_valid():
  return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)

def get_test():
  return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
"""
Explanation: <h2> 1. Refactor the input </h2>
Read data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API.
End of explanation
"""
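The read_dataset input function above streams records and emits fixed-size batches instead of loading the whole file like Pandas. The same idea can be sketched in plain Python (illustrative only -- the notebook itself uses tf.data; `iter_batches` is a hypothetical helper, not part of the lab):

```python
def iter_batches(lines, batch_size):
    """Yield lists of parsed CSV rows, never holding more than one batch."""
    batch = []
    for line in lines:
        batch.append(line.rstrip('\n').split(','))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final, possibly smaller, batch
        yield batch

# Five toy taxi rows; with batch_size=2 we get batches of sizes 2, 2, 1
rows = ["11.3,-73.9,40.7,-73.8,40.6,1,key%d" % i for i in range(5)]
batches = list(iter_batches(rows, batch_size=2))
```

Because each batch is discarded before the next is built, memory use is bounded by the batch size rather than the file size -- the property that makes the tf.data version viable on large datasets.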
INPUT_COLUMNS = [
    tf.feature_column.numeric_column('pickuplon'),
    tf.feature_column.numeric_column('pickuplat'),
    tf.feature_column.numeric_column('dropofflat'),
    tf.feature_column.numeric_column('dropofflon'),
    tf.feature_column.numeric_column('passengers'),
]

def add_more_features(feats):
  # Nothing to add (yet!)
  return feats

feature_cols = add_more_features(INPUT_COLUMNS)
"""
Explanation: <h2> 2. Refactor the way features are created. </h2>
For now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.
End of explanation
"""
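To make the "not one-to-one" point concrete, here is a hypothetical example of the kind of derived feature a later version of add_more_features could introduce: a straight-line trip distance computed from four raw inputs. The helper name `euclidean_distance` and the idea of using it as a feature are assumptions for illustration, not part of this lab:

```python
import math

def euclidean_distance(pickuplon, pickuplat, dropofflon, dropofflat):
    # One engineered feature derived from four raw inputs --
    # exactly the many-to-one mapping the refactoring enables.
    return math.sqrt((dropofflon - pickuplon) ** 2 +
                     (dropofflat - pickuplat) ** 2)

# Roughly 0.1 degrees of latitude between pickup and dropoff
d = euclidean_distance(-74.0, 40.7, -74.0, 40.8)
```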
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
    feature_columns = feature_cols, model_dir = OUTDIR)
model.train(input_fn = get_train(), steps = 100)
"""
Explanation: <h2> Create and train the model </h2>
Note that we train for num_steps * batch_size examples.
End of explanation
"""
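As a quick sanity check of the sentence above, the arithmetic with this notebook's defaults (100 steps, batch size 512) works out as:

```python
# Each training step consumes one batch, so the total number of
# examples seen is steps * batch_size.
train_steps = 100
batch_size = 512
examples_seen = train_steps * batch_size  # 51,200 examples
```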
def print_rmse(model, name, input_fn):
  metrics = model.evaluate(input_fn = input_fn, steps = 1)
  print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))

print_rmse(model, 'validation', get_valid())
"""
Explanation: <h3> Evaluate model </h3>
As before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.
End of explanation
"""
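The evaluation cell takes np.sqrt of the estimator's 'average_loss' metric because RMSE is just the square root of the mean squared error. A minimal sketch with made-up predictions and labels:

```python
import math

# Toy values, purely to illustrate the RMSE computation
predictions = [10.0, 12.0, 9.0]
labels = [11.0, 11.0, 10.0]

mse = sum((p - l) ** 2 for p, l in zip(predictions, labels)) / len(labels)
rmse = math.sqrt(mse)  # sqrt(1.0) = 1.0 here
```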
ES-DOC/esdoc-jupyterhub | notebooks/thu/cmip6/models/sandbox-2/aerosol.ipynb | gpl-3.0

# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
cavestruz/MLPipeline | notebooks/time_series/sample_time_series.ipynb | mit | import numpy as np
from matplotlib import pyplot as plt
from astroML.time_series import lomb_scargle, generate_damped_RW
from astroML.time_series import ACF_scargle
"""
Explanation: In this first example, we will explore a simulated lightcurve that follows a damped random walk, which is often used to model variability in the optical flux of quasars.
End of explanation
"""
tdays = np.arange(0, 1E3)
z = 2.0 # redshift
tau = 300 # damping timescale
"""
Explanation: Use the numpy.arange method to generate 1000 days of data.
End of explanation
"""
from sklearn.gaussian_process import GaussianProcess
"""
Explanation: Use the help function to figure out how to generate a dataset of this evenly spaced damped random walk over the 1000 days.
Add errors to your 1000 points using numpy.random.normal. Note that you will need 1000 draws, one centered on each actual data point, with a sigma of 0.1.
Randomly select a subsample of 200 data points from your generated dataset. This is now unevenly spaced, and will serve as your observed lightcurve.
Plot the observed lightcurve.
Use the help menu to figure out how to calculate the autocorrelation function of your lightcurve with ACF_scargle.
In this next example, we will explore data drawn from a Gaussian process.
End of explanation
"""
gp = GaussianProcess(corr='squared_exponential', theta0=0.5,
random_state=0)
"""
Explanation: Define a covariance function as the one-dimensional squared-exponential covariance function described in class. This will be a function of x1, x2, and the bandwidth h. Name this function covariance_squared_exponential.
Generate values for the x-axis as 1000 evenly points between 0 and 10 using numpy.linspace. Define a bandwidth of h=1.
Generate an output of your covariance_squared_exponential with x as x1, x[:,None] as x2, and h as the bandwidth.
Use numpy.random.multivariate_normal to generate a numpy array of the same length as your x-axis points. Each point is centered on 0 (your mean is a 1-d array of zeros), and your covariance is the output of your covariance_squared_exponential above.
Choose two values in your x-range as sample x values, and put in an array, x_sample_test. Choose a function (e.g. numpy.cos) as your example function to constrain.
Define an instance of a Gaussian process
End of explanation
"""
|
SubhankarGhosh/NetworkX | 7. Bipartite Graphs (Instructor).ipynb | mit | G = cf.load_crime_network()
G.edges(data=True)[0:5]
G.nodes(data=True)[0:10]
"""
Explanation: Introduction
Bipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.
Bipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.
Let's load a crime data bipartite graph and quickly explore it.
This bipartite network contains persons who appeared in at least one crime case as either a suspect, a victim, a witness or both a suspect and victim at the same time. A left node represents a person and a right node represents a crime. An edge between two nodes shows that the left node was involved in the crime represented by the right node.
End of explanation
"""
person_nodes = [n for n in G.nodes() if G.node[n]['bipartite'] == 'person']
pG = bipartite.projection.projected_graph(G, person_nodes)
pG.nodes(data=True)[0:5]
"""
Explanation: Projections
Bipartite graphs can be projected down to one of the projections. For example, we can generate a person-person graph from the person-crime graph, by declaring that two nodes that share a crime node are in fact joined by an edge.
Exercise
Find the bipartite projection function in the NetworkX bipartite module docs, and use it to obtain the unipartite projection of the bipartite graph.
End of explanation
"""
nodes = sorted(pG.nodes(), key=lambda x: (pG.node[x]['gender'], len(pG.neighbors(x))))
edges = pG.edges()
edgeprops = dict(alpha=0.1)
node_cmap = {0:'blue', 1:'red'}
nodecolor = [node_cmap[pG.node[n]['gender']] for n in nodes]
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
c.fig.savefig('images/crime-person.png', dpi=300)
"""
Explanation: Exercise
Try visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections.
End of explanation
"""
crime_nodes = [n for n in G.nodes() if G.node[n]['bipartite'] == 'crime']
cG = bipartite.projection.projected_graph(G, crime_nodes)
"""
Explanation: Exercise
Use a similar logic to extract crime links.
End of explanation
"""
nodes = sorted(cG.nodes(), key=lambda x: len(cG.neighbors(x)))
edges = cG.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/crime-crime.png', dpi=300)
"""
Explanation: Exercise
Can you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections.
End of explanation
"""
# Degree Centrality
bpdc = bipartite.degree_centrality(G, person_nodes)
sorted(bpdc.items(), key=lambda x: x[1], reverse=True)
"""
Explanation: Exercise
NetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis.
Try the following challenges, referring to the API documentation to help you:
Which crimes have the most number of people involved?
Which people are involved in the most number of crimes?
End of explanation
"""
|
theandygross/TCGA_differential_expression | Notebooks/Figures/Purgatory/DX_screen_figs.ipynb | mit | import NotebookImport
from Imports import *
import seaborn as sns
sns.set_context('paper',font_scale=1.5)
sns.set_style('white')
"""
Explanation: Differential Analysis
Import everything from the imports notebook. This reads in all of the expression data as well as the functions needed to analyse differential expression data.
End of explanation
"""
store = '/data_ssd/TCGA_methylation_2014_04_16.h5'
matched_meth = pd.read_hdf(store, 'matched_tn')
matched_meth = matched_meth.groupby(axis=1, level=[0,1]).first()
matched_meth.T.head(10).T.head()
"""
Explanation: matched_meth is our matched methylation data.
End of explanation
"""
matched_rna = pd.read_hdf('/data_ssd/RNASeq_2014_07_15.h5', 'matched_tn')
matched_mir = pd.read_hdf('/data_ssd/miRNASeq_2014_07_15.h5', 'matched_tn')
"""
Explanation: Read in matched Gene and miRNA expression data.
End of explanation
"""
dx_rna = binomial_test_screen(matched_rna, fc=1.)
dx_rna = dx_rna[dx_rna.num_dx > 300]
dx_rna.ix[['ADH1A','ADH1B','ADH1C']]
dx_rna.shape
dx_rna.p.rank().ix[['ADH1A','ADH1B','ADH1C']]
dx_rna.sort('p').head(10)
plt.rcParams['savefig.dpi'] = 150
fig, ax = subplots(figsize=(7.75,2.))
paired_bp_tn_split(matched_rna.ix['ADH1B'], codes, data_type='mRNA',
ax=ax)
fig.tight_layout()
fig.savefig('/cellar/users/agross/figures/ADH1B.pdf')
dx_mir = binomial_test_screen(matched_mir, fc=1.)
dx_mir = dx_mir[dx_mir.num_dx > 300]
dx_mir.sort('p').head()
paired_bp_tn_split(matched_mir.ix['hsa-mir-139'], codes, data_type='')
fig, ax = subplots(figsize=(6.5,2.))
paired_bp_tn_split(matched_mir.ix['hsa-mir-21'], codes, data_type='',
ax=ax)
fig.tight_layout()
fig.savefig('/cellar/users/agross/figures/mir21.pdf')
dx_meth = binomial_test_screen(matched_meth, fc=1.)
dx_meth = dx_meth[dx_meth.num_dx > 300]
dx_meth.sort('p').head()
paired_bp_tn_split(matched_meth.ix['cg10216717'], codes, data_type='Beta')
"""
Explanation: Run a simple screen for DX probes
Here we take the matched data and run a basic screen
fc = 1 means that there is no fold-change buffer for a gene to be considered over- or under-expressed in a patient.
If there are ties or missing data, I omit these from the test. This can produce underpowered tests with extreme test statistics but weak p-values, so I filter out all genes/probes/markers with a sample size of less than 300 patients.
End of explanation
"""
def fig_1e(ax):
draw_dist(dx_meth.frac, ax=ax, lw=2.5)
draw_dist(dx_rna.frac, ax=ax, lw=2.5, bins=200)
draw_dist(dx_mir.frac, ax=ax, lw=2.5, bins=100)
ax.set_yticks([])
ax.set_xticks([0,.5,1])
ax.set_ylabel('Density')
ax.set_xlabel('Fraction')
ax.legend(('Methylation','mRNA','miRNA'), frameon=False)
prettify_ax(ax)
return ax
#Do not import
fig, ax = subplots(1,1, figsize=(5,3))
fig_1e(ax);
"""
Explanation: We are going to want to reuse this plot, so here I'm wrapping it in a function.
End of explanation
"""
gs2 = gene_sets.ix[dx_rna.index].fillna(0)
rr = screen_feature(dx_rna.frac, rev_kruskal, gs2.T,
align=False)
fp = (1.*gene_sets.T * dx_rna.frac).T.dropna().replace(0, np.nan).mean().order()
fp.name = 'mean frac'
"""
Explanation: Pathway and Gene Annotation Analysis
End of explanation
"""
rr.ix[ti(fp > .5)].join(fp).sort('p').head()
"""
Explanation: Overexpressed pathways
End of explanation
"""
rr.ix[ti(fp < .5)].join(fp).sort('p').head()
"""
Explanation: Underexpressed pathways
End of explanation
"""
def fig_1f(ax):
v = pd.concat([dx_rna.frac,
dx_rna.frac.ix[ti(gs2['REACTOME_CELL_CYCLE_MITOTIC']>0)],
dx_rna.frac.ix[ti(gs2['KEGG_FATTY_ACID_METABOLISM']>0)]])
v1 = pd.concat([pd.Series('All Genes', dx_rna.frac.index),
pd.Series('Cell Cycle\nMitotic',
ti(gs2['REACTOME_CELL_CYCLE_MITOTIC']>0)),
pd.Series('Fatty Acid\nMetabolism',
ti(gs2['KEGG_FATTY_ACID_METABOLISM']>0))])
v1.name = ''
v.name = 'Fraction Overexpressed'
violin_plot_pandas(v1, v, ann=None, ax=ax)
prettify_ax(ax)
return ax
#Do not import
fig, ax = subplots(1,1, figsize=(4,3))
fig_1f(ax)
fig.tight_layout()
fig.savefig('/cellar/users/agross/figures/gsea.pdf')
"""
Explanation: I am following up on Fatty Acid Metabolism, as opposed to Biological Oxidations, because it has a larger effect size, although its smaller gene-set size gives it a less extreme p-value.
End of explanation
"""
os.chdir('../Methlation/')
import Setup.DX_Imports as DX
#Do not import
fig, axs = subplots(2,4, figsize=(12,5))
axs = axs.flatten()
for i,p in enumerate(DX.probe_sets.keys()):
draw_dist(dx_meth.frac, DX.probe_sets[p], ax=axs[i])
axs[i].legend().set_visible(False)
axs[i].set_yticks([])
axs[i].set_title(p)
prettify_ax(axs[i])
f_odds = pd.DataFrame({f: fisher_exact_test((dx_meth.frac - .5).abs() > .25, v)
for f,v in DX.probe_sets.iteritems()}).T
np.log2(f_odds.odds_ratio).plot(kind='bar', ax=axs[-1])
prettify_ax(axs[-1])
fig.tight_layout()
def fig_1g(ax):
lw = 2.5
draw_dist(dx_meth.frac.ix[ti(DX.probe_sets['Promoter'])], ax=ax, lw=lw)
draw_dist(dx_meth.frac.ix[ti(DX.probe_sets['CpG Island'])], ax=ax, lw=lw)
draw_dist(dx_meth.frac.ix[ti(DX.probe_sets['PRC2'])], ax=ax, lw=lw)
draw_dist(dx_meth.frac, ax=ax, colors='grey', lw=lw)
ax.set_yticks([])
ax.set_xticks([0,.5,1])
ax.set_ylabel('Density')
ax.set_xlabel('Fraction with Increased Methylation')
ax.legend(('Promoter','CpG Island','PRC2','All Probes'))
prettify_ax(ax)
return ax
plt.rcParams['savefig.dpi'] = 150
#Do not import
fig, ax = subplots(1,1, figsize=(5,3))
fig_1g(ax);
fig.tight_layout()
fig.savefig('/cellar/users/agross/figures/fx_enrichment.pdf')
"""
Explanation: I need to wrap my methylation helper functions into a package
End of explanation
"""
#Do not import
fig, axs = subplots(1,3, figsize=(15,3.5))
fig_1e(axs[0])
fig_1f(axs[1])
fig_1g(axs[2])
fig.tight_layout()
fig.savefig('/cellar/users/agross/figures/fig1_bottom.png', dpi=300)
"""
Explanation: Merge Bottom Half of Figure 1
End of explanation
"""
|
willsa14/ras2las | data/kgs/DownloadLogs_v1.ipynb | mit | elogs = pd.read_csv('temp/ks_elog_scans.txt', parse_dates=True)
lases = pd.read_csv('temp/ks_las_files.txt', parse_dates=True)
elogs_mask = elogs['KID'].isin(lases['KGS_ID']) # Create mask for elogs
both_elog = elogs[elogs_mask] # select items elog that fall in both
both_elog_unique = both_elog.drop_duplicates('KID') # remove duplicates
print('How many logs fall in both and have unique KGS_ID? '+str(both_elog_unique.shape[0]))
both_elog_unique_new = both_elog_unique.loc['2000-1-1' : '2017-1-1']
both_elog_unique_new['KID']
lases_mask = lases['KGS_ID'].isin(elogs['KID']) # Create mask for elogs
both_lases = lases[lases_mask] # select las entries that fall in both
both_lases_unique = both_lases.drop_duplicates('KGS_ID') # remove duplicates
print('Other direction -- how many logs fall in both and have unique KGS_ID? '+str(both_lases_unique.shape[0]))
if both_elog_unique.shape[0] == both_lases_unique.shape[0]:
print('Same in both directions.')
elogs_hasdup_bool = elogs['KID'].isin(elogs[elogs.duplicated('KID')]['KID'])
elogs_nodup = elogs[~elogs_hasdup_bool]
elogs_nodup.shape # How many logs have no duplicate?
elogs_nodup.drop_duplicates('KID').shape == elogs_nodup.shape
lases_hasdup_bool = lases['KGS_ID'].isin(lases[lases.duplicated('KGS_ID')]['KGS_ID'])
lases_nodup = lases[~lases_hasdup_bool]
lases_nodup.shape # How many logs have no duplicate?
# lases_nodup.drop_duplicates('KGS_ID').shape == lases_nodup.shape
"""
Explanation: elogs is the collection of KGS e-log TIFF scans; lases is the collection of KGS .las files.
End of explanation
"""
elogs_nodup_mask = elogs_nodup['KID'].isin(lases_nodup['KGS_ID']) # Create mask for elogs
both_elog_nodup = elogs_nodup[elogs_nodup_mask] # select items elog that fall in both
print('How many logs fall in both and have unique KGS_ID? '+str(both_elog_nodup.shape[0]))
lases_nodup_mask = lases_nodup['KGS_ID'].isin(elogs_nodup['KID']) # Create mask for elogs
both_lases_nodup = lases_nodup[lases_nodup_mask] # select items elog that fall in both
print('From other direction -- how many logs fall in both and have unique KGS_ID? '+str(both_lases_nodup.shape[0]))
"""
Explanation: Trying again after filtering out any wells that have duplicate logs
End of explanation
"""
both_elog_nodup.loc['1980-1-1' : '2017-1-1'].shape
"""
Explanation: Select logs from 1980 onward
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cccr-iitm/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Whether the boundary layer turbulence scheme uses a counter-gradient term
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
from __future__ import print_function # for backward compatibility with Py2
import steps.model as smodel
import steps.geom as sgeom
import steps.rng as srng
import steps.solver as ssolver
import steps.utilities.meshio as meshio
import numpy
import math
import time
from random import *
"""
Explanation: Simulating Membrane Potential
The simulation scripts described in this chapter are available at STEPS_Example repository.
This chapter introduces the concept of simulating the electric potential
across a membrane in STEPS using a method that calculates electric potentials on tetrahedral meshes called 'E-Field' (see Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI: 10.3389/fncom.2013.00129).
We'll be introduced to new objects that
represent phenomena linked to the membrane potential simulation,
such as voltage-dependent channel transitions and currents across the membrane. We will
look at an example based on a very widely-used
model in computational neuroscience, the classical Hodgkin-Huxley model of the
action-potential, in molecular form. To demonstrate some useful techniques for
spatial simulations we will model action potential propagation in a simple mesh. As with previous chapters,
we will briefly introduce the model, then go through Python code used to run the
model in STEPS, with thorough descriptions where necessary.
We will start with spatial stochastic simulation in solvers 'Tetexact' (steps.solver.Tetexact) and 'TetOpSplit' (steps.solver.TetOpSplit), then discuss what modifications are necessary to run the
equivalent spatial deterministic solution in solver 'TetODE' (steps.solver.TetODE).
Markov gating scheme
While many readers may not be familiar with the conversion of the classical Hodgkin-Huxley (HH)
model to a Markov gating scheme, we will give only a brief description here; there are many
sources a reader may consult for a more detailed treatment (for example Hille B. Gating Mechanisms: Kinetic Thinking. In Ion Channels of Excitable Membranes, 3rd ed. Sinauer Associates, Sunderland, MA: 2001:583-589).
In brief, continuous conductances are converted to a population of individual channels (each with a single-channel
conductance of typically 20pS), and each individual channel may exist in one of a number of
states, with first-order transitions between states occurring at defined rates. Certain assumptions,
such as that the rate constants do not depend on the history of the system (a Markov process),
and the simplification that states with the same number of 'open' and 'closed' gates behave
identically regardless of specific configuration, lead to the gating schemes shown in the two figures below
for the HH potassium and sodium channels respectively.
In this representation the potassium channel is described by 4 gates, each of which may be in an open or closed configuration. State n3, for example, means that any 3 of the 4 gates are in the open state. When all 4 gates are open (state n4) the channel may conduct a current; all other states are non-conducting.
The sodium channel is represented by 8 possible states, of which the m3h1 state is the conducting state.
The transition rates ($a_n$, $b_n$ for the potassium channel - $a_m$, $b_m$, $a_h$, $b_h$ for the sodium channel)
should be very familiar to anyone well-acquainted with the HH model:
\begin{equation}
a_n = \frac{0.01\times(10-(V+65))}{\exp\left(\frac{10-(V+65)}{10}\right)-1}
\end{equation}
\begin{equation}
b_n = 0.125\exp\left(\frac{-(V+65)}{80}\right)
\end{equation}
\begin{equation}
a_m = \frac{0.1\times(25-(V+65))}{\exp\left(\frac{25-(V+65)}{10}\right)-1}
\end{equation}
\begin{equation}
b_m = 4\exp\left(\frac{-(V+65)}{18}\right)
\end{equation}
\begin{equation}
a_h = 0.07\exp\left(\frac{-(V+65)}{20}\right)
\end{equation}
\begin{equation}
b_h = \frac{1}{\exp\left(\frac{30-(V+65)}{10}\right)+1}
\end{equation}
Where V is the potential across the membrane (in millivolts). Modelled as a stochastic process where each state is discretely populated, these functions form the basis of the propensity functions for each possible transition at any given voltage (here units are per millisecond). Voltage continuously changes during simulation, yet over a short period of time the change is small enough so that the transition rates may be considered constant and stochastic algorithms applied. The transition rates must then be updated when the voltage change becomes large enough to merit a reevaluation of these functions.
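To make this concrete, here is a minimal sketch (not part of the tutorial script) of a Gillespie-style simulation of a single potassium channel's gating at a fixed voltage, using the $a_n$ and $b_n$ rates above with the temperature factor omitted; state k means k of the 4 gates are open. In STEPS the same propensities would be re-evaluated whenever the voltage changes appreciably.

```python
import math
import random

def a_n(mV):
    # HH potassium gate opening rate, /ms (temperature factor omitted)
    return 0.01 * (10 - (mV + 65.0)) / (math.exp((10 - (mV + 65.0)) / 10.0) - 1)

def b_n(mV):
    # HH potassium gate closing rate, /ms
    return 0.125 * math.exp(-(mV + 65.0) / 80.0)

def ssa_k_channel(V_mV=-65.0, t_end_ms=50.0, seed=1):
    """Gillespie simulation of one K channel; state = number of open gates (0-4)."""
    rng = random.Random(seed)
    state, t = 0, 0.0
    while True:
        a_open = (4 - state) * a_n(V_mV)   # propensity to open one more gate
        a_close = state * b_n(V_mV)        # propensity to close one gate
        a_tot = a_open + a_close
        t += rng.expovariate(a_tot)        # exponential waiting time, ms
        if t >= t_end_ms:
            return state
        state += 1 if rng.random() * a_tot < a_open else -1

print(ssa_k_channel())   # one of 0..4; the voltage is held constant here
```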
Modelling solution
Organisation of code
As in previous chapters we will go through, line by line, the code from a script
used to run this simulation in STEPS, but this time without using the command-prompt style.
Readers should note that the indentation shown in these examples may differ from that of the actual
Python code; since indentation is significant in Python, take care when copying code.
The first thing to do is to import modules from STEPS that we need to run the simulation,
and assign them shorter names to reduce typing (for example smodel refers to steps.model).
In addition we will make use of modules numpy, math, time and random to assist with the simulation:
End of explanation
"""
# Potassium single-channel conductance
K_G = 20.0e-12 # Siemens
# Potassium channel density
K_ro = 18.0e12 # per square meter
# Potassium reversal potential
K_rev = -77e-3 # volts
"""
Explanation: Next we define some parameters for the simulation, which are intended to remain constant throughout
the script. We start with the potassium channel and define the single-channel conductance, channel
density and reversal potential, keeping to a maximal conductance of 0.036 S/cm2 (see Simulation with Tetexact for more on converting continuous conductance to discrete conductance):
End of explanation
"""
# Sodium single-channel conductance
Na_G = 20.0e-12 # Siemens
# Sodium channel density
Na_ro = 60.0e12 # per square meter
# Sodium reversal potential
Na_rev = 50e-3 # volts
"""
Explanation: The first thing to note is that, as usual in STEPS, units are s.i., which means in the above example the single channel conductance is given in Siemens and
the reversal potential for the ohmic current is in volts.
Similarly, we define parameters for the sodium channel, also choosing a single-channel conductance
of 20pS:
End of explanation
"""
# Leak single-channel conductance
L_G = 0.3e-12 # Siemens
# Leak density
L_ro = 10.0e12 # per square meter
# Leak reversal potential
leak_rev = -54.4e-3 # volts
"""
Explanation: The HH model also includes a leak conductance, which may also be discretised (although another option is to use solver function steps.solver.Tetexact.setMembRes). The overall conductance is
small compared to maximal potassium and sodium conductances, but we choose a similar channel density to give
a good spatial spread of the conductance, which means a fairly low single-channel conductance:
End of explanation
"""
# A table of potassium channel population factors:
# n0, n1, n2, n3, n4
K_facs = [ 0.21768, 0.40513, 0.28093, 0.08647, 0.00979 ]
# A table of sodium channel population factors
# m0h0, m1h0, m2h0, m3h0, m0h1, m1h1, m2h1, m3h1:
Na_facs = [ 0.34412, 0.05733, 0.00327, 6.0e-05,\
0.50558, 0.08504, 0.00449, 0.00010 ]
"""
Explanation: The next parameters require a little explanation. Taking the potassium conductance as an example, the
potassium density will convert to a discrete number of channels that will give (approximately) our intended
maximal conductance of 0.036 S/$cm^2$. In the molecular sense, this means that if all potassium channels
are in the 'open' conducting state then we will reach the maximal conductance. However, each
individual channel can in fact be in any one of 5 states (including the conducting state; see figure above), and these states are
described by separate objects in the STEPS simulation (as we will see later), where the sum of the populations of the states should
equal the total number of channels. For example, if the surface of the mesh is 100 square microns
then by the above density we expect a total of 1800 potassium channels in the simulation, but at some time
we might have e.g. 400 in the n0 state, 700 in the n1 state, 500 in the n2 state, 150 in the n3 state
and 50 in the conducting n4 state; the total at any time will equal 1800.
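As a sanity check (a sketch, not part of the tutorial script), a few lines confirm both the 1800-channel figure and that these densities reproduce the classical HH maximal conductances; the 100 square-micron area is the illustrative value from the text:

```python
# Densities and single-channel conductances defined earlier in the script
K_G, K_ro = 20.0e-12, 18.0e12      # conductance (S), density (/m^2)
Na_G, Na_ro = 20.0e-12, 60.0e12

area = 100.0e-12                   # 100 square microns, in m^2 (text example)
n_K = K_ro * area                  # expected number of K channels
g_K_max = K_ro * K_G               # maximal K conductance density, S/m^2
g_Na_max = Na_ro * Na_G

print(n_K)                  # 1800 channels
print(g_K_max / 1.0e4)      # 0.036 S/cm^2 (1 m^2 = 1e4 cm^2)
print(g_Na_max / 1.0e4)     # 0.12 S/cm^2, the classical HH value
```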
So we intend to initialise our populations of channel states to some starting value. The details of how to
calculate the initial condition will not be given here, but the factors used here are steady-state approximations for
the HH model at an initial potential of -65mV. We then give a table of fractional channel state populations (which
add up to a value of 1). For each channel state the factor multiplied by the channel density and the surface area
of the mesh will give our initial population of channels in that state:
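The factors themselves are not derived in the text; one plausible reconstruction (a sketch, under the assumption that each gate independently reaches its steady state at -65 mV) takes the open probability of a single n-gate as $a_n/(a_n+b_n)$ and distributes the four gates binomially, which lands close to the K_facs table:

```python
import math

V = -65.0                                   # resting potential, mV
a_n = 0.01 * (10 - (V + 65.0)) / (math.exp((10 - (V + 65.0)) / 10.0) - 1)
b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
p = a_n / (a_n + b_n)                       # steady-state open probability of one gate

# Binomial distribution over 4 independent gates -> n0..n4 fractions
facs = [math.comb(4, k) * p**k * (1 - p)**(4 - k) for k in range(5)]
print([round(f, 5) for f in facs])          # close to the K_facs table above
```

The small discrepancies from the listed values suggest minor rounding in the original calculation, but the fractions sum to 1 by construction.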
End of explanation
"""
# Temperature for gating kinetics
celsius = 20.0
# Current clamp
Iclamp = 50.0e-12 # amps
# Voltage range for gating kinetics in Volts
Vrange = [-100.0e-3, 50e-3, 1e-4]
"""
Explanation: We now define some more important parameters for our simulation. The first is the temperature assumed for
the gating kinetics, which we give in degrees Celsius; it is not passed to the simulator directly
but enters through the rate functions (as we will see). The second is the amplitude of a current clamp that we will apply at one end of the mesh. The third is the
voltage range for the simulation. These parameters will all be discussed in more detail later:
End of explanation
"""
# The number of simulation time-points
N_timepoints = 41
# The simulation dt
DT_sim = 1.0e-4 # seconds
"""
Explanation: Finally we set some simulation control parameters, the number of 'time-points' to run and
the 'time-step' at which we will record data. So we will run for 4ms in increments of 0.1ms:
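A quick check of the arithmetic (a sketch, not part of the script): 41 time-points spaced 0.1 ms apart span 4 ms, since the first time-point is at t = 0:

```python
N_timepoints = 41
DT_sim = 1.0e-4                      # seconds
duration = (N_timepoints - 1) * DT_sim
print(duration)                      # 0.004 s, i.e. 4 ms
```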
End of explanation
"""
mdl = smodel.Model()
ssys = smodel.Surfsys('ssys', mdl)
"""
Explanation: Model specification
We move on to the biochemical model description. This is quite different from previous chapters, with
new objects to look at, which are important building blocks of any simulation that includes
voltage-dependent processes in STEPS.
To start, we create a Model container object (steps.model.Model) and one surface system
(steps.model.Surfsys), with no volume system necessary for this relatively simple model:
End of explanation
"""
# Potassium channel
K = smodel.Chan('K', mdl)
K_n0 = smodel.ChanState('K_n0', mdl, K)
K_n1 = smodel.ChanState('K_n1', mdl, K)
K_n2 = smodel.ChanState('K_n2', mdl, K)
K_n3 = smodel.ChanState('K_n3', mdl, K)
K_n4 = smodel.ChanState('K_n4', mdl, K)
"""
Explanation: To make our potassium, sodium and leak channels we need to use two new objects. The steps.model.ChanState
objects are used to describe each separate channel state, and steps.model.Chan objects group a set of
channel states together to form a channel. At present the role of Channel objects (steps.model.Chan)
is mainly conceptual and not functional, with the ChannelState objects (steps.model.ChanState)
playing the important roles in simulation: for example, voltage-dependent transitions occur between channel states
and a channel current object is associated with a channel state, both of which we will see later. As discussed in Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI: 10.3389/fncom.2013.00129, Channel states also include
the same functionality as steps.model.Spec objects and so can interact with other molecules and diffuse on a surface
or in a volume; however, there is no example of that functionality in this model.
The code to create the potassium channel looks like this:
End of explanation
"""
Na = smodel.Chan('Na', mdl)
Na_m0h0 = smodel.ChanState('Na_m0h0', mdl, Na)
Na_m1h0 = smodel.ChanState('Na_m1h0', mdl, Na)
Na_m2h0 = smodel.ChanState('Na_m2h0', mdl, Na)
Na_m3h0 = smodel.ChanState('Na_m3h0', mdl, Na)
Na_m0h1 = smodel.ChanState('Na_m0h1', mdl, Na)
Na_m1h1 = smodel.ChanState('Na_m1h1', mdl, Na)
Na_m2h1 = smodel.ChanState('Na_m2h1', mdl, Na)
Na_m3h1 = smodel.ChanState('Na_m3h1', mdl, Na)
"""
Explanation: steps.model.ChanState object construction looks quite similar to that for steps.model.Spec objects,
with the difference that, as well as the usual string identifier and steps.model.Model container object
arguments, the constructor also expects to see a reference to a steps.model.Chan object that conceptually
groups the channel states together. It is obvious to see here which channel configuration each
state is intended to represent in this model.
Similarly we create the sodium channel objects:
End of explanation
"""
# Leak channel
L = smodel.Chan('L', mdl)
Leak = smodel.ChanState('Leak', mdl, L)
"""
Explanation: and also the leak channel objects, which only exist in conducting state:
End of explanation
"""
# Temperature dependence
thi = math.pow(3.0, ((celsius-6.3)/10.0))
_a_n = lambda mV: thi*((0.01*(10-(mV+65.))/(math.exp((10-(mV+65.))/10.)-1)))
_b_n = lambda mV: thi*((0.125*math.exp(-(mV+65.)/80.)))
_a_m = lambda mV: thi*((0.1*(25-(mV+65.))/(math.exp((25-(mV+65.))/10.)-1)))
_b_m = lambda mV: thi*((4.*math.exp(-(mV+65.)/18.)))
_a_h = lambda mV: thi*((0.07*math.exp(-(mV+65.)/20.)))
_b_h = lambda mV: thi*((1./(math.exp((30-(mV+65.))/10.)+1)))
"""
Explanation: We move on to describing the transitions between channel states. Firstly, we describe the transition rates
in the model, as given in Markov gating scheme, and we do so using lambda expressions, which are
a shorthand way to define function objects in Python. Any callable may be used here (as will be
explained later), so we could just as easily use the more familiar def syntax if we wanted to. We also introduce
temperature dependence, using the previously defined celsius variable to find thi at 20 degrees Celsius:
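The temperature correction here is the usual Q10 = 3 scaling relative to the original HH experimental temperature of 6.3 degrees Celsius; a quick sketch of its value at 20 degrees:

```python
import math

celsius = 20.0
thi = math.pow(3.0, (celsius - 6.3) / 10.0)
print(thi)   # ~4.5: all transition rates run about 4.5x faster than at 6.3 C
```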
End of explanation
"""
Kn0n1 = smodel.VDepSReac('Kn0n1', ssys, slhs = [K_n0], srhs = [K_n1], \
k=lambda V: 1.0e3 *4.*_a_n(V*1.0e3), vrange = [-100.0e-3, 50e-3, 1e-4])
"""
Explanation: We should bear in mind that these functions will expect a voltage to be given in units of millivolts, and
will return the transition rate in units of /ms.
To define voltage-dependent channel transitions we use a new STEPS object, the 'Voltage-dependent surface reaction'
(steps.model.VDepSReac). This object may be used to define any reaction in STEPS that is voltage-dependent, which
often involves 1st-order voltage-dependent transitions between different channel states, but also supports
higher order interactions which may include interactions between volume-diffusing molecules and surface-bound molecules
and thus allows modelling of, for example, voltage-dependent channel block. Because all
of these processes are only permitted to occur on a surface and not in a volume, we choose the term
'voltage-dependent surface reaction'.
The syntax of creating this object, therefore, shares similarities with steps.model.SReac, but with some
important differences. Let's look at a first example:
End of explanation
"""
k = lambda V: 1.0e3 *4.*_a_n(V*1.0e3)
"""
Explanation: The first few arguments to the steps.model.VDepSReac constructor are identical to those for
steps.model.SReac: in order, a string-identifier is required (which must be unique amongst all objects of the
same type), a reference to a steps.model.Surfsys object, a list of reactants- the 'left-hand side' arguments
(which may exist in the 'inner' volume, the surface, or the 'outer' volume, but not in both volumes) and a list of products- the 'right-hand side' arguments. The syntax up to this point follows exactly as described for
steps.model.SReac in Surface-Volume Reactions (Example: IP3 Model), with one noteworthy difference: now the reactants and products may be
steps.model.ChanState objects, as well as steps.model.Spec objects, or a mixture of both. Indeed,
in the context of reactions in STEPS (voltage-dependent or otherwise) steps.model.ChanState objects
behave exactly like steps.model.Spec objects, with the only difference between the two being that
steps.model.ChanState objects support additional functionality, namely the ability to conduct current, as we
will see later.
The other arguments, keyword arguments k and vrange require some explanation. The macroscopic reaction 'constant' is
of course now not a constant at all, but instead depends on voltage. To describe the voltage-dependence we pass
a function to argument k which returns the reaction rate as a function of voltage. We tell STEPS to evaluate this
function over a voltage range, which we choose so as to easily cover all voltages we expect the membrane potential to
reach during the simulation. As with other reaction objects, all units are specified as s.i. units, with the exception of
higher-order reactions which are based on Molar units. Since this is a 1st-order reaction we must ensure that the
function passed to the k argument returns in units of /second over the range of potentials passed in units of Volts.
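A quick sketch (not part of the script) confirming this unit conversion for the 1st-order rate: the lambda receives volts, converts to millivolts for _a_n, and scales the /ms result by 1e3 to give /s. The temperature factor is omitted here for clarity:

```python
import math

# HH rate function as defined earlier: mV in, /ms out (thi omitted)
_a_n = lambda mV: 0.01 * (10 - (mV + 65.0)) / (math.exp((10 - (mV + 65.0)) / 10.0) - 1)

# The rate function passed to the k argument: volts in, /s out
k = lambda V: 1.0e3 * 4.0 * _a_n(V * 1.0e3)

rate_per_ms = 4.0 * _a_n(-65.0)      # /ms, at the resting potential
rate_per_s = k(-65.0e-3)             # /s, the same quantity
print(rate_per_ms, rate_per_s)       # the second is 1000x the first
```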
Since this particular voltage-dependent surface reaction object is clearly intended to model the forward n0 to n1
transition, as shown in the figure above, we require a factor of 4 to be applied to the _a_n function to cover each possible 0 to 1 transition. To achieve this our
function to k is:
End of explanation
"""
vrange = [-100.0e-3, 50e-3, 1e-4]
"""
Explanation: where the unit conversions should be clear (recall _a_n expects an argument in mV units, and returns /ms).
The vrange argument specifies the voltage range over which to evaluate the rate function, as a Python sequence in the order:
minimum voltage, maximum voltage, voltage-step. We should choose the voltage range to cover
what we expect from the simulation, but not by too much, since a smaller range gives faster performance, and the voltage-step
should be chosen so that linear interpolation between voltage-points introduces only a small error. It is very important
to note that if, during a simulation, the membrane potential goes outside the voltage range of any voltage-dependent surface
reaction object located in that membrane, the simulation will fail.
In our example we choose a voltage range of -100mV to +50mV, and tell STEPS to evaluate the voltage every 0.1mV, so
the vrange argument is:
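The internal mechanism is not spelled out in the text, but the idea is a precomputed lookup table with linear interpolation between sample points; a sketch of that idea (an assumption about the implementation, using numpy and an illustrative rate function f) looks like this:

```python
import numpy as np

vmin, vmax, vstep = -100.0e-3, 50e-3, 1e-4        # the Vrange used above

volts = np.arange(vmin, vmax + vstep / 2, vstep)  # ~1501 sample points
# Illustrative rate function: volts in, /s out (shape of b_n, for example)
f = lambda V: 0.125 * np.exp(-(V * 1e3 + 65.0) / 80.0) * 1e3

table = f(volts)                                  # precomputed lookup table
# Linear interpolation between table points approximates f at any voltage
v_query = -63.27e-3
approx = np.interp(v_query, volts, table)
print(abs(approx - f(v_query)))                   # small interpolation error
```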
End of explanation
"""
Kn1n2 = smodel.VDepSReac('Kn1n2', ssys, slhs = [K_n1], srhs = [K_n2], \
k=lambda V: 1.0e3 *3.*_a_n(V*1.0e3), vrange = Vrange)
Kn2n3 = smodel.VDepSReac('Kn2n3', ssys, slhs = [K_n2], srhs = [K_n3], \
k=lambda V: 1.0e3 *2.*_a_n(V*1.0e3), vrange = Vrange)
Kn3n4 = smodel.VDepSReac('Kn3n4', ssys, slhs = [K_n3], srhs = [K_n4], \
k=lambda V: 1.0e3 *1.*_a_n(V*1.0e3), vrange = Vrange)
Kn4n3 = smodel.VDepSReac('Kn4n3', ssys, slhs = [K_n4], srhs = [K_n3], \
k=lambda V: 1.0e3 *4.*_b_n(V*1.0e3), vrange = Vrange)
Kn3n2 = smodel.VDepSReac('Kn3n2', ssys, slhs = [K_n3], srhs = [K_n2], \
k=lambda V: 1.0e3 *3.*_b_n(V*1.0e3), vrange = Vrange)
Kn2n1 = smodel.VDepSReac('Kn2n1', ssys, slhs = [K_n2], srhs = [K_n1], \
k=lambda V: 1.0e3 *2.*_b_n(V*1.0e3), vrange = Vrange)
Kn1n0 = smodel.VDepSReac('Kn1n0', ssys, slhs = [K_n1], srhs = [K_n0], \
k=lambda V: 1.0e3 *1.*_b_n(V*1.0e3), vrange = Vrange)
"""
Explanation: In the 'Kn0n1' example the sequence of voltages was given directly to the vrange argument, but in fact at the beginning
of our script we defined a voltage range as the list Vrange, which we pass to all subsequent VDepSReac objects we create in
this script. The rest of the voltage-dependent channel transitions for the potassium channel are:
End of explanation
"""
Na_m0h1_m1h1 = smodel.VDepSReac('Na_m0h1_m1h1', ssys, \
slhs=[Na_m0h1], srhs=[Na_m1h1], \
k=lambda V:1.0e3*3.*_a_m(V*1.0e3), vrange=Vrange)
Na_m1h1_m2h1 = smodel.VDepSReac('Na_m1h1_m2h1', ssys, \
slhs=[Na_m1h1], srhs=[Na_m2h1], \
k=lambda V:1.0e3*2.*_a_m(V*1.0e3), vrange=Vrange)
Na_m2h1_m3h1 = smodel.VDepSReac('Na_m2h1_m3h1', ssys, \
slhs=[Na_m2h1], srhs=[Na_m3h1], \
k=lambda V:1.0e3*1.*_a_m(V*1.0e3), vrange=Vrange)
Na_m3h1_m2h1 = smodel.VDepSReac('Na_m3h1_m2h1', ssys, \
slhs=[Na_m3h1], srhs=[Na_m2h1], \
k=lambda V:1.0e3*3.*_b_m(V*1.0e3), vrange=Vrange)
Na_m2h1_m1h1 = smodel.VDepSReac('Na_m2h1_m1h1', ssys, \
slhs=[Na_m2h1], srhs=[Na_m1h1], \
k=lambda V:1.0e3*2.*_b_m(V*1.0e3), vrange=Vrange)
Na_m1h1_m0h1 = smodel.VDepSReac('Na_m1h1_m0h1', ssys, \
slhs=[Na_m1h1], srhs=[Na_m0h1], \
k=lambda V:1.0e3*1.*_b_m(V*1.0e3), vrange=Vrange)
Na_m0h0_m1h0 = smodel.VDepSReac('Na_m0h0_m1h0', ssys, \
slhs=[Na_m0h0], srhs=[Na_m1h0], \
k=lambda V:1.0e3*3.*_a_m(V*1.0e3), vrange=Vrange)
Na_m1h0_m2h0 = smodel.VDepSReac('Na_m1h0_m2h0', ssys, \
slhs=[Na_m1h0], srhs=[Na_m2h0], \
k=lambda V:1.0e3*2.*_a_m(V*1.0e3), vrange=Vrange)
Na_m2h0_m3h0 = smodel.VDepSReac('Na_m2h0_m3h0', ssys, \
slhs=[Na_m2h0], srhs=[Na_m3h0], \
k=lambda V:1.0e3*1.*_a_m(V*1.0e3), vrange=Vrange)
Na_m3h0_m2h0 = smodel.VDepSReac('Na_m3h0_m2h0', ssys, \
slhs=[Na_m3h0], srhs=[Na_m2h0], \
k=lambda V:1.0e3*3.*_b_m(V*1.0e3), vrange=Vrange)
Na_m2h0_m1h0 = smodel.VDepSReac('Na_m2h0_m1h0', ssys, \
slhs=[Na_m2h0], srhs=[Na_m1h0], \
k=lambda V:1.0e3*2.*_b_m(V*1.0e3), vrange=Vrange)
Na_m1h0_m0h0 = smodel.VDepSReac('Na_m1h0_m0h0', ssys, \
slhs=[Na_m1h0], srhs=[Na_m0h0], \
k=lambda V:1.0e3*1.*_b_m(V*1.0e3), vrange=Vrange)
Na_m0h0_m0h1 = smodel.VDepSReac('Na_m0h0_m0h1', ssys, \
slhs=[Na_m0h0], srhs=[Na_m0h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m1h0_m1h1 = smodel.VDepSReac('Na_m1h0_m1h1', ssys, \
slhs=[Na_m1h0], srhs=[Na_m1h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m2h0_m2h1 = smodel.VDepSReac('Na_m2h0_m2h1', ssys, \
slhs=[Na_m2h0], srhs=[Na_m2h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m3h0_m3h1 = smodel.VDepSReac('Na_m3h0_m3h1', ssys, \
slhs=[Na_m3h0], srhs=[Na_m3h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m0h1_m0h0 = smodel.VDepSReac('Na_m0h1_m0h0', ssys, \
slhs=[Na_m0h1], srhs=[Na_m0h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Na_m1h1_m1h0 = smodel.VDepSReac('Na_m1h1_m1h0', ssys, \
slhs=[Na_m1h1], srhs=[Na_m1h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Na_m2h1_m2h0 = smodel.VDepSReac('Na_m2h1_m2h0', ssys, \
slhs=[Na_m2h1], srhs=[Na_m2h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Na_m3h1_m3h0 = smodel.VDepSReac('Na_m3h1_m3h0', ssys, \
slhs=[Na_m3h1], srhs=[Na_m3h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
"""
Explanation: The voltage-dependent surface reactions for the Sodium channel follow. Since there are 20 different possible
transitions (see figure above) we need to create 20 steps.model.VDepSReac objects:
End of explanation
"""
OC_K = smodel.OhmicCurr('OC_K', ssys, chanstate=K_n4, g=K_G, erev=K_rev)
OC_Na = smodel.OhmicCurr('OC_Na', ssys, chanstate=Na_m3h1, g=Na_G, erev=Na_rev)
OC_L = smodel.OhmicCurr('OC_L', ssys, chanstate=Leak, g=L_G, erev=leak_rev)
"""
Explanation: The final part of our model specification is to add currents. Presently in STEPS we have the choice of two types of current that have quite different behaviour: Ohmic currents- which are represented by steps.model.OhmicCurr objects- and currents based on the GHK flux equation- represented by steps.model.GHKcurr objects. Since the Hodgkin-Huxley model utilises Ohmic currents we only need to concern ourselves with those objects here.
The assumption made in STEPS is that Ohmic current objects are used to model currents of ions that play no important role in the system other than in membrane excitability, and so it is not necessary, in this example, to add sodium and potassium ions diffusing both extra- and intra-cellularly. Because of the relatively large concentrations of these ions, simulating their diffusion would slow the simulation enormously with no perceptible benefit to accuracy. For these reasons an Ohmic current in STEPS does not result in transport of ions between compartments. The GHK current objects are able to model ion transport and so should always be used when modelling currents of important signalling ions, a good example of which for many systems is calcium.
Because STEPS is primarily a discrete simulator, the current objects in STEPS are based on single-channel currents. A steps.model.OhmicCurr applied to a specific steps.model.ChanState object will result, at any given time, in an Ohmic current through every channel in that state located in the Membrane (which we will create later). Therefore, to create an Ohmic current in STEPS we need to specify which channel state the current applies to, along with the single-channel conductance and the reversal potential. As usual in STEPS all units are s.i. units, so the single-channel conductance unit is Siemens and the reversal potential unit is volts.
The steps.model.OhmicCurr constructor expects 5 arguments: a string identifier (as usual in STEPS this must be unique amongst other Ohmic current objects), a reference to a steps.model.Surfsys object, a reference to a steps.model.ChanState to which this current applies (chanstate argument), a single-channel conductance (g argument), and a reversal potential (erev argument). At the top of our script we already defined conductance and reversal potential for all of our channels in this simulation, i.e. the potassium single-channel conductance K_G = 20.0e-12 Siemens and reversal potential K_rev = -77e-3 volts, the sodium single-channel conductance Na_G = 20.0e-12 Siemens and reversal potential Na_rev = 50e-3 volts, the leak single-channel conductance L_G = 0.3e-12 Siemens and reversal potential leak_rev = -54.4e-3 volts, so we use these values when creating the Ohmic current objects. The conducting states of the potassium, sodium and leak currents respectively are K_n4, Na_m3h1 and Leak:
End of explanation
"""
mesh = meshio.importAbaqus('meshes/axon_cube_L1000um_D443nm_equiv0.5_19087tets.inp', 1e-6)[0]
"""
Explanation: Now, in the STEPS simulation, whenever the number of potassium channels in state K_n4 is non-zero a potassium conductance will exist, equal to the population of the K_n4 channel state multiplied by the single-channel conductance, and a current will be calculated from the local voltage relative to the given reversal potential.
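For example (a sketch with illustrative numbers, not values from the script): with a hypothetical 50 channels in the K_n4 state in some patch of membrane at a local potential of -20 mV, the summed single-channel currents give:

```python
K_G = 20.0e-12     # single-channel conductance, S
K_rev = -77e-3     # reversal potential, V

n_open = 50        # hypothetical K_n4 population (illustrative)
V = -20e-3         # hypothetical local membrane potential, V

I = n_open * K_G * (V - K_rev)   # Ohm's law summed over open channels
print(I)           # ~5.7e-11 A, i.e. about 57 pA of outward K+ current
```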
Geometry specification
With the model completed we move on to geometry specification. To simulate action potential propagation we'll demonstrate the rather unusual case of using a long cuboid mesh, whereas other simulators may typically assume cylindrical geometry. This is partly to demonstrate that the only restriction on geometry used for the membrane potential calculation in STEPS is that it can be represented by a tetrahedral mesh. Since tetrahedral meshes are capable of representing real cellular geometry with high accuracy this opens up many interesting applications, yet for this example we'll stick with a rather basic shape. As in previous sections we'll import a mesh in Abaqus format, which represents a cuboid of length 1000µm in the z-axis, and a diameter of 0.44µm (an equivalent cylindrical diameter of 0.5µm) in the x and y axes (as shown in the figure below):
End of explanation
"""
# Find the vertices for the current clamp and store in a list
injverts = []
for i in range(mesh.nverts):
if ((mesh.getVertex(i)[2] < (mesh.getBoundMin()[2]+0.1e-6))):
injverts.append(i)
print("Found ", len(injverts), "I_inject vertices")
facetris = []
for i in range(mesh.ntris):
tri = mesh.getTri(i)
if ((tri[0] in injverts) and (tri[1] in injverts) and (tri[2] in injverts)):
facetris.append(i)
print("Found ", len(facetris), "triangles on bottom face")
"""
Explanation: In the figure above we show a portion of the tetrahedral mesh representing a cuboid of length 1000µm oriented along the z-axis.
The following section of code will not be explained in detail, but simply serves two purposes. Firstly, to find the vertices at one end of the cuboid at which a current pulse will be applied (stored in list injverts); since the long axis of the cuboid lies along the z-axis, these will be the minimum-z vertices. Secondly, to find the corresponding triangles on that face (stored in list facetris), which will be excluded from the membrane since this end is intended to be an 'open' end:
End of explanation
"""
memb_tris = list(mesh.getSurfTris())
# Remove triangles on bottom face from membrane triangles
for t in facetris: memb_tris.remove(t)
"""
Explanation: Now we will use a mesh function to find all the triangles on the surface of the mesh and exclude those on the bottom face:
End of explanation
"""
# Bin the surface triangles for recording current
bins_n = 100
memb_tris_binned = [None]*bins_n
mtb_area = numpy.zeros(bins_n)
# In m
bin_dz = 1000.0e-6/bins_n
# The centre positions of the bins
bin_pos = numpy.arange((bin_dz/2.0), 1000e-6, bin_dz)
for m in range(bins_n): memb_tris_binned[m]=[]
# Bin the triangles
for t in memb_tris:
barycz = mesh.getTriBarycenter(t)[2]
idx = 0
for p in bin_pos:
if (barycz >= p-(bin_dz/2.0) and barycz < p+(bin_dz/2.0)):
memb_tris_binned[idx].append(t)
mtb_area[idx]+=(mesh.getTriArea(t)*1.0e12)
break
idx +=1
"""
Explanation: The following section of code, which will also not be described in full detail, simply serves to bin the surface triangles by distance along the z-axis and to store the total area of the bins, which will be used later in the script to convert recorded current to a current density (current per unit area):
End of explanation
"""
# The points along the (z) axis at which to record potential
pot_pos = numpy.arange(mesh.getBoundMin()[2], mesh.getBoundMax()[2], 10e-6)
pot_n = len(pot_pos)
pot_tet = numpy.zeros(pot_n, dtype = 'uint')
for i, p in enumerate(pot_pos):
    # Axis is aligned with z-axis
    pot_tet[i] = mesh.findTetByPoint([0.0, 0.0, p])
"""
Explanation: The final piece of geometry manipulation is to find a point at every 10µm along the z-axis at which to record potential. In STEPS it is possible to record potential anywhere in the membrane or conduction volume and from vertices, triangles and tetrahedrons. Here we intend to record the potential at intracellular tetrahedrons along the centre of the cuboid, and so find their indices and store in numpy array pot_tet:
End of explanation
"""
# Create cytosol compartment
cyto = sgeom.TmComp('cyto', mesh, range(mesh.ntets))
# Create the patch and associate with surface system 'ssys'
patch = sgeom.TmPatch('patch', mesh, memb_tris, cyto)
patch.addSurfsys('ssys')
"""
Explanation: Now, much like in previous chapters, we will create a compartment which simply consists of all tetrahedrons in the mesh, and a surface patch which consists of all surface triangles (except those on the minimum z face), which we found earlier and stored in list memb_tris:
End of explanation
"""
# Create the membrane across which the potential will be solved
membrane = sgeom.Memb('membrane', mesh, [patch], opt_method = 1)
"""
Explanation: And now we create a new and very important object for the membrane potential calculation, the 'membrane' itself. The membrane object, steps.geom.Memb, simply consists of one or more patch objects which must together form one continuous surface, although the membrane may be 'open' or 'closed' ('closed' means all member triangles are directly connected to 3 other membrane triangles and so form a closed surface, and 'open' means some triangles have fewer than 3 neighbours and so the surface contains holes). Any channels that exist in the patch(es) that comprise(s) the membrane are available to conduct a current (specified by steps.model.OhmicCurr or steps.model.GHKcurr objects). The INNER compartment(s) to the membrane patches will comprise the 'conduction volume' representing the intracellular region. The potential at all vertices in the membrane and conduction volume will be calculated and will vary with any channel, capacitive or externally applied currents, relative to the (earthed) extracellular region.
Where the extracellular space is included in simulations the membrane may be comprised of internal mesh triangles, but for this relatively simple model the membrane is formed from triangles on the surface of the mesh and is comprised of only one patch. This patch contains an inner compartment consisting of all tetrahedrons in the mesh, which will form the conduction volume. So we create the membrane:
End of explanation
"""
# Create the random number generator
r = srng.create('mt19937',512)
r.initialize(int(time.time()%10000))
"""
Explanation: The steps.geom.Memb constructor requires a string identifier argument and a reference to a steps.geom.Tetmesh object, plus a list of the composite steps.geom.TmPatch objects (here there is only one), and finally an optional argument named opt_method. This allows the choice of a method for optimization of the ordering of vertices in the membrane and conduction volume, which is essential to produce an efficient calculation, as discussed in Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI: 10.3389/fncom.2013.00129. Two methods are presently available: 1) a fast ordering of vertices by their position along the principal axis, which is suitable if one axis is much longer than another (as is the case here) and 2) a slower breadth-first tree iteration, which produces a similar result to method (1) in cable-like structures but offers a significant improvement to simulation efficiency in complex geometries. Although the initial search for (2) can be slow it is possible to save an optimisation in a file for a specific membrane with solver function steps.solver.Tetexact.saveMembOpt, and this optimisation file can then be supplied as an argument to the steps.geom.Memb constructor, so each optimisation for any given membrane need only be found once. However, since this example uses a cable-like mesh we can use the faster principal-axis ordering method, though method (2) is recommended when working with complex, realistic geometries.
There is also an optional boolean argument verify, which defaults to False, but if True will verify that the membrane is a suitable surface for the potential calculation- although this verification can take rather a long time for larger meshes, so should only be used when one is not confident in the suitability of the membrane.
Simulation with Tetexact
As always for a stochastic simulation in STEPS, we create the random number generator and provide a random initial seed based on the current time, here with 10,000 possible unique values:
End of explanation
"""
# Create solver object
sim = ssolver.Tetexact(mdl, mesh, r, True)
"""
Explanation: And with our model, geometry and random number generator created we are ready to create the solver object. The membrane potential calculation in STEPS is an extension to the steps.solver.Tetexact and steps.solver.TetODE solvers, and creating the solver is much like in previous mesh-based examples, with arguments to the constructor of a steps.model.Model object, a steps.geom.Tetmesh object and a steps.rng.RNG object in that order, plus a simple boolean flag that switches on the membrane potential calculation when set to True (and defaults to False):
End of explanation
"""
surfarea = sim.getPatchArea('patch')
"""
Explanation: If requested to perform the membrane potential calculation (with the boolean argument set to True) a Tetexact solver requires one (and currently only one) steps.geom.Memb to exist within the geometry description, and will therefore fail to be created if such an object does not exist.
With the steps.solver.Tetexact solver successfully created, with the membrane potential calculation included, it is time to set the simulation initial conditions. Much like in previous examples, this requires injecting molecules into a specific location. In this case we wish to inject a number of molecules represented by steps.model.ChanState objects in the model description into the membrane surface represented by a steps.geom.TmPatch object in the geometry description. As we will see, at the solver stage the Channel State objects behave just like Species objects and any solver method previously used for Species objects may be used for Channel State objects, such as steps.solver.Tetexact.setPatchCount, steps.solver.Tetexact.setCompConc and so on.
At this point we should pause to look at how to specify conductance in STEPS models. Conductance in STEPS comes from steps.model.OhmicCurr objects, which provide a single-channel conductance that will be applied to any Channel State molecule to which that conductance in mapped. For example, recall in this model that we created an Ohmic Current called OC_K to represent the potassium current in the simulation, which will apply to Channel State K_n4, with a single-channel conductance of 20 pS and reversal potential of -77mV, with this statement:
OC_K = smodel.OhmicCurr('OC_K', ssys, chanstate=K_n4, g=20.0e-12, erev=-77e-3)
The overall potassium conductance in the simulation at any time will be equal to the number of K_n4 Channel States in existence multiplied by the single-channel conductance, with a maximum conductance equal to the highest possible number of K_n4 Channel States (the total number of potassium channels).
Other simulators may use different methods from STEPS to specify conductance, and many modellers may be more comfortable working with conductance per unit area, so some care should be taken with the conversion for STEPS models. This typically involves multiplying conductance per unit area by the membrane area to find overall conductance, then injecting the correct amount of channels into the membrane in STEPS to represent this conductance, depending on the single-channel conductance. Since the conducting channels are discrete in STEPS there may be a small discrepancy from the continuous value.
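That conversion can be sketched in a few lines. The conductance figures below are this model's potassium values; the membrane area is an assumed value, purely for illustration:

```python
# Converting conductance per unit area into a discrete channel count.
# g values are this model's potassium figures; memb_area is an assumed value.
g_per_area = 0.036e4        # 0.036 S/cm^2 expressed as S/m^2
g_single = 20.0e-12         # single-channel conductance: 20 pS
memb_area = 2.0e-11         # assumed membrane area in m^2

density = g_per_area / g_single           # channels per m^2 (recovers K_ro)
n_channels = round(density * memb_area)   # discrete count to inject
print(density, n_channels)
```

Note that the recovered density is 1.8e13 per square metre, i.e. the 18 channels per square micron specified as K_ro above.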
Recall we have specified potassium channel density, K_ro, as 18 per square micron and sodium channel density, Na_ro, as 60 per square micron, previously in our script with statements:
K_ro = 18.0e12 # per square meter
Na_ro = 60.0e12 # per square meter
which, when multiplied by the single-channel conductances, give a maximum potassium conductance of 0.036 Siemens per square cm and a maximum sodium conductance of 0.120 Siemens per square cm. So when injecting our channels in STEPS we simply need to multiply these densities by the surface area of the membrane to find the number to inject. An added complication for this model is that we want to inject steady-state initial conditions, so all channel states have some initial non-zero proportion, which we specified previously in the lists K_facs and Na_facs (we will not go into the derivation of the steady-state factors here).
So to inject our channels, first we find the membrane surface area, which is the same as the area of its only constituent patch:
End of explanation
"""
sim.setPatchCount('patch', 'Na_m0h0', Na_ro*surfarea*Na_facs[0])
sim.setPatchCount('patch', 'Na_m1h0', Na_ro*surfarea*Na_facs[1])
sim.setPatchCount('patch', 'Na_m2h0', Na_ro*surfarea*Na_facs[2])
sim.setPatchCount('patch', 'Na_m3h0', Na_ro*surfarea*Na_facs[3])
sim.setPatchCount('patch', 'Na_m0h1', Na_ro*surfarea*Na_facs[4])
sim.setPatchCount('patch', 'Na_m1h1', Na_ro*surfarea*Na_facs[5])
sim.setPatchCount('patch', 'Na_m2h1', Na_ro*surfarea*Na_facs[6])
sim.setPatchCount('patch', 'Na_m3h1', Na_ro*surfarea*Na_facs[7])
sim.setPatchCount('patch', 'K_n0', K_ro*surfarea*K_facs[0])
sim.setPatchCount('patch', 'K_n1', K_ro*surfarea*K_facs[1])
sim.setPatchCount('patch', 'K_n2', K_ro*surfarea*K_facs[2])
sim.setPatchCount('patch', 'K_n3', K_ro*surfarea*K_facs[3])
sim.setPatchCount('patch', 'K_n4', K_ro*surfarea*K_facs[4])
sim.setPatchCount('patch', 'Leak', L_ro * surfarea)
"""
Explanation: And call solver method steps.solver.Tetexact.setPatchCount for every Channel State in the model (including leak) to set the initial number:
End of explanation
"""
# Set dt for membrane potential calculation to 0.01ms
sim.setEfieldDT(1.0e-5)
"""
Explanation: One example run of the above code resulted in potassium Channel State populations of 3135, 5834, 4046, 1245 and 141 respectively giving an initial potassium conductance (from K_n4) of 2.8nS (0.00035 Siemens per square cm) and maximum conductance of 288nS (0.036 Siemens per square cm) as desired.
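Those quoted figures are easy to verify with a couple of lines, using the example run's channel-state counts from the text:

```python
# Verifying the example numbers quoted above: channel-state counts from one
# run, at 20 pS per open channel.
g_single = 20.0e-12                     # S, single-channel conductance
counts = [3135, 5834, 4046, 1245, 141]  # K_n0 .. K_n4 populations

g_init = counts[-1] * g_single   # only K_n4 conducts: ~2.8 nS
g_max = sum(counts) * g_single   # all channels open: ~288 nS
print(g_init, g_max)
```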
The next few lines of code set some important new simulation variables, all to do with the membrane potential calculation. The first function (steps.solver.Tetexact.setEfieldDT) sets the time-step period for the potential calculation, specified in seconds. This tells STEPS how often to perform the 'E-Field' calculation to evaluate potential, and update any voltage-dependent processes in the simulation. The optimal value for this time-step will vary for different simulations, so some things should be kept in mind when making the choice. Firstly, the time-step should be short enough that the voltage change occurring during each time-step is small and voltage can be assumed constant during each time-step for any voltage-dependent processes in the model; a large time-step may result in loss of accuracy. Secondly, the shorter the time-step the slower the simulation will be. Thirdly, the time-step must be shorter than or equal to the simulation time-step (0.1ms in our model) so that at least one membrane potential calculation can be carried out per simulation time-step. As a rough guide 0.01ms is usually highly accurate, and it is not recommended to exceed 0.1ms. So for this simulation we choose a calculation time-step of 0.01ms (which happens to be the default value):
End of explanation
"""
# Initialise potential to -65mV
sim.setMembPotential('membrane', -65e-3)
"""
Explanation: Now we set the initial potential of the membrane with function steps.solver.Tetexact.setMembPotential with an argument given in volts:
End of explanation
"""
# Set capacitance of the membrane to 1 uF/cm^2 = 0.01 F/m^2
sim.setMembCapac('membrane', 1.0e-2)
"""
Explanation: Which also happens to be the default.
And we set the specific capacitance of the membrane, in units of Farad per square meter, with function
steps.solver.Tetexact.setMembCapac. So for 1 microFarad per square cm this is 0.01 (which is also the default setting):
End of explanation
"""
# Set resistivity of the conduction volume to 100 ohm.cm = 1 ohm.meter
sim.setMembVolRes('membrane', 1.0)
"""
Explanation: Finally we set bulk resistivity, which is actually a property of the conduction volume encompassed by the membrane. We use function
steps.solver.Tetexact.setMembVolRes. STEPS expects units of ohm.metre here:
End of explanation
"""
# Set the current clamp
niverts = len(injverts)
for t in injverts:
sim.setVertIClamp(t, Iclamp/niverts)
"""
Explanation: Again, this is in fact the default setting.
The last condition to set is something that will remain unchanged throughout our simulation in this example, which is a constant current injection at one end of the long cubic geometry. This will have an effect of inducing action potentials at the depolarised end, which will then propagate, and a constant current at the correct level will ensure a train of action potentials. In STEPS it is possible to inject current to any node in the conduction volume or membrane with solver method steps.solver.Tetexact.setVertIClamp, or any membrane triangle (where current will be shared equally between its 3 nodes) with solver method steps.solver.Tetexact.setTriIClamp. Here, we have already found the vertices at one end of the geometry, the minimum z end, and stored them in list injverts. We now wish to set the current clamp for each of these vertices as a share of the 50pA current we have already defined in variable Iclamp. Note: STEPS maintains the convention that the effect of a positive applied current is to make potential more positive, which is the opposite signing convention to channel currents.
End of explanation
"""
# Create result structures
res = numpy.zeros((N_timepoints, pot_n))
res_I_Na = numpy.zeros((N_timepoints, bins_n))
res_I_K = numpy.zeros((N_timepoints, bins_n))
"""
Explanation: The current clamp set will remain in existence throughout the simulation, until we specify otherwise.
Just before running the simulation we need to create empty data structures, much like in previous chapters. Here we intend to record potential, along with sodium and potassium currents, by the 10µm bins we previously arranged:
End of explanation
"""
# Run the simulation
for l in range(N_timepoints):
if l%10 == 0:
print ("Tpnt: ", l)
sim.run(DT_sim*l)
# Loop through membrane triangle bins and record sodium and potassium currents
for b in range(bins_n):
for mt in memb_tris_binned[b]:
res_I_Na[l,b]+= sim.getTriOhmicI(mt, 'OC_Na')*1.0e12
res_I_K[l,b]+= sim.getTriOhmicI(mt, 'OC_K')*1.0e12
res_I_Na[l,b]/=mtb_area[b]
res_I_K[l,b]/=mtb_area[b]
# Loop through central tetrahedrons and record potential
for p in range(pot_n):
res[l,p] = sim.getTetV(int(pot_tet[p]))*1.0e3
"""
Explanation: So finally we are ready to run the simulation. We will use some new methods to record information from the simulation: steps.solver.Tetexact.getTriOhmicI to record the current from membrane triangles and steps.solver.Tetexact.getTetV to record potential from tetrahedrons within the conduction volume. At every time-point we will use information found in the geometry section to loop over the binned membrane triangles (every 10µm along the z-axis) and record current, then loop over an array of tetrahedral indices to record potential from one central tetrahedron at every 10µm along the z-axis:
End of explanation
"""
results = (res, pot_pos, res_I_Na, res_I_K, bin_pos)
"""
Explanation: If we want to put all this code into a function, we should return the tuple
End of explanation
"""
%matplotlib inline
from pylab import *
"""
Explanation: Plotting simulation output
We begin by importing some matplotlib plotting functions:
End of explanation
"""
tpnt = arange(0.0, N_timepoints*DT_sim, DT_sim)
"""
Explanation: Now we create an array of 'time-points' to be used in the plots:
End of explanation
"""
def plotVz(tidx):
if (tidx >= tpnt.size):
print('Time index out of range')
return
plot(results[1]*1e6, results[0][tidx],\
label=str(1e3*tidx*DT_sim)+'ms', linewidth=3)
legend(numpoints=1)
xlim(0, 1000)
ylim(-80,40)
xlabel('Z-axis (um)')
ylabel('Membrane potential (mV)')
"""
Explanation: And create two functions: one to plot the potential along the z-axis at a given 'time-point':
End of explanation
"""
def plotIz(tidx, plotstyles = ['-', '--']):
if (tidx >= tpnt.size):
print('Time index out of range')
return
plot(results[4]*1e6, results[2][tidx], plotstyles[0],\
label = 'Na: '+str(1e3*tidx*DT_sim)+'ms', linewidth=3)
plot(results[4]*1e6, results[3][tidx], plotstyles[1],\
label = 'K: '+str(1e3*tidx*DT_sim)+'ms', linewidth=3)
legend(loc='best')
xlim(0, 1000)
ylim(-10, 15)
xlabel('Z-axis (um)')
ylabel('Current (pA/um^2)')
"""
Explanation: and another to plot the sodium and potassium currents (separately) along the z-axis at a given 'time-point':
End of explanation
"""
figure(figsize=(12,7))
plotVz(10)
plotVz(20)
plotVz(30)
show()
"""
Explanation: Finally, with the simulation finished we can use the plotting functions to plot the potential along the z-axis at 1ms, 2ms and 3ms:
End of explanation
"""
figure(figsize=(12,7))
plotIz(10)
plotIz(20)
plotIz(30)
show()
"""
Explanation: And to plot the membrane currents along the z-axis also at 1ms, 2ms and 3ms:
End of explanation
"""
import steps.mpi
import steps.mpi.solver as mpi_solver
import steps.utilities.geom_decompose as gd
"""
Explanation: Simulation with TetOpSplit
The approximate spatial stochastic solver steps.mpi.solver.TetOpSplit, which runs in parallel, also supports the membrane potential calculation. The solver is described in detail in a separate chapter, so its usage is only described briefly here.
Usage is similar to that for Tetexact; however, we must import some different STEPS modules:
End of explanation
"""
tet_hosts = gd.binTetsByAxis(mesh, steps.mpi.nhosts)
tri_hosts = gd.partitionTris(mesh, tet_hosts, mesh.getSurfTris())
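The effect of axis-based partitioning can be illustrated with a small NumPy toy. The coordinates below are hypothetical, invented for illustration; the real gd.binTetsByAxis of course operates on the mesh itself:

```python
# Toy illustration of axis-based partitioning: indices are binned by their
# position along one axis, one bin per MPI host.
import numpy as np

nhosts = 4
z = np.array([0.1, 0.9, 0.35, 0.6, 0.05, 0.75])   # hypothetical tet z-coords
edges = np.linspace(z.min(), z.max(), nhosts + 1)
hosts = np.clip(np.searchsorted(edges, z, side='right') - 1, 0, nhosts - 1)
print(hosts)   # each tetrahedron's assigned host index
```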
"""
Explanation: We must also partition the mesh along the axis based on the number of MPI hosts:
End of explanation
"""
sim = mpi_solver.TetOpSplit(mdl, mesh, r, True, tet_hosts, tri_hosts)
"""
Explanation: Now we can create the steps.mpi.solver.TetOpSplit solver object, passing the partitioning information
as well as the usual arguments:
End of explanation
"""
sim.setPatchCount('patch', 'Na_m0h0', Na_ro*surfarea*Na_facs[0])
sim.setPatchCount('patch', 'Na_m1h0', Na_ro*surfarea*Na_facs[1])
sim.setPatchCount('patch', 'Na_m2h0', Na_ro*surfarea*Na_facs[2])
sim.setPatchCount('patch', 'Na_m3h0', Na_ro*surfarea*Na_facs[3])
sim.setPatchCount('patch', 'Na_m0h1', Na_ro*surfarea*Na_facs[4])
sim.setPatchCount('patch', 'Na_m1h1', Na_ro*surfarea*Na_facs[5])
sim.setPatchCount('patch', 'Na_m2h1', Na_ro*surfarea*Na_facs[6])
sim.setPatchCount('patch', 'Na_m3h1', Na_ro*surfarea*Na_facs[7])
sim.setPatchCount('patch', 'K_n0', K_ro*surfarea*K_facs[0])
sim.setPatchCount('patch', 'K_n1', K_ro*surfarea*K_facs[1])
sim.setPatchCount('patch', 'K_n2', K_ro*surfarea*K_facs[2])
sim.setPatchCount('patch', 'K_n3', K_ro*surfarea*K_facs[3])
sim.setPatchCount('patch', 'K_n4', K_ro*surfarea*K_facs[4])
sim.setPatchCount('patch', 'Leak', L_ro * surfarea)
sim.setEfieldDT(1.0e-5)
sim.setMembPotential('membrane', -65e-3)
sim.setMembCapac('membrane', 1.0e-2)
sim.setMembVolRes('membrane', 1.0)
# Set the current clamp
niverts = len(injverts)
for t in injverts:
sim.setVertIClamp(t, Iclamp/niverts)
# Create result structures
res = numpy.zeros((N_timepoints, pot_n))
res_I_Na = numpy.zeros((N_timepoints, bins_n))
res_I_K = numpy.zeros((N_timepoints, bins_n))
# Run the simulation
for l in range(N_timepoints):
if steps.mpi.rank ==0:
if l%10 == 0:
print ("Tpnt: ", l)
sim.run(DT_sim*l)
if steps.mpi.rank ==0:
for p in range(pot_n):
res[l,p] = sim.getTetV(int(pot_tet[p]))*1.0e3
"""
Explanation: This time we only record voltage from the simulation due to the reduced functionality of the solver at present:
End of explanation
"""
if steps.mpi.rank ==0:
results = (res, pot_pos)
tpnt = arange(0.0, N_timepoints*DT_sim, DT_sim)
figure(figsize=(12,7))
for tidx in (10,20,30,40):
plot(results[1]*1e6, results[0][tidx], \
label=str(1e3*tidx*DT_sim)+'ms', linewidth=3)
legend(numpoints=1)
xlim(0, 1000)
ylim(-80,40)
xlabel('Z-axis (um)')
ylabel('Membrane potential (mV)')
show()
"""
Explanation: And simply plot the data:
End of explanation
"""
# The number of simulation time-points
N_timepoints = 401
# The simulation dt, now also the E-Field dt
DT_sim = 1.0e-5 # seconds
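A quick arithmetic check that these constants still reach the 4 ms end-time used in the earlier sections (the end-time is taken from the text; counting includes the time-point at t = 0):

```python
# With the E-Field time-step tied to the simulation time-step of 0.01 ms,
# reaching 4 ms needs 401 time-points, counting t = 0.
endtime = 4.0e-3                  # seconds, end-time from earlier sections
DT_sim = 1.0e-5                   # seconds, simulation (and E-Field) dt
expected_timepoints = int(round(endtime / DT_sim)) + 1
print(expected_timepoints)
```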
"""
Explanation: Assuming that all this script was written in a file named HH_APprop_tetopsplit.py, to run from the command line with 4 MPI processes we should use
mpirun -n 4 python HH_APprop_tetopsplit.py
Simulation with TetODE
The spatial deterministic solver steps.solver.TetODE, which was introduced in Simulating Diffusion on Surfaces, is also available for membrane potential simulations.
As discussed in Simulating Diffusion on Surfaces, simulations in TetODE share model and geometry construction with solver Tetexact, with a few
differences to the solver to run deterministic simulations, such as the possibility of setting tolerance levels. Coupling with the membrane potential solution introduces some new considerations. Firstly,
since reaction-diffusion solutions are solved in CVODE this reduces the possibilities of coupling the reaction-diffusion simulation with the membrane
potential calculation. As discussed in Simulating Diffusion on Surfaces, a call to steps.solver.TetODE.run hands control to CVODE until the specified
endtime during which there can be no communication with the membrane potential calculation. For this reason function setEfieldDT is not supported in
TetODE: rather the E-Field time-step is implicitly taken as the simulation time step. E.g. one E-Field calculation will be performed every time the STEPS simulation
is advanced with a call to steps.solver.TetODE.run. Therefore, in this model, to achieve an E-Field calculation time-step of 0.01ms we need to change
constant DT_sim to $10^{-5}$, which will also of course change how often we record data, so we need to also change constant N_timepoints to 401
to ensure we run the simulation to 4ms. If we create a new script called 'HH_APprop_tetode.py' to run the deterministic simulation then, compared to 'HH_APprop.py'
we need to change the following constants:
End of explanation
"""
sim = ssolver.TetODE(mdl, mesh, r, True)
"""
Explanation: And we create a TetODE solver object instead of a Tetexact solver:
End of explanation
"""
# sim.setEfieldDT(1.0e-5)
"""
Explanation: And remove the unsupported function:
End of explanation
"""
sim.setPatchCount('patch', 'Na_m0h0', Na_ro*surfarea*Na_facs[0])
sim.setPatchCount('patch', 'Na_m1h0', Na_ro*surfarea*Na_facs[1])
sim.setPatchCount('patch', 'Na_m2h0', Na_ro*surfarea*Na_facs[2])
sim.setPatchCount('patch', 'Na_m3h0', Na_ro*surfarea*Na_facs[3])
sim.setPatchCount('patch', 'Na_m0h1', Na_ro*surfarea*Na_facs[4])
sim.setPatchCount('patch', 'Na_m1h1', Na_ro*surfarea*Na_facs[5])
sim.setPatchCount('patch', 'Na_m2h1', Na_ro*surfarea*Na_facs[6])
sim.setPatchCount('patch', 'Na_m3h1', Na_ro*surfarea*Na_facs[7])
sim.setPatchCount('patch', 'K_n0', K_ro*surfarea*K_facs[0])
sim.setPatchCount('patch', 'K_n1', K_ro*surfarea*K_facs[1])
sim.setPatchCount('patch', 'K_n2', K_ro*surfarea*K_facs[2])
sim.setPatchCount('patch', 'K_n3', K_ro*surfarea*K_facs[3])
sim.setPatchCount('patch', 'K_n4', K_ro*surfarea*K_facs[4])
sim.setPatchCount('patch', 'Leak', L_ro * surfarea)
sim.setMembPotential('membrane', -65e-3)
sim.setMembCapac('membrane', 1.0e-2)
sim.setMembVolRes('membrane', 1.0)
# Set the current clamp
niverts = len(injverts)
for t in injverts:
sim.setVertIClamp(t, Iclamp/niverts)
# Create result structures
res = numpy.zeros((N_timepoints, pot_n))
# Run the simulation
for l in range(N_timepoints):
    if l % 100 == 0:
        print("Tpnt: ", l)
    sim.run(DT_sim*l)
    for p in range(pot_n):
        res[l,p] = sim.getTetV(int(pot_tet[p]))*1.0e3
"""
Explanation: Finally, since it is unfortunately not possible to record information about the spatial currents in TetODE (functions such as getTriOhmicI are not supported),
we remove anything to do with recording the Na and K currents, which makes our simulation loop rather simple:
End of explanation
"""
results = (res, pot_pos)
"""
Explanation: And now we return only the information related to the recordings of the spatial membrane potential:
End of explanation
"""
figure(figsize=(12,7))
for tidx in (100,200,300):
plot(results[1]*1e6, results[0][tidx], \
label=str(1e3*tidx*DT_sim)+'ms', linewidth=3)
legend(numpoints=1)
xlim(0, 1000)
ylim(-80,40)
xlabel('Z-axis (um)')
ylabel('Membrane potential (mV)')
show()
"""
Explanation: We can finally plot the results as follows
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
"""
Explanation: Week 10 - Dimensionality reduction and clustering
Learning Objectives
List options available for dimensionality reduction in scikit-learn
Discuss different clustering algorithms
Demonstrate clustering in scikit-learn
End of explanation
"""
from sklearn.feature_selection import VarianceThreshold
X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]])
print(X)
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
X_selected = sel.fit_transform(X)
print(X_selected)
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
iris = load_iris()
X, y = iris.data, iris.target
print(X.shape)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_new.shape)
"""
Explanation: Dimensionality reduction
Many types of data can contain a massive number of features. Whether this is individual pixels in images, transcripts or proteins in -omics data or word occurrances in text data this bounty of features can bring with it several challenges.
Visualizing more than 4 dimensions directly is difficult complicating our data analysis and exploration. In machine learning models we run the risk of overfitting to the data and having a model that does not generalize to new observations. There are two main approaches to handling this situation:
Identify important features and discard less important features
Transform the data into a lower dimensional space
Identify important features
Feature selection can be used to choose the most informative features. This can improve the performance of subsequent models, reduce overfitting and have practical advantages when the model is ready to be utilized. For example, RT-qPCR on a small number of transcripts will be faster and cheaper than RNAseq, and similarly targeted mass spectrometry such as MRM on a limited number of proteins will be cheaper, faster and more accurate than data independent acquisition mass spectrometry.
There are a variety of approaches for feature selection:
Remove uninformative features (same value for all, or nearly all, samples)
Remove features that perform poorly at the task when used alone
Iteratively remove the weakest features from a model until the desired number is reached
End of explanation
"""
from sklearn import linear_model
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE object and rank each pixel
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
rfe = RFE(estimator=clf, n_features_to_select=16, step=1) #rfe, recursive feature elimination
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
rfe.ranking_
"""
Explanation: When iteratively removing weak features the choice of model is important. We will discuss the different models available for regression and classification next week but there are a few points relevant to feature selection we will cover here.
A linear model is a useful and easily interpreted model, and when used for feature selection L1 regularization should be used. L1 regularization penalizes large coefficients based on their absolute values. This favors a sparse model with weak features having coefficients close to zero. In contrast, L2 regularization penalizes large coefficients based on their squared value, and this has a tendency to favor many small coefficients rather than a smaller set of larger coefficients.
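The practical difference between the two penalties is easy to see on synthetic data. The sketch below uses assumed data and alpha values, with Lasso standing in for L1 and Ridge for L2:

```python
# L1 (Lasso) zeroes out weak coefficients, giving a sparse model;
# L2 (Ridge) shrinks them but typically leaves them non-zero.
# Data and alpha values are assumed for illustration.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = 3 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.randn(200)  # 2 informative features

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

print('L1 zeroed coefficients:', int((lasso.coef_ == 0).sum()))
print('L2 zeroed coefficients:', int((ridge.coef_ == 0).sum()))
```

The eight uninformative features should come out exactly zero under the L1 penalty, while the L2 penalty merely shrinks them.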
End of explanation
"""
from sklearn.linear_model import RandomizedLogisticRegression
randomized_logistic = RandomizedLogisticRegression()
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
randomized_logistic.fit(X, y)
ranking = randomized_logistic.scores_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking)
plt.colorbar()
plt.title("Ranking of pixels")
plt.show()
"""
Explanation: The disadvantage with L1 regularization is that if multiple features are correlated only one of them will have a high coefficient.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100)
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
clf.fit(X, y)
ranking = clf.feature_importances_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking)
plt.colorbar()
plt.title("Ranking of pixels")
plt.show()
"""
Explanation: Also important is to normalize the means and variances of the features before comparing the coefficients. The approaches we covered last week are crucial for feature selection from a linear model.
A limitation of linear models is that any interactions must be hand coded. A feature that is poorly predictive overall may actually be very powerful but only in a limited subgroup. This might be missed in a linear model when we would prefer to keep the feature.
Any model exposing a coef_ or feature_importances_ attribute can be used with the SelectFromModel class for feature selection. Forests of randomized decision trees handle interactions well and unlike some of the other models do not require careful tuning of parameters to achieve reasonable performance.
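A brief sketch of SelectFromModel with a forest, using the iris data to stay self-contained. The threshold is left at its default (the mean feature importance), which is an assumption; real use would tune it:

```python
# SelectFromModel keeps features whose importance exceeds a threshold
# (by default the mean importance of the fitted estimator).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_iris(return_X_y=True)
selector = SelectFromModel(RandomForestClassifier(n_estimators=50, random_state=0))
X_reduced = selector.fit_transform(X, y)
print(X.shape, '->', X_reduced.shape)
```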
End of explanation
"""
# http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_lda.html#example-decomposition-plot-pca-vs-lda-py
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each component
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
plt.figure()
for c, i, target_name in zip("rgb", [0, 1, 2], target_names):
plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name)
plt.legend(scatterpoints=1)
plt.title('PCA of IRIS dataset')
plt.figure()
for c, i, target_name in zip("rgb", [0, 1, 2], target_names):
plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], c=c, label=target_name)
plt.legend(scatterpoints=1)
plt.title('LDA of IRIS dataset')
plt.show()
"""
Explanation: Transformation into lower dimensional space
An alternative approach is to transform the data in such a way that the variance observed in the features is maintained while only using a smaller number of dimensions. This approach includes all the features so is not a simpler model when considering the entire process from data acquisition onwards. It can however improve the performance of subsequent algorithms and is a very popular approach for visualization and the initial data analysis phase of a project.
The classical method is Principal Components Analysis although other algorithms are also available. Given a set of features usually some will be at least weakly correlated. By performing an orthogonal transformation a reduced number of features that are uncorrelated can be chosen that maintains as much of the variation in the original data as possible.
An orthogonal transformation simply means that the data is rotated and reflected about the axes.
Scikit-learn has several different implementations of PCA available together with other techniques performing a similar function.
Most of these techniques are unsupervised. Linear Discriminant Analysis (LDA) is one algorithm that can include labels and will attempt to create features that account for the greatest amount of variance between classes.
End of explanation
"""
# Exercise 1
import sklearn.datasets
faces = sklearn.datasets.fetch_olivetti_faces()
# Load the olivetti faces dataset
X = faces.data
y = faces.target
plt.matshow(X[0].reshape((64,64)))
plt.colorbar()
plt.title("face1")
plt.show()
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X, y)
ranking = clf.feature_importances_.reshape(faces.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking.reshape(64,64))
plt.colorbar()
plt.title("ranking")
plt.show()
p75 = np.percentile(ranking, 75)
mask = ranking > p75
# Plot pixel ranking
plt.matshow(mask.reshape(64,64))
plt.colorbar()
plt.title("mask")
plt.show()
# apply pca and lda to digits
digits = datasets.load_digits()
X = digits.data
y = digits.target
target_names = digits.target_names
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each component
print('explained variance ratio (first two components): %s'
      % str(pca.explained_variance_ratio_))
colors = plt.cm.rainbow(np.linspace(0, 1, len(target_names)))
plt.figure()
for c, i, target_name in zip(colors, range(len(target_names)), target_names):
    plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name)
plt.legend(scatterpoints=1)
plt.title('PCA of digits dataset')
plt.figure()
for c, i, target_name in zip(colors, range(len(target_names)), target_names):
    plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], c=c, label=target_name)
plt.legend(scatterpoints=1)
plt.title('LDA of digits dataset')
plt.show()
"""
Explanation: Exercises
Apply feature selection to the Olivetti faces dataset, identifying the most important 25% of features.
Apply PCA and LDA to the digits dataset used above
End of explanation
"""
import matplotlib
import matplotlib.pyplot as plt
from skimage.data import camera
from skimage.filters import threshold_otsu
matplotlib.rcParams['font.size'] = 9
image = camera()
thresh = threshold_otsu(image)
binary = image > thresh
#fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(8, 2.5))
fig = plt.figure(figsize=(8, 2.5))
ax1 = plt.subplot(1, 3, 1, adjustable='box-forced')
ax2 = plt.subplot(1, 3, 2)
ax3 = plt.subplot(1, 3, 3, sharex=ax1, sharey=ax1, adjustable='box-forced')
ax1.imshow(image, cmap=plt.cm.gray)
ax1.set_title('Original')
ax1.axis('off')
ax2.hist(image.ravel(), bins=256)
ax2.set_title('Histogram')
ax2.axvline(thresh, color='r')
ax3.imshow(binary, cmap=plt.cm.gray)
ax3.set_title('Thresholded')
ax3.axis('off')
plt.show()
"""
Explanation: Clustering
In clustering we attempt to group observations in such a way that observations assigned to the same cluster are more similar to each other than to observations in other clusters.
Although labels may be known, clustering is usually performed on unlabeled data as a step in exploratory data analysis.
Previously we looked at the Otsu thresholding method as a basic example of clustering. This is very closely related to k-means clustering. A variety of other methods are available with different characteristics.
The best method to use will vary depending on the particular problem.
End of explanation
"""
from sklearn import cluster, datasets
dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0,
centers=3, cluster_std=0.1)
fig, ax = plt.subplots(1,1)
ax.scatter(dataset[:,0], dataset[:,1], c=true_labels)
plt.show()
# Clustering algorithm can be used as a class
means = cluster.KMeans(n_clusters=2)
prediction = means.fit_predict(dataset)
fig, ax = plt.subplots(1,1)
ax.scatter(dataset[:,0], dataset[:,1], c=prediction)
plt.show()
"""
Explanation: Different clustering algorithms
Cluster comparison
The following algorithms are provided by scikit-learn
K-means
Affinity propagation
Mean Shift
Spectral clustering
Ward
Agglomerative Clustering
DBSCAN
Birch
K-means clustering divides samples between clusters by attempting to minimize the within-cluster sum of squares. It is an iterative algorithm that repeatedly updates the positions of the centroids (cluster centers) and re-assigns samples to the nearest centroid until the assignments no longer change. The solution found is only locally optimal and depends on the starting positions of the centroids, so k-means is often run multiple times with random initializations and the best solution chosen.
Affinity Propagation operates by passing messages between the samples to update a record of the exemplar samples, i.e. the samples that best represent the others. The algorithm functions on an affinity matrix that can be either user-supplied or computed by the algorithm. Two matrices are maintained. One matrix records how well each sample represents other samples in the dataset; when the algorithm finishes, the highest-scoring samples are chosen to represent the clusters. The second matrix records which other samples best represent each sample, so that the entire dataset can be assigned to clusters when the algorithm terminates.
Mean Shift iteratively updates candidate centroids to represent the clusters. The algorithm attempts to find areas of higher density.
Spectral clustering operates on an affinity matrix that can be user supplied or computed by the model. The algorithm functions by minimizing the value of the links cut in a graph created from the affinity matrix. By focusing on the relationships between samples this algorithm performs well for non-convex clusters.
Ward is a type of agglomerative clustering using minimization of the within-cluster sum of squares to join clusters together until the specified number of clusters remain.
Agglomerative clustering starts with each sample in its own cluster and then progressively joins clusters together, minimizing some linkage criterion. In addition to minimizing the variance, as seen with Ward, other options are: 1) minimizing the average distance between samples in pairs of clusters (average linkage), and 2) minimizing the maximum distance between observations in pairs of clusters (complete linkage).
DBSCAN is another algorithm that attempts to find regions of high density and then expands the clusters from there.
Birch is a tree-based clustering algorithm that incrementally assigns samples to the nodes of a tree.
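The k-means iteration described above is compact enough to sketch directly. This is a minimal NumPy version for illustration only, assuming well-separated data; scikit-learn's implementation adds smarter initialization (k-means++) and multiple restarts:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct samples at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid for every sample.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned samples.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
labels, centroids = kmeans(X, k=2)
```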
End of explanation
"""
from sklearn import cluster, datasets, metrics
dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0,
centers=[[x,y] for x in range(3) for y in range(3)], cluster_std=0.1)
fig, ax = plt.subplots(1,1)
ax.scatter(dataset[:,0], dataset[:,1], c=true_labels)
plt.show()
inertia = []
predictions = []
for i in range(1,20):
means = cluster.KMeans(n_clusters=i)
prediction = means.fit_predict(dataset)
inertia.append(means.inertia_)
predictions.append(prediction)
plt.plot(range(1,20), inertia)
plt.show()
plt.scatter(dataset[:,0], dataset[:,1], c=predictions[8])
plt.show()
"""
Explanation: Model evaluation
Several approaches have been developed for evaluating clustering models, but most are limited by requiring the true clusters to be known. In the typical use case clustering is exploratory, so the true clusters are not known.
Ultimately, a model is just a tool to better understand the structure of our data. If we are able to gain insight from using a clustering algorithm then it has served its purpose.
The metrics available are the adjusted Rand index, mutual-information-based scores, homogeneity, completeness, V-measure, and the silhouette coefficient. Of these, only the silhouette coefficient does not require the true clusters to be known.
Although the silhouette coefficient can be useful, it takes a very similar approach to k-means, favoring convex clusters over more complex, but equally valid, clusters.
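Because the silhouette coefficient needs no ground truth, it is often used to compare candidate cluster counts. A sketch with scikit-learn on synthetic blobs (the centers, sample counts, and range of k here are illustrative choices, not from the notebook above):

```python
from sklearn import cluster, datasets, metrics

# Four tight, well-separated blobs.
X, _ = datasets.make_blobs(
    n_samples=300, centers=[[0, 0], [5, 0], [0, 5], [5, 5]],
    cluster_std=0.3, random_state=0,
)

# Score candidate cluster counts; the silhouette lies in [-1, 1], higher is better.
scores = {}
for k in range(2, 8):
    labels = cluster.KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = metrics.silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
```

On this idealized data the silhouette peaks at the true cluster count of four.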
How to determine number of clusters
One important use for the model evaluation algorithms is in choosing the number of clusters. The clustering algorithms take as parameters either the number of clusters to partition a dataset into or other scaling factors that ultimately determine the number of clusters. It is left to the user to determine the correct value for these parameters.
As the number of clusters increases the fit to the data will always improve until each point is in a cluster by itself. As such, classical optimization algorithms searching for a minimum or maximum score will not work. Often, the goal is to find an inflection point.
If the cluster parameter is too low adding an additional cluster will have a large impact on the evaluation score. The gradient will be high at numbers of clusters less than the true value. If the cluster parameter is too high adding an additional cluster will have a small impact on the evaluation score. The gradient will be low at numbers of clusters higher than the true value.
At the correct number of clusters the gradient should change suddenly; this inflection point (often called the "elbow") indicates the number of clusters to use.
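The inflection-point idea can be automated by locating the largest second difference of the score curve. A sketch on an idealized, hand-written inertia curve (real curves are noisier, so treat this as illustrative only):

```python
import numpy as np

def elbow(scores):
    """Index into `scores` of the sharpest bend in a monotonically
    decreasing curve, found via the largest second difference."""
    d2 = np.diff(scores, n=2)
    # d2[j] measures the curvature centered at scores[j + 1].
    return int(np.argmax(d2)) + 1

# Idealized inertia for k = 1..8 clusters: steep, roughly linear drops
# up to the true cluster count, then nearly flat.
inertia = [100.0, 60.0, 30.0, 8.0, 7.0, 6.4, 6.0, 5.7]
n_clusters = elbow(inertia) + 1  # +1 because inertia[0] is for 1 cluster
```

Here the bend sits at index 3 of the curve, i.e. four clusters.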
End of explanation
"""
# Exercise 1
dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0,
centers=[[x,y] for x in range(0,20,2) for y in range(2)], cluster_std=0.2)
inertia = []
predictions = []
for i in range(1,25):
means = cluster.KMeans(n_clusters=i)
prediction = means.fit_predict(dataset)
inertia.append(means.inertia_)
predictions.append(prediction)
fig = plt.figure(figsize=(10,10))
ax = plt.subplot(2,1,1)
ax.scatter(dataset[:,0], dataset[:,1], c=true_labels)
ax.scatter(means.cluster_centers_[:20,0], means.cluster_centers_[:20,1], s=160, alpha=0.5)
ax.set_title('Observations and cluster centers')
ax1 = plt.subplot(2,1,2)
ax1.plot(range(1,25), inertia)
ax1.set_title('Inertia for varying cluster sizes')
plt.show()
means.cluster_centers_[:,0]
"""
Explanation: This is an ideal case with clusters that can be clearly distinguished - convex clusters with similar distributions and large gaps between the clusters. Most real world datasets will not be as easy to work with and determining the correct number of clusters will be more challenging. As an example, compare the performance between challenges 1 and 2 (unknown number of clusters) and challenge 3 (known number of clusters) in table 2 of this report on automated FACS.
There is a pull request on the scikit-learn repository to add several automated algorithms to identify the correct number of clusters but it has not been integrated.
Exercises
Using the grid of blobs sample dataset investigate how different cluster shapes and distances alter the plot of inertia with number of clusters. Is the plot still interpretable if the distances are 1/2, 1/4, 1/8, etc? If the variance in the first and second dimensions is unequal is the plot still interpretable?
Does using a different algorithm improve performance?
End of explanation
"""
|
mbeyeler/opencv-machine-learning | notebooks/04.05-Representing-Images.ipynb | mit | import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img_bgr = cv2.imread('data/lena.jpg')
"""
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Representing Text Features | Contents | Using Decision Trees to Make a Medical Diagnosis >
Representing images
One of the most common and important data types for computer vision are, of course,
images. The most straightforward way to represent images is probably by using the
grayscale value of each pixel in the image. Usually, grayscale values are not very indicative
of the data they describe. For example, if we saw a single pixel with grayscale value 128,
could we tell what object this pixel belonged to? Probably not. Therefore, grayscale values
are not very effective image features.
Using color spaces
Alternatively, we might find that colors contain some information that raw grayscale values
cannot capture. Most often, images come in the conventional RGB color space, where every
pixel in the image gets an intensity value for its apparent redness (R), greenness (G), and
blueness (B). However, OpenCV offers a whole range of other color spaces, such as Hue
Saturation Value (HSV), Hue Saturation Lightness (HSL), and the Lab color space. Let's
have a quick look at them.
Encoding images in RGB space
I am sure that you are already familiar with the RGB color space, which uses additive
mixing of different shades of red, green, and blue to produce different composite colors.
The RGB color space is useful in everyday life, because it covers a large part of the color
space that the human eye can see. This is why color television sets or color computer
monitors only need to care about producing mixtures of red, green, and blue light.
We can load a sample image in BGR format using cv2.imread:
End of explanation
"""
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
"""
Explanation: If you have ever tried to display a BGR image using Matplotlib or similar libraries, you
might have noticed a weird blue tint to the image. This is due to the fact that Matplotlib
expects an RGB image. To display the image correctly, we therefore permute the color channels using
cv2.cvtColor:
End of explanation
"""
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.imshow(img_bgr)
plt.subplot(122)
plt.imshow(img_rgb);
"""
Explanation: Then we can use Matplotlib to plot the images (BGR on the left, RGB on the right):
End of explanation
"""
img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
"""
Explanation: Encoding images in HSV and HLS space
However, ever since the RGB color space was created, people have realized that it is
actually quite a poor representation of human vision. Therefore, researchers have
developed two alternative representations. One of them is called HSV, which stands for
hue, saturation, and value, and the other one is called HLS, which stands for hue,
lightness, and saturation. You might have seen these color spaces in color pickers and
common image editing software. In these color spaces, the hue of the color is captured by a
single hue channel, the colorfulness is captured by a saturation channel, and the lightness or
brightness is captured by a lightness or value channel.
In OpenCV, an RGB image can easily be converted to HSV color space using
cv2.cvtColor:
End of explanation
"""
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
"""
Explanation: The same is true for the HLS color space. In fact, OpenCV provides a whole range of
additional color spaces, which are available via cv2.cvtColor. All we need to do is to
replace the color flag with one of the following:
- HLS (hue, lightness, saturation) using cv2.COLOR_BGR2HLS
- LAB (lightness, green-red, and blue-yellow) using cv2.COLOR_BGR2LAB
- YUV (overall luminance, blue-luminance, red-luminance) using cv2.COLOR_BGR2YUV
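The per-pixel RGB-to-HSV mapping itself is simple, and Python's standard-library colorsys module implements it for channel values scaled to [0, 1] (note that OpenCV's 8-bit HSV instead uses hue in [0, 179] and saturation/value in [0, 255]). A quick sketch:

```python
import colorsys

# Pure red: hue 0, fully saturated, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# Pure green sits one third of the way around the hue circle.
h_green, _, _ = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)

# The mapping is invertible.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
```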
Detecting corners in images
One of the most straightforward features to find in an image are corners.
OpenCV
provides at least two different algorithms to find corners in an image:
- Harris corner detection: Knowing that edges are areas with high-intensity changes in all directions, Harris and Stephens came up with a fast way of finding such areas. This algorithm is implemented as cv2.cornerHarris in OpenCV.
- Shi-Tomasi corner detection: Shi and Tomasi have their own idea of what are good features to track, and they usually do better than Harris corner detection by finding the N strongest corners. This algorithm is implemented as cv2.goodFeaturesToTrack in OpenCV.
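The Harris response itself can be written down directly: from the image gradients, build the structure tensor M summed over a window around each pixel and score R = det(M) - k * trace(M)^2. Below is a pure-NumPy sketch with an unweighted box window; it is cruder than cv2.cornerHarris (which uses a Sobel aperture and proper weighting) and is meant only to illustrate the idea:

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris corner response from a simple box-windowed structure tensor.
    `win` is the half-width of the summation window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    R = np.zeros_like(img)
    for y in range(win, img.shape[0] - win):
        for x in range(win, img.shape[1] - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            sxx, syy, sxy = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            R[y, x] = det - k * trace * trace
    return R

# A white square on a black background: strong positive response at its
# corners, negative response along edges, zero in flat regions.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```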
Harris corner detection works only on grayscale images, so we first want to convert our
BGR image to grayscale:
End of explanation
"""
corners = cv2.cornerHarris(img_gray, 2, 3, 0.04)
"""
Explanation: We specify the pixel neighborhood size considered for corner detection (blockSize), an aperture parameter for the edge detection (ksize), and the so-called Harris detector-free parameter (k):
End of explanation
"""
plt.figure(figsize=(12, 6))
plt.imshow(corners, cmap='gray');
"""
Explanation: Let's have a look at the result:
End of explanation
"""
sift = cv2.xfeatures2d.SIFT_create()
"""
Explanation: Using the Scale-Invariant Feature Transform (SIFT)
However, corner detection is not sufficient when the scale of an image changes. To this end, David Lowe
came up with a method to describe interesting points in an image independently of their orientation and size, hence
the name scale-invariant feature transform (SIFT). In OpenCV 3, this function is part of the
xfeatures2d module:
End of explanation
"""
kp = sift.detect(img_bgr)
"""
Explanation: The algorithm works in two steps:
detect: This step identifies interesting points in an image (also known as keypoints)
compute: This step computes the actual feature values for every keypoint.
Keypoints can be detected with a single line of code:
End of explanation
"""
import numpy as np
img_kp = np.zeros_like(img_bgr)
img_kp = cv2.drawKeypoints(img_rgb, kp, img_kp, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.figure(figsize=(12, 6))
plt.imshow(img_kp)
"""
Explanation: We can use the drawKeypoints function to visualize identified keypoints.
We can also pass
an optional flag cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS to surround every keypoint
with a circle whose size denotes its importance, and a radial line that indicates the orientation of the
keypoint:
End of explanation
"""
kp, des = sift.compute(img_bgr, kp)
"""
Explanation: Feature descriptors can be computed with compute:
End of explanation
"""
des.shape
"""
Explanation: Typically you get 128 feature values for every keypoint found:
End of explanation
"""
kp2, des2 = sift.detectAndCompute(img_bgr, None)
"""
Explanation: You can do these two steps in one go, too:
End of explanation
"""
np.allclose(des, des2)
"""
Explanation: And the result should be the same:
End of explanation
"""
surf = cv2.xfeatures2d.SURF_create()
kp = surf.detect(img_bgr)
img_kp = np.zeros_like(img_bgr)
img_kp = cv2.drawKeypoints(img_rgb, kp, img_kp,
flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.figure(figsize=(12, 6))
plt.imshow(img_kp)
kp, des = surf.compute(img_bgr, kp)
"""
Explanation: Using Speeded Up Robust Features (SURF)
SIFT has proved to be really good, but it is not fast enough for most applications. This is where speeded up
robust features (SURF) come in, which replaced the computationally expensive computations of SIFT with a
box filter. In OpenCV, SURF works in the exact same way as SIFT:
End of explanation
"""
des.shape
"""
Explanation: SURF finds more keypoints, and usually returns 64 feature values per keypoint:
End of explanation
"""
|
Upward-Spiral-Science/spect-team | Code/Assignment-3/Exploratory_Kmeans_PCA.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
print '\nPlot the distributions of unknown columns (BSC, GSC, LDS):'
print '\nBSC_1 to BSC_101'
bsc = ['BSC_' + str(i) for i in xrange(1, 102)]
plot = df_unknowns[bsc].plot(kind='hist', alpha=0.5, legend=None)
print '\nPlot several random BSC samples:'
fig, axes = plt.subplots(nrows=2, ncols=2)
df_unknowns['BSC_1'].plot(ax=axes[0,0], kind='hist', alpha=0.5)
df_unknowns['BSC_10'].plot(ax=axes[0,1], kind='hist', alpha=0.5)
df_unknowns['BSC_20'].plot(ax=axes[1,0], kind='hist', alpha=0.5)
df_unknowns['BSC_30'].plot(ax=axes[1,1], kind='hist', alpha=0.5)
print '\nGSC_1 to GSC_119'
gsc = ['GSC_' + str(i) for i in xrange(1, 120)]
plot = df_unknowns[gsc].plot(kind='hist', alpha=0.5, legend=None)
print '\nLDS_1 to LDS_79'
lds = ['LDS_' + str(i) for i in xrange(1, 80)]
plot = df_unknowns[lds].plot(kind='hist', alpha=0.5, legend=None)
def row_summary(df):
# Extract column headers
featNames = list(df.columns.get_values())
# Get row summary (whether number of NaNs in each row)
row_summary = df.isnull().sum(axis=1)
# Get incomplete row indices
nan_row_inds = list() # incomplete row indices
for i, x in enumerate(row_summary):
if x > 0: nan_row_inds.append(i)
return nan_row_inds
def clean_records(df):
nan_row_inds = row_summary(df)
clean_df = df.drop(df.index[nan_row_inds], inplace=False)
# Double check for NaNs
print 'Is there any NaNs in the clean records?', clean_df.isnull().values.any()
return clean_df
df = pd.DataFrame.from_csv('Data_Adults_1.csv')
clean_df = clean_records(df)
# Keep only numerical values
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
X = clean_df.select_dtypes(include=numerics)
cols2drop = ['Patient_ID', 'GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id', \
'Baseline_Reading_id', 'Concentration_Reading_id']
# Drop certain columns
X = X.drop(cols2drop, axis=1, inplace=False)
print 'm =', X.shape[1]
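As an aside, the row cleaning above can also be expressed with pandas built-ins; a sketch on a tiny made-up frame (not the SPECT data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [4.0, 5.0, np.nan]})

# Indices of incomplete rows, equivalent to row_summary():
nan_rows = df.index[df.isnull().any(axis=1)]

# Complete rows only, equivalent to clean_records():
clean = df.dropna()
```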
"""
Explanation: Distribution of unknown columns
End of explanation
"""
from sklearn.cluster import KMeans
k = 4
data = X.get_values().T
kmeans = KMeans(n_clusters=k)
kmeans.fit(data)
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
for i in range(k):
# Extract observations within each cluster
ds = data[np.where(labels==i)]
# Plot the observations with symbol o
plt.plot(ds[:,0], ds[:,1], 'o')
    # Plot the centroids with symbol x
lines = plt.plot(centroids[i,0], centroids[i,1], 'x')
plt.setp(lines, ms=8.0)
plt.setp(lines, mew=2.0)
"""
Explanation: K-Means Clustering
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=15)
pca.fit(X.get_values())
print '\nExplained Variance Ratios:'
print pca.explained_variance_ratio_
plt.plot(pca.explained_variance_ratio_)
plt.ylabel('Variance Explained')
plt.xlabel('Number of Principal Components')
"""
Explanation: Inferential Question:
Should we divide the features into groups and analyze their distributions separately? More specifically, can we cluster the features?
Based on the plotted figure above, clustering all the features is not a good idea at this point, since there seem to be no clear cluster boundaries. However, we are able to identify the existence of outliers using this method (clustering).
Principal Component Analysis
End of explanation
"""
# Looking at what columns are favored by the first two principal dimensions
print '\nColumns favored by the first principal component:'
pc = pd.DataFrame(pca.components_, columns=X.iloc[:, :].columns).T
pc.sort_values(0, ascending=False)[:6]
print '\nColumns favored by the second principal component:'
pc.sort_values(1, ascending=False)[:6]
"""
Explanation: Can we find k most important features to construct our training data, where k is significantly smaller than m?
Given the explained variance ratios printed and plotted above, if we can settle for capturing 90% of variance, then we can find a k such that k = 10 << m = 736.
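Choosing k programmatically amounts to finding the smallest k whose cumulative explained-variance ratio reaches the target. A sketch on a hypothetical ratio vector (in practice it would come from pca.explained_variance_ratio_):

```python
import numpy as np

def components_for_variance(explained_ratio, target=0.90):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `target`."""
    cum = np.cumsum(explained_ratio)
    return int(np.searchsorted(cum, target)) + 1

# Hypothetical decreasing ratios summing to 1.
ratios = np.array([0.4, 0.2, 0.15, 0.1, 0.06, 0.04, 0.03, 0.02])
k = components_for_variance(ratios, target=0.90)
```

Here the first five components are needed to capture 90% of the variance.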
End of explanation
"""
|
aschaffn/phys202-2015-work | assignments/assignment07/AlgorithmsEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Algorithms Exercise 2
Imports
End of explanation
"""
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    a = np.array(a)
    s = np.sign(np.diff(a))
    d = np.diff(s)
    # An interior maximum is where the slope changes from +1 to -1,
    # i.e. where d == -2 at index i; the peak itself is at index i + 1.
    ind = [i + 1 for i in range(len(d)) if d[i] == -2]
    # Handle maxima at the endpoints of the array.
    if s[0] == -1:
        ind.insert(0, 0)
    if s[-1] == 1:
        ind.append(len(a) - 1)
    return np.array(ind)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
find_peaks([2,2,2,1,2,2,2])
"""
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
"""
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
pi_int = np.array(list(pi_digits_str), dtype="int")
pks = find_peaks(pi_int)
pks_diff = np.diff(pks)
plt.hist(pks_diff, bins = range(0,max(pks_diff)+1))
min(pks_diff), max(pks_diff)
pks
assert True # use this for grading the pi digits histogram
"""
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consecutive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
"""
|
qutip/qutip-notebooks | examples/heom/heom-1d-spin-bath-model-ohmic-fitting.ipynb | lgpl-3.0 | %pylab inline
import contextlib
import time
import numpy as np
from scipy.optimize import curve_fit
from qutip import *
from qutip.nonmarkov.heom import HEOMSolver, BosonicBath
# Import mpmath functions for evaluation of gamma and zeta functions in the expression for the correlation:
from mpmath import mp
mp.dps = 15
mp.pretty = True
def cot(x):
""" Vectorized cotangent of x. """
return 1. / np.tan(x)
def coth(x):
""" Vectorized hyperbolic cotangent of x. """
return 1. / np.tanh(x)
def plot_result_expectations(plots, axes=None):
""" Plot the expectation values of operators as functions of time.
Each plot in plots consists of (solver_result, measurement_operation, color, label).
"""
if axes is None:
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
fig_created = True
else:
fig = None
fig_created = False
# add kw arguments to each plot if missing
plots = [p if len(p) == 5 else p + ({},) for p in plots]
for result, m_op, color, label, kw in plots:
exp = np.real(expect(result.states, m_op))
kw.setdefault("linewidth", 2)
if color == 'rand':
            axes.plot(result.times, exp, c=np.random.rand(3,), label=label, **kw)
else:
axes.plot(result.times, exp, color, label=label, **kw)
if fig_created:
axes.legend(loc=0, fontsize=12)
axes.set_xlabel("t", fontsize=28)
return fig
@contextlib.contextmanager
def timer(label):
""" Simple utility for timing functions:
with timer("name"):
... code to time ...
"""
start = time.time()
yield
end = time.time()
print(f"{label}: {end - start}")
# Defining the system Hamiltonian
eps = .0 # Energy of the 2-level system.
Del = .2 # Tunnelling term
Hsys = 0.5 * eps * sigmaz() + 0.5 * Del * sigmax()
# Initial state of the system.
rho0 = basis(2,0) * basis(2,0).dag()
"""
Explanation: Example 1d: Spin-Bath model, fitting of spectrum and correlation functions
Introduction
The HEOM method solves the dynamics and steady state of a system and its environment, the latter of which is encoded in a set of auxiliary density matrices.
In this example we show the evolution of a single two-level system in contact with a single bosonic environment.
The properties of the system are encoded in Hamiltonian, and a coupling operator which describes how it is coupled to the environment.
The bosonic environment is implicitly assumed to obey a particular Hamiltonian (see paper), the parameters of which are encoded in the spectral density, and subsequently the free-bath correlation functions.
In the example below we show how to model an Ohmic environment with an exponential cut-off in two ways:
First we fit the spectral density with a set of underdamped brownian oscillator functions.
Second, we evaluate the correlation functions, and fit those with a certain choice of exponential functions.
In each case we will use the fit parameters to determine the correlation function expansion co-efficients needed to construct a description of the bath (i.e. a BosonicBath object) to supply to the HEOMSolver so that we can solve for the system dynamics.
End of explanation
"""
def ohmic_correlation(t, alpha, wc, beta, s=1):
""" The Ohmic bath correlation function as a function of t (and the bath parameters). """
corr = (1/pi) * alpha * wc**(1 - s) * beta**(-(s + 1)) * mp.gamma(s + 1)
z1_u = (1 + beta * wc - 1.0j * wc * t) / (beta * wc)
z2_u = (1 + 1.0j * wc * t) / (beta * wc)
# Note: the arguments to zeta should be in as high precision as possible.
# See http://mpmath.org/doc/current/basics.html#providing-correct-input
return np.array([
complex(corr * (mp.zeta(s + 1, u1) + mp.zeta(s + 1, u2)))
for u1, u2 in zip(z1_u, z2_u)
], dtype=np.complex128)
def ohmic_spectral_density(w, alpha, wc):
""" The Ohmic bath spectral density as a function of w (and the bath parameters). """
return w * alpha * e**(-w / wc)
def ohmic_power_spectrum(w, alpha, wc, beta):
""" The Ohmic bath power spectrum as a function of w (and the bath parameters). """
return w * alpha * e**(-abs(w) / wc) * ((1 / (e**(w * beta) - 1)) + 1) * 2
"""
Explanation: Analytical expressions for the Ohmic bath correlation function and spectral density
Before we begin fitting, let us examine the analytic expressions for the correlation and spectral density functions and write Python equivalents.
The correlation function is given by (see, e.g., http://www1.itp.tu-berlin.de/brandes/public_html/publications/notes.pdf for a derivation, equation 7.59, but with a factor of $\pi$ moved into the definition of the correlation function):
\begin{align}
C(t) =& \: \frac{1}{\pi}\alpha \omega_{c}^{1 - s} \beta^{- (s + 1)} \: \times \
& \: \Gamma(s + 1) \left[ \zeta \left(s + 1, \frac{1 + \beta \omega_c - i \omega_c t}{\beta \omega_c}\right) + \zeta \left(s + 1, \frac{1 + i \omega_c t}{\beta \omega_c}\right) \right]
\end{align}
where $\Gamma$ is the Gamma function and
\begin{equation}
\zeta(z, u) \equiv \sum_{n=0}^{\infty} \frac{1}{(n + u)^z}, \; u \neq 0, -1, -2, \ldots
\end{equation}
is the generalized Zeta function. The Ohmic case is given by $s = 1$.
The corresponding spectral density for the Ohmic case is:
\begin{equation}
J(\omega) = \omega \alpha e^{- \frac{\omega}{\omega_c}}
\end{equation}
End of explanation
"""
# Bath parameters:
Q = sigmaz() # coupling operator
alpha = 3.25
T = 0.5
wc = 1
beta = 1 / T
s = 1
# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresonding to groundstate
P11p = basis(2,0) * basis(2,0).dag()
P22p = basis(2,1) * basis(2,1).dag()
# 1,2 element of density matrix - corresonding to coherence
P12p = basis(2,0) * basis(2,1).dag()
"""
Explanation: Finally, let's set the bath parameters we will work with and write down some measurement operators:
End of explanation
"""
# Helper functions for packing the paramters a, b and c into a single numpy
# array as required by SciPy's curve_fit:
def pack(a, b, c):
""" Pack parameter lists for fitting. """
return np.concatenate((a, b, c))
def unpack(params):
""" Unpack parameter lists for fitting. """
N = len(params) // 3
a = params[:N]
b = params[N:2 * N]
c = params[2 * N:]
return a, b, c
# The approximate spectral density and a helper for fitting the approximate spectral density
# to values calculated from the analytical formula:
def spectral_density_approx(w, a, b, c):
""" Calculate the fitted value of the function for the given parameters. """
tot = 0
for i in range(len(a)):
tot += 2 * a[i] * b[i] * w / (((w + c[i])**2 + b[i]**2) * ((w - c[i])**2 + b[i]**2))
return tot
def fit_spectral_density(J, w, alpha, wc, N):
""" Fit the spectral density with N underdamped oscillators. """
sigma = [0.0001] * len(w)
J_max = abs(max(J, key=abs))
guesses = pack([J_max] * N, [wc] * N, [wc] * N)
lower_bounds = pack([-100 * J_max] * N, [0.1 * wc] * N, [0.1 * wc] * N)
upper_bounds = pack([100 * J_max] * N, [100 * wc] * N, [100 * wc] * N)
params, _ = curve_fit(
lambda x, *params: spectral_density_approx(w, *unpack(params)),
w, J,
p0=guesses,
bounds=(lower_bounds, upper_bounds),
sigma=sigma,
maxfev=1000000000,
)
return unpack(params)
"""
Explanation: Building the HEOM bath by fitting the spectral density
We begin by fitting the spectral density, using a series of $k$ underdamped harmonic oscillators case with the Meier-Tannor form (J. Chem. Phys. 111, 3365 (1999); https://doi.org/10.1063/1.479669):
\begin{equation}
J_{\mathrm approx}(\omega; a, b, c) = \sum_{i=0}^{k-1} \frac{2 a_i b_i w}{((w + c_i)^2 + b_i^2) ((w - c_i)^2 + b_i^2)}
\end{equation}
where $a, b$ and $c$ are the fit parameters and each is a vector of length $k$.
End of explanation
"""
w = np.linspace(0, 25, 20000)
J = ohmic_spectral_density(w, alpha=alpha, wc=wc)
params_k = [
fit_spectral_density(J, w, alpha=alpha, wc=wc, N=i+1)
for i in range(4)
]
"""
Explanation: With the spectral density approximation $J_{\mathrm approx}(w; a, b, c)$ implemented above, we can now perform the fit and examine the results.
End of explanation
"""
for k, params in enumerate(params_k):
    lam, gamma, w0 = params
    y = spectral_density_approx(w, lam, gamma, w0)
    print(f"Parameters [k={k + 1}]: lam={lam}; gamma={gamma}; w0={w0}")
plt.plot(w, J, w, y)
plt.show()
"""
Explanation: Let's plot the fit for each $k$ and examine how it improves with an increasing number of terms:
End of explanation
"""
# The parameters for the fit with four terms:
lam, gamma, w0 = params_k[-1]
print(f"Parameters [k={len(params_k)}]: lam={lam}; gamma={gamma}; w0={w0}")
# Plot the components of the fit separately:
def spectral_density_ith_component(w, i, lam, gamma, w0):
""" Return the i'th term of the approximation for the spectral density. """
return 2 * lam[i] * gamma[i] * w / (((w + w0[i])**2 + gamma[i]**2) * ((w - w0[i])**2 + gamma[i]**2))
def plot_spectral_density_fit_components(J, w, lam, gamma, w0, save=True):
""" Plot the individual components of a fit to the spectral density. """
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(w, J, 'r--', linewidth=2, label="original")
for i in range(len(lam)):
axes.plot(
w, spectral_density_ith_component(w, i, lam, gamma, w0),
linewidth=2,
label=f"fit component {i}",
)
axes.set_xlabel(r'$w$', fontsize=28)
axes.set_ylabel(r'J', fontsize=28)
axes.legend()
if save:
fig.savefig('noisepower.eps')
plot_spectral_density_fit_components(J, w, lam, gamma, w0, save=False)
"""
Explanation: The fit with four terms looks good. Let's take a closer look at it by plotting the contribution of each term of the fit:
End of explanation
"""
def plot_power_spectrum(alpha, wc, beta, lam, gamma, w0, save=True):
""" Plot the power spectrum of a fit against the actual power spectrum. """
w = np.linspace(-10, 10, 50000)
s_orig = ohmic_power_spectrum(w, alpha=alpha, wc=wc, beta=beta)
    s_fit = spectral_density_approx(w, lam, gamma, w0) * ((1 / (e**(w * beta) - 1)) + 1) * 2
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(w, s_orig, 'r', linewidth=2, label="original")
axes.plot(w, s_fit, 'b', linewidth=2, label="fit")
axes.set_xlabel(r'$\omega$', fontsize=28)
axes.set_ylabel(r'$S(\omega)$', fontsize=28)
axes.legend()
if save:
fig.savefig('powerspectrum.eps')
plot_power_spectrum(alpha, wc, beta, lam, gamma, w0, save=False)
"""
Explanation: And let's also compare the power spectrum of the fit and the analytical spectral density:
End of explanation
"""
def matsubara_coefficients_from_spectral_fit(lam, gamma, w0, beta, Q, Nk):
""" Calculate the Matsubara co-efficients for a fit to the spectral density. """
terminator = 0. * spre(Q) # initial 0 value with the correct dimensions
terminator_max_k = 1000 # the number of matsubara expansion terms to include in the terminator
ckAR = []
vkAR = []
ckAI = []
vkAI = []
for lamt, Gamma, Om in zip(lam, gamma, w0):
ckAR.extend([
(lamt / (4 * Om)) * coth(beta * (Om + 1.0j * Gamma) / 2),
(lamt / (4 * Om)) * coth(beta * (Om - 1.0j * Gamma) / 2),
])
for k in range(1, Nk + 1):
ek = 2 * np.pi * k / beta
ckAR.append(
(-2 * lamt * 2 * Gamma / beta) * ek /
(((Om + 1.0j * Gamma)**2 + ek**2) * ((Om - 1.0j * Gamma)**2 + ek**2))
)
terminator_factor = 0
for k in range(Nk + 1, terminator_max_k):
ek = 2 * pi * k / beta
ck = (
(-2 * lamt * 2 * Gamma / beta) * ek /
(((Om + 1.0j * Gamma)**2 + ek**2) * ((Om - 1.0j * Gamma)**2 + ek**2))
)
terminator_factor += ck / ek
terminator += terminator_factor * (
2 * spre(Q) * spost(Q.dag()) - spre(Q.dag() * Q) - spost(Q.dag() * Q)
)
vkAR.extend([
-1.0j * Om + Gamma,
1.0j * Om + Gamma,
])
vkAR.extend([
2 * np.pi * k / beta + 0.j
for k in range(1, Nk + 1)
])
ckAI.extend([
-0.25 * lamt * 1.0j / Om,
0.25 * lamt * 1.0j / Om,
])
vkAI.extend([
-(-1.0j * Om - Gamma),
-(1.0j * Om - Gamma),
])
return ckAR, vkAR, ckAI, vkAI, terminator
options = Options(nsteps=15000, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
# This problem is a little stiff, so we use the BDF method to solve the ODE ^^^
def generate_spectrum_results(p_k=-1, Nk=1, max_depth=5):
lam, gamma, w0 = params_k[p_k]
ckAR, vkAR, ckAI, vkAI, terminator = matsubara_coefficients_from_spectral_fit(lam, gamma, w0, beta=beta, Q=Q, Nk=Nk)
Ltot = liouvillian(Hsys) + terminator
tlist = np.linspace(0, 30 * pi / Del, 600)
with timer("RHS construction time"):
bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)
HEOM_spectral_fit = HEOMSolver(Ltot, bath, max_depth=max_depth, options=options)
with timer("ODE solver time"):
results_spectral_fit = (HEOM_spectral_fit.run(rho0, tlist))
return results_spectral_fit
"""
Explanation: Now that we have a good fit to the spectral density, we can calculate the Matsubara expansion terms for the BosonicBath from it. At the same time, we will calculate the Matsubara terminator for this expansion.
End of explanation
"""
#Generate results for different numbers of Lorentzians in the fit
Pklist = range(len(params_k))
results_spectral_fit_pk = []
for p_k in Pklist:
print(p_k+1)
results_spectral_fit_pk.append(generate_spectrum_results(p_k = p_k, Nk = 1, max_depth = 11, ))
#Generate results for different numbers of Matsubara terms per Lorentzian
#for the max number of Lorentzians
Nklist = range(2,3)
results_spectral_fit_nk = []
for Nk in Nklist:
print(Nk)
results_spectral_fit_nk.append(generate_spectrum_results(max_depth =10, Nk=Nk))
#Generate results for different depths
Nclist = range(3,12)
results_spectral_fit_nc = []
for Nc in Nclist:
print(Nc)
results_spectral_fit_nc.append(generate_spectrum_results(max_depth = Nc,Nk = 1))
plot_result_expectations([
(results_spectral_fit_pk[Kk], P11p,
'rand',
"P11 (spectral fit) $k_J$="+str(Kk+1))
for Kk in Pklist]);
plot_result_expectations([
(results_spectral_fit_nk[Kk], P11p,
'rand',
"P11 (spectral fit) K="+str(Kk+1))
for Kk in range(len(Nklist))]);
plot_result_expectations([
(results_spectral_fit_nc[Kk], P11p,
'rand',
"P11 (spectral fit) $N_C=$"+str(Kk+3))
for Kk in range(len(Nclist))]);
"""
Explanation: Below we generate results for different convergence parameters (the number of terms in the fit, the number of Matsubara terms, and the depth of the hierarchy). For the parameter choices here, we need a relatively large depth of around 11, which can be a little slow.
End of explanation
"""
def correlation_approx_matsubara(t, ck, vk):
""" Calculate the approximate real or imaginary part of the correlation function
from the matsubara expansion co-efficients.
"""
ck = np.array(ck)
vk = np.array(vk)
y = []
for i in t:
y.append(np.sum(ck * np.exp(-vk * i)))
return y
def set_paper_figure_rcparams():
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 20
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
set_paper_figure_rcparams()
def plot_matsubara_spectrum_fit_vs_actual(t, C, ckAR, vkAR, ckAI, vkAI, save = False):
fig = plt.figure(figsize=(12, 10))
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
# C_R(t)
yR = correlation_approx_matsubara(t, ckAR, vkAR)
axes1 = fig.add_subplot(grid[0, 0])
axes1.plot(t, np.real(C), "r", linewidth=3, label="Original")
axes1.plot(t, np.real(yR), "g", dashes=[3, 3], linewidth=2, label="Reconstructed")
axes1.legend(loc=0)
axes1.set_ylabel(r'$C_R(t)$', fontsize=28)
axes1.set_xlabel(r'$t\;\omega_c$', fontsize=28)
axes1.locator_params(axis='y', nbins=4)
axes1.locator_params(axis='x', nbins=4)
axes1.text(2., 1.5, "(a)", fontsize=28)
# C_I(t)
yI = correlation_approx_matsubara(t, ckAI, vkAI)
axes2 = fig.add_subplot(grid[0, 1])
axes2.plot(t, np.imag(C), "r", linewidth=3, label="Original")
axes2.plot(t, np.real(yI), "g", dashes=[3, 3], linewidth=2, label="Reconstructed")
axes2.legend(loc=0)
axes2.set_ylabel(r'$C_I(t)$', fontsize=28)
axes2.set_xlabel(r'$t\;\omega_c$', fontsize=28)
axes2.locator_params(axis='y', nbins=4)
axes2.locator_params(axis='x', nbins=4)
axes2.text(12.5, -0.2, "(b)", fontsize=28)
# J(w)
w = np.linspace(0, 25, 20000)
J_orig = ohmic_spectral_density(w, alpha=alpha, wc=wc)
J_fit = spectral_density_approx(w, lam, gamma, w0)
axes3 = fig.add_subplot(grid[1, 0])
axes3.plot(w, J_orig, "r", linewidth=3, label="$J(\omega)$ original")
axes3.plot(w, J_fit, "g", dashes=[3, 3], linewidth=2, label="$J(\omega)$ Fit $k_J = 4$")
axes3.legend(loc=0)
axes3.set_ylabel(r'$J(\omega)$', fontsize=28)
axes3.set_xlabel(r'$\omega/\omega_c$', fontsize=28)
axes3.locator_params(axis='y', nbins=4)
axes3.locator_params(axis='x', nbins=4)
axes3.text(3, 1.1, "(c)", fontsize=28)
# S(w)
# avoid the pole in the fit around zero:
w = np.concatenate([np.linspace(-10, -0.1, 5000), np.linspace(0.1, 10, 5000)])
s_orig = ohmic_power_spectrum(w, alpha=alpha, wc=wc, beta=beta)
s_fit = spectral_density_approx(w, lam, gamma, w0) * ((1 / (e**(w * beta) - 1)) + 1) *2
axes4 = fig.add_subplot(grid[1, 1])
axes4.plot(w, s_orig,"r",linewidth=3,label="Original")
axes4.plot(w, s_fit, "g", dashes=[3, 3], linewidth=2,label="Reconstructed")
axes4.legend()
axes4.set_ylabel(r'$S(\omega)$', fontsize=28)
axes4.set_xlabel(r'$\omega/\omega_c$', fontsize=28)
axes4.locator_params(axis='y', nbins=4)
axes4.locator_params(axis='x', nbins=4)
axes4.text(-8., 2.5, "(d)", fontsize=28)
if save:
fig.savefig("figFiJspec.pdf")
t = linspace(0,15,100)
C = ohmic_correlation(t, alpha=alpha, wc=wc, beta=beta)
ckAR, vkAR, ckAI, vkAI, terminator = matsubara_coefficients_from_spectral_fit(lam, gamma, w0, beta=beta, Q=Q, Nk=1)
plot_matsubara_spectrum_fit_vs_actual(t, C, ckAR, vkAR, ckAI, vkAI, save = True)
"""
Explanation: We now combine the fitting and correlation function data into one large plot.
End of explanation
"""
# The approximate correlation functions and a helper for fitting the approximate correlation
# function to values calculated from the analytical formula:
def correlation_approx_real(t, a, b, c):
""" Calculate the fitted value of the function for the given parameters. """
tot = 0
for i in range(len(a)):
tot += a[i] * np.exp(b[i] * t) * np.cos(c[i] * t)
return tot
def correlation_approx_imag(t, a, b, c):
""" Calculate the fitted value of the function for the given parameters. """
tot = 0
for i in range(len(a)):
tot += a[i] * np.exp(b[i] * t) * np.sin(c[i] * t)
return tot
def fit_correlation_real(C, t, wc, N):
""" Fit the spectral density with N underdamped oscillators. """
sigma = [0.1] * len(t)
C_max = abs(max(C, key=abs))
guesses = pack([C_max] * N, [-wc] * N, [wc] * N)
lower_bounds = pack([-20 * C_max] * N, [-np.inf] * N, [0.] * N)
upper_bounds = pack([20 * C_max] * N, [0.1] * N, [np.inf] * N)
params, _ = curve_fit(
lambda x, *params: correlation_approx_real(t, *unpack(params)),
t, C,
p0=guesses,
bounds=(lower_bounds, upper_bounds),
sigma=sigma,
maxfev=1000000000,
)
return unpack(params)
def fit_correlation_imag(C, t, wc, N):
""" Fit the spectral density with N underdamped oscillators. """
sigma = [0.0001] * len(t)
C_max = abs(max(C, key=abs))
guesses = pack([-C_max] * N, [-2] * N, [1] * N)
lower_bounds = pack([-5 * C_max] * N, [-100] * N, [0.] * N)
upper_bounds = pack([5 * C_max] * N, [0.01] * N, [100] * N)
params, _ = curve_fit(
lambda x, *params: correlation_approx_imag(t, *unpack(params)),
t, C,
p0=guesses,
bounds=(lower_bounds, upper_bounds),
sigma=sigma,
maxfev=1000000000,
)
return unpack(params)
t = linspace(0,15,50000)
C = ohmic_correlation(t, alpha=alpha, wc=wc, beta=beta)
params_k_real = [
fit_correlation_real(np.real(C), t, wc=wc, N=i+1)
for i in range(3)
]
params_k_imag = [
fit_correlation_imag(np.imag(C), t, wc=wc, N=i+1)
for i in range(3)
]
for k, params in enumerate(params_k_real):
lam, gamma, w0 = params
y = correlation_approx_real(t, lam, gamma, w0)
print(f"Parameters [k={k}]: lam={lam}; gamma={gamma}; w0={w0}")
plt.plot(t, np.real(C), t, y)
plt.show()
for k, params in enumerate(params_k_imag):
lam, gamma, w0 = params
y = correlation_approx_imag(t, lam, gamma, w0)
print(f"Parameters [k={k}]: lam={lam}; gamma={gamma}; w0={w0}")
plt.plot(t, np.imag(C), t, y)
plt.show()
"""
Explanation: Building the HEOM bath by fitting the correlation function
Having successfully fitted the spectral density and used the result to calculate the Matsubara expansion and terminator for the HEOM bosonic bath, we now proceed to the second case of fitting the correlation function itself instead.
Here we fit the real and imaginary parts separately, using the following ansatz:
$$C_R^F(t) = \sum_{i=1}^{k_R} c_R^ie^{-\gamma_R^i t}\cos(\omega_R^i t)$$
$$C_I^F(t) = \sum_{i=1}^{k_I} c_I^ie^{-\gamma_I^i t}\sin(\omega_I^i t)$$
End of explanation
"""
def matsubara_coefficients_from_corr_fit_real(lam, gamma, w0):
""" Return the matsubara coefficients for the imaginary part of the correlation function """
ckAR = [0.5 * x + 0j for x in lam] # the 0.5 is from the cosine
ckAR.extend(np.conjugate(ckAR)) # extend the list with the complex conjugates
vkAR = [-x - 1.0j * y for x, y in zip(gamma, w0)]
vkAR.extend([-x + 1.0j * y for x, y in zip(gamma, w0)])
return ckAR, vkAR
def matsubara_coefficients_from_corr_fit_imag(lam, gamma, w0):
""" Return the matsubara coefficients for the imaginary part of the correlation function. """
ckAI = [-0.5j * x for x in lam] # the 0.5 is from the sine
ckAI.extend(np.conjugate(ckAI)) # extend the list with the complex conjugates
vkAI = [-x - 1.0j * y for x, y in zip(gamma, w0)]
vkAI.extend([-x + 1.0j * y for x, y in zip(gamma, w0)])
return ckAI, vkAI
ckAR, vkAR = matsubara_coefficients_from_corr_fit_real(*params_k_real[-1])
ckAI, vkAI = matsubara_coefficients_from_corr_fit_imag(*params_k_imag[-1])
def corr_spectrum_approx(w, ckAR, vkAR, ckAI, vkAI):
""" Calculates the approximate power spectrum from ck and vk. """
S = np.zeros(len(w), dtype=np.complex128)
for ck, vk in zip(ckAR, vkAR):
S += 2 * ck * np.real(vk) / ((w - np.imag(vk))**2 + (np.real(vk)**2))
for ck, vk in zip(ckAI, vkAI):
S += 2 * 1.0j * ck * np.real(vk) / ((w - np.imag(vk))**2 + (np.real(vk)**2))
return S
def plot_matsubara_correlation_fit_vs_actual(t, C, ckAR, vkAR, ckAI, vkAI, save = False):
fig = plt.figure(figsize=(12, 10))
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
# C_R(t)
yR = correlation_approx_matsubara(t, ckAR, vkAR)
axes1 = fig.add_subplot(grid[0, 0])
axes1.plot(t, np.real(C), "r", linewidth=3, label="Original")
axes1.plot(t, np.real(yR), "g", dashes=[3, 3], linewidth=2, label="Fit $k_R = 3$")
axes1.legend(loc=0)
axes1.set_ylabel(r'$C_R(t)$', fontsize=28)
axes1.set_xlabel(r'$t\;\omega_c$', fontsize=28)
axes1.locator_params(axis='y', nbins=4)
axes1.locator_params(axis='x', nbins=4)
axes1.text(2., 1.4, "(a)", fontsize=28)
# C_I(t)
yI = correlation_approx_matsubara(t, ckAI, vkAI)
axes2 = fig.add_subplot(grid[0, 1])
axes2.plot(t, np.imag(C), "r", linewidth=3, label="Original")
axes2.plot(t, np.real(yI), "g", dashes=[3, 3], linewidth=2, label="Fit $K_I=3$")
axes2.legend(loc=0)
axes2.set_ylabel(r'$C_I(t)$', fontsize=28)
axes2.set_xlabel(r'$t\;\omega_c$', fontsize=28)
axes2.locator_params(axis='y', nbins=4)
axes2.locator_params(axis='x', nbins=4)
axes2.text(12.5, -0.2, "(b)", fontsize=28)
# J(w)
w = np.linspace(0, 25, 20000)
J_orig = ohmic_spectral_density(w, alpha=alpha, wc=wc)
J_fit = corr_spectrum_approx(w, ckAR, vkAR, ckAI, vkAI) / (((1 / (e**(w * beta) - 1)) + 1) * 2)
axes3 = fig.add_subplot(grid[1, 0])
axes3.plot(w, J_orig, "r", linewidth=3, label="Original")
axes3.plot(w, J_fit, "g", dashes=[3, 3], linewidth=2, label="Reconstructed")
axes3.legend(loc=0)
axes3.set_ylabel(r'$J(\omega)$', fontsize=28)
axes3.set_xlabel(r'$\omega/\omega_c$', fontsize=28)
axes3.locator_params(axis='y', nbins=4)
axes3.locator_params(axis='x', nbins=4)
axes3.text(3, 1.1, "(c)", fontsize=28)
#axes3.set_ylim(0,2)
# S(w)
# avoid the pole in the fit around zero:
w = np.concatenate([np.linspace(-10, -0.1, 5000), np.linspace(0.1, 10, 5000)])
s_orig = ohmic_power_spectrum(w, alpha=alpha, wc=wc, beta=beta)
s_fit = corr_spectrum_approx(w, ckAR, vkAR, ckAI, vkAI)
axes4 = fig.add_subplot(grid[1, 1])
axes4.plot(w, s_orig,"r",linewidth=3,label="Original")
axes4.plot(w, s_fit, "g", dashes=[3, 3], linewidth=2,label="Reconstructed")
axes4.legend()
axes4.set_ylabel(r'$S(\omega)$', fontsize=28)
axes4.set_xlabel(r'$\omega/\omega_c$', fontsize=28)
axes4.locator_params(axis='y', nbins=4)
axes4.locator_params(axis='x', nbins=4)
axes4.text(-8., 2.5, "(d)", fontsize=28)
if save:
fig.savefig("figFiCspec.pdf")
plot_matsubara_correlation_fit_vs_actual(t, C, ckAR, vkAR, ckAI, vkAI, save = True)
options = Options(nsteps=15000, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
# This problem is a little stiff, so we use the BDF method to solve the ODE ^^^
tlist = np.linspace(0, 30 * pi / Del, 600)
with timer("RHS construction time"):
bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)
HEOM_corr_fit = HEOMSolver(Hsys, bath, max_depth=5, options=options)
with timer("ODE solver time"):
results_corr_fit = HEOM_corr_fit.run(rho0, tlist)
options = Options(nsteps=15000, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
# This problem is a little stiff, so we use the BDF method to solve the ODE ^^^
def generate_corr_results(p_k=-1, max_depth=11):
ckAR, vkAR = matsubara_coefficients_from_corr_fit_real(*params_k_real[p_k])
ckAI, vkAI = matsubara_coefficients_from_corr_fit_imag(*params_k_imag[p_k])
tlist = np.linspace(0, 30 * pi / Del, 600)
with timer("RHS construction time"):
bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)
HEOM_corr_fit = HEOMSolver(Hsys, bath, max_depth=max_depth, options=options)
with timer("ODE solver time"):
results_corr_fit = (HEOM_corr_fit.run(rho0, tlist))
return results_corr_fit
#Generate results for different numbers of Lorentzians in the fit
Pklist = range(len(params_k_real))
results_corr_fit_pk = []
for p_k in Pklist:
print(p_k+1)
results_corr_fit_pk.append(generate_corr_results(p_k = p_k))
Pklist = range(len(params_k_real))
plot_result_expectations([
(results_corr_fit_pk[Kk], P11p,
'rand',
"P11 (correlation fit) k_R=k_I="+str(Kk+1))
for Kk in Pklist]);
#Nc = 5
tlist4 = np.linspace(0, 4*pi/Del, 600)
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(12,7))
axes.set_yticks([0.6,0.8,1])
axes.set_yticklabels([0.6,0.8,1])
axes.plot(tlist4, np.real(expect(results_corr_fit_pk[0].states,P11p)), 'y', linewidth=2, label="Correlation Function Fit $k_R=k_I=1$")
axes.plot(tlist4, np.real(expect(results_corr_fit_pk[2].states,P11p)), 'y-.', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes.plot(tlist4, np.real(expect(results_spectral_fit_pk[0].states,P11p)), 'b', linewidth=2, label="Spectral Density Fit $k_J=1$")
axes.plot(tlist4, np.real(expect(results_spectral_fit_pk[2].states,P11p)), 'g--', linewidth=2, label="Spectral Density Fit $k_J=3$")
axes.plot(tlist4, np.real(expect(results_spectral_fit_pk[3].states,P11p)), 'r-.', linewidth=2, label="Spectral Density Fit $k_J=4$")
axes.set_ylabel(r'$\rho_{11}$',fontsize=30)
axes.set_xlabel(r'$t\;\omega_c$',fontsize=30)
axes.locator_params(axis='y', nbins=3)
axes.locator_params(axis='x', nbins=3)
axes.legend(loc=0, fontsize=20)
fig.savefig("figFit.pdf")
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Now we construct the BosonicBath coefficients and frequencies from the fit to the correlation function:
End of explanation
"""
|
augfranco/CienciadosDados | Projeto02.ipynb | mit | %%capture
#Installing tweepy
!pip install tweepy
"""
Explanation: Project 2 - Automatic Sentiment Classifier - Augusto Franco and Pedro Isidoro
You have been hired by a company to analyze how customers are reacting to a certain product on Twitter. The company wants you to create a program that will analyze the available messages and classify each one as "relevant" or "irrelevant". The goal is for negative messages, ones that damage the product's name or otherwise deserve attention, to trigger an alert for the marketing team.<br /><br />
As a Data Science student, you remembered Bayes' Theorem, and more specifically the Naive-Bayes classifier, which is widely used in e-mail anti-spam filters. The classifier makes it possible to compute the probability that a message is relevant given the words in its content.<br /><br />
To build the project's MVP (minimum viable product), you need to implement a version of the classifier that "learns" what is relevant from a training set and compares the performance of the results against a test set.<br /><br />
Once validated, your prototype will also be able to capture and classify messages from the platform automatically.
Project information
Deadline: Sep 13 until 23:59.<br />
Group: 1 or 2 people.<br /><br />
Deliverables via GitHub:
* Notebook file with the classifier code, following the guidelines below.
* Excel file with the training and test sets fully classified.
Do NOT publish the file with the Twitter access keys/tokens.
Check 3:
By September 6 at 23:59, the notebook and the xlsx must be on GitHub with the following evidence:
* Twitter account created.
* Product chosen.
* Excel file containing the training and test sets already classified.
Suggested reading:<br />
http://docs.tweepy.org/en/v3.5.0/index.html<br />
https://monkeylearn.com/blog/practical-explanation-naive-bayes-classifier/
Preparing the environment
Installing the tweepy library to connect to Twitter:
End of explanation
"""
import tweepy
import math
import os.path
import pandas as pd
import json
from random import shuffle
import string
"""
Explanation: Importing the libraries that will be used. Feel free to add others.
End of explanation
"""
#Twitter authentication data:
#Put your Twitter account handle here: @fulano
#reading the file in JSON format
with open('auth.pass') as fp:
data = json.load(fp)
#Configuring the library. Do not modify
auth = tweepy.OAuthHandler(data['consumer_key'], data['consumer_secret'])
auth.set_access_token(data['access_token'], data['access_token_secret'])
"""
Explanation: Authenticating with Twitter
To capture the data you need a registered Twitter account:
Account: @augfranco97
If you do not have one yet: https://twitter.com/signup
Then you need to register an app in order to use the library: https://apps.twitter.com/
Inside the app registration, on the Keys and Access Tokens tab, note the following fields:
Consumer Key (API Key)
Consumer Secret (API Secret)
Further down, generate a Token and note it as well:
Access Token
Access Token Secret
Fill in the values in the "auth.pass" file
WARNING: Never publish this file's data online (GitHub, etc). It contains the keys needed to perform Twitter operations automatically, so leaking it is equivalent to being "hacked". With this data, malicious users can perform all the manual operations (tweet, follow, block/unblock, list followers, etc). For the purposes of this project, the file does not need to be delivered!!!
End of explanation
"""
#Chosen product:
produto = 'itau'
#Minimum number of captured messages:
n = 500
#Minimum number of messages for the training set:
t = 300
#Language filter; choose one from the ISO 639-1 table.
lang = 'pt'
"""
Explanation: Collecting data
Now let's collect the data. Keep in mind that, depending on the chosen product, there may not be a significant number of messages, or there may be many retweets.<br /><br />
Configuring:
End of explanation
"""
#Creates an object for the capture
api = tweepy.API(auth)
#Starts the capture; for more details, see the tweepy documentation
i = 1
msgs = []
for msg in tweepy.Cursor(api.search, q=produto, lang=lang).items():
msgs.append(msg.text.lower())
i += 1
if i > n:
break
#Shuffling the messages to reduce a possible bias
shuffle(msgs)
"""
Explanation: Capturing the data from Twitter:
End of explanation
"""
#Checks whether the file already exists, so as not to overwrite a finished set
if not os.path.isfile('./{0}.xlsx'.format(produto)):
#Opens the file for writing
writer = pd.ExcelWriter('{0}.xlsx'.format(produto))
#splits the message set into two sheets
dft = pd.DataFrame({'Treinamento' : pd.Series(msgs[:t])})
dft.to_excel(excel_writer = writer, sheet_name = 'Treinamento', index = False)
dfc = pd.DataFrame({'Teste' : pd.Series(msgs[t:])})
dfc.to_excel(excel_writer = writer, sheet_name = 'Teste', index = False)
#closes the file
writer.save()
exceltr=pd.read_excel("itau.xlsx")
excelte = pd.read_excel("itauteste.xlsx")
SIMs = []
NAOs = []
listaprobs = []
"""
Explanation: Saving the data to an Excel spreadsheet:
End of explanation
"""
for i in range(len(exceltr.Treinamento)):
coluna1 = exceltr.Treinamento[i].lower().split()
exceltr.Treinamento[i] = coluna1
for k in range(len(coluna1)):
for punctuation in string.punctuation:
coluna1[k] = coluna1[k].replace(punctuation, '')
coluna1[k] = coluna1[k].replace('—', '')
if exceltr.Relevancia[i] == 'sim':
SIMs.append(coluna1[k])
elif exceltr.Relevancia[i] == 'não':
NAOs.append(coluna1[k])
while '' in coluna1:
coluna1.remove('')
while '' in SIMs:
SIMs.remove('')
while '' in NAOs:
NAOs.remove('')
"""
Explanation: Classifying the messages
Now you must open the Excel file with the captured messages and classify in column B whether each message is relevant or not.<br />
Don't forget to give the column a name in cell B1.<br /><br />
Do the same in the Control sheet.
Building the Naive-Bayes classifier
With the training set assembled, start developing the classifier. Write your code below:
Optionally:
* Clean the messages by removing characters such as newline, :, ", ', (, ), etc. Do not remove emojis.<br />
* Fix spacing between words and/or emojis.
* Propose other cleanups/transformations that do not affect the quality of the information.
Cleaning strings:
End of explanation
"""
for i in exceltr.Relevancia:
if i == 'sim':
listaprobs.append(i)
if i == 'não':
listaprobs.append(i)
QY = 0
QN = 0
for a in listaprobs:
if a == 'sim':
QY += 1
if a == 'não':
QN += 1
#Counts each word in the list
LS = [[x,SIMs.count(x)] for x in set(SIMs)]
LN = [[y,NAOs.count(y)] for y in set(NAOs)]
#Computes how many words exist in the sample space
palav = 0
sins = 0
naos = 0
for a in range(len(LS)):
palav = palav + LS[a][1]
sins = sins + LS[a][1]
for a in range(len(LN)):
palav = palav + LN[a][1]
naos = naos + LN[a][1]
print("Quantidade de sim", QY)
print("Quantidade de sim", QN)
print('Total de palavras', len(LS)+len(LN))
print('Total relevantes', len(LS))
print('Total não relevantes', len(LN))
#Cleaning the new sheet
for a in range(len(excelte.Teste)):
coluna11 = excelte.Teste[a].lower().split()
for b in range(len(coluna11)):
for punctuation in string.punctuation:
coluna11[b] = coluna11[b].replace(punctuation, '')
coluna11[b] = coluna11[b].replace('—', '')
coluna11[b] = coluna11[b].replace('rt', '')
while '' in coluna11:
coluna11.remove('')
excelte.Teste[a] = coluna11
"""
Explanation: Calculations and word counting:
End of explanation
"""
probSIM = []
l = 1
for i in range(len(excelte.Teste)):
clinha = []
for k in range(len(excelte.Teste[i])):
chance = 0
for j in range(len(LS)):
if LS[j][0] == excelte.Teste[i][k]:
chance = ((LS[j][1]+1)/(len(LS)+(len(LS)+len(LN))))
break
if chance > 0:
clinha.append(chance)
elif chance == 0:
clinha.append(1/(len(LS)+(len(LS)+len(LN))))
l = 1
for x in clinha:
l *= x
probSIM.append(l)
probNAO = []
l = 1
for i in range(len(excelte.Teste)):
clinha = []
for k in range(len(excelte.Teste[i])):
chance = 0
for j in range(len(LN)):
if LN[j][0] == excelte.Teste[i][k]:
chance = ((LN[j][1]+1)/(len(LN)+(len(LS)+len(LN))))
break
if chance > 0:
clinha.append(chance)
elif chance == 0:
clinha.append(1/(len(LN)+(len(LS)+len(LN))))
l = 1
for x in clinha:
l *= x
probNAO.append(l)
"""
Explanation: Computing the relevance probabilities of the tweets:
End of explanation
"""
L2 = []
for a in range(len(probSIM)):
if probSIM[a]>probNAO[a]:
L2.append('sim')
else:
L2.append('não')
"""
Explanation: Finally, we can compare the probabilities:
End of explanation
"""
print("Positivos Falsos", (20/161))
print("Positivos verdadeiros", (16/39))
print("Irrelevantes verdadeiros", (141/161))
print("Irrelevantes Falsos", (23/39))
"""
Explanation: Checking performance
Now you must test your classifier with the test set.<br /><br />
You must extract the following measures:
* Percentage of false positives (marked as relevant but not actually relevant)
* Percentage of true positives (marked as relevant and actually relevant)
* Percentage of true negatives (marked as not relevant and actually not relevant)
* Percentage of false negatives (marked as not relevant but actually relevant)
Optionally:
* Create intermediate relevance categories based on the difference between the probabilities. Example: very relevant, relevant, neutral, irrelevant and very irrelevant.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.18/_downloads/ae1f146de31a4665192262a211d6d103/plot_metadata_epochs.ipynb | bsd-3-clause | # Authors: Chris Holdgraf <choldgraf@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import mne
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the internet
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# The metadata exists as a Pandas DataFrame
print(epochs.metadata.head(10))
"""
Explanation: Pandas querying and metadata with Epochs objects
Demonstrating pandas-style string querying with Epochs metadata.
For related uses of :class:mne.Epochs, see the starting tutorial
tut-epochs-class.
Sometimes you may have a complex trial structure that cannot be easily
summarized as a set of unique integers. In this case, it may be useful to use
the metadata attribute of :class:mne.Epochs objects. This must be a
:class:pandas.DataFrame where each row corresponds to an epoch, and each
column corresponds to a metadata attribute of each epoch. Columns must
contain either strings, ints, or floats.
In this dataset, subjects were presented with individual words
on a screen, and the EEG activity in response to each word was recorded.
We know which word was displayed in each epoch, as well as
extra information about the word (e.g., word frequency).
Loading the data
First we'll load the data. If metadata exists for an :class:mne.Epochs
fif file, it will automatically be loaded in the metadata attribute.
End of explanation
"""
av1 = epochs['Concreteness < 5 and WordFrequency < 2'].average()
av2 = epochs['Concreteness > 5 and WordFrequency > 2'].average()
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
av1.plot_joint(show=False, **joint_kwargs)
av2.plot_joint(show=False, **joint_kwargs)
"""
Explanation: We can use this metadata attribute to select subsets of Epochs. This
uses the Pandas :meth:pandas.DataFrame.query method under the hood.
Any valid query string will work. Below we'll make two plots to compare
between them:
End of explanation
"""
words = ['film', 'cent', 'shot', 'cold', 'main']
epochs['WORD in {}'.format(words)].plot_image(show=False)
"""
Explanation: Next we'll choose a subset of words to keep.
End of explanation
"""
epochs['cent'].average().plot(show=False, time_unit='s')
"""
Explanation: Note that traditional epochs sub-selection still works. The traditional
MNE methods for selecting epochs will supersede the rich metadata querying.
End of explanation
"""
# Create two new metadata columns
metadata = epochs.metadata
is_concrete = metadata["Concreteness"] > metadata["Concreteness"].median()
metadata["is_concrete"] = np.where(is_concrete, 'Concrete', 'Abstract')
is_long = metadata["NumberOfLetters"] > 5
metadata["is_long"] = np.where(is_long, 'Long', 'Short')
epochs.metadata = metadata
"""
Explanation: Below we'll show a more involved example that leverages the metadata
of each epoch. We'll create a new column in our metadata object and use
it to generate averages for many subsets of trials.
End of explanation
"""
query = "is_long == '{0}' & is_concrete == '{1}'"
evokeds = dict()
for concreteness in ("Concrete", "Abstract"):
for length in ("Long", "Short"):
subset = epochs[query.format(length, concreteness)]
evokeds["/".join((concreteness, length))] = list(subset.iter_evoked())
# For the actual visualisation, we store a number of shared parameters.
style_plot = dict(
colors={"Long": "Crimson", "Short": "Cornflowerblue"},
linestyles={"Concrete": "-", "Abstract": ":"},
split_legend=True,
ci=.68,
show_sensors='lower right',
show_legend='lower left',
truncate_yaxis="max_ticks",
picks=epochs.ch_names.index("Pz"),
)
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
"""
Explanation: Now we can quickly extract (and plot) subsets of the data. For example, to
look at words split by word length and concreteness:
End of explanation
"""
letters = epochs.metadata["NumberOfLetters"].unique().astype(int).astype(str)
evokeds = dict()
for n_letters in letters:
evokeds[n_letters] = epochs["NumberOfLetters == " + n_letters].average()
style_plot["colors"] = {n_letters: int(n_letters)
for n_letters in letters}
style_plot["cmap"] = ("# of Letters", "viridis_r")
del style_plot['linestyles']
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
"""
Explanation: To compare words which are 4, 5, 6, 7 or 8 letters long:
End of explanation
"""
evokeds = dict()
query = "is_concrete == '{0}' & NumberOfLetters == {1}"
for concreteness in ("Concrete", "Abstract"):
for n_letters in letters:
subset = epochs[query.format(concreteness, n_letters)]
evokeds["/".join((concreteness, n_letters))] = subset.average()
style_plot["linestyles"] = {"Concrete": "-", "Abstract": ":"}
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
"""
Explanation: And finally, for the interaction between concreteness and continuous length
in letters:
End of explanation
"""
data = epochs.get_data()
metadata = epochs.metadata.copy()
epochs_new = mne.EpochsArray(data, epochs.info, metadata=metadata)
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Creating an :class:`mne.Epochs` object with metadata is done by passing
a :class:`pandas.DataFrame` to the ``metadata`` kwarg as follows:</p></div>
End of explanation
"""
|
cbuntain/TutorialSocialMediaCrisis | notebooks/T03 - Parsing Twitter Data.ipynb | apache-2.0 | jsonString = '{"key": "value"}'
# Parse the JSON string (json is part of Python's standard library)
import json

dictFromJson = json.loads(jsonString)
# Python now has a dictionary representing this data
print ("Resulting dictionary object:\n", dictFromJson)
# Will print the value
print ("Data stored in \"key\":\n", dictFromJson["key"])
# This will cause a KeyError: "value" is a value in the dictionary, not a key!
try:
    print ("Data stored in \"value\":\n", dictFromJson["value"])
except KeyError as err:
    print ("KeyError for missing key:", err)
"""
Explanation: Topic 3. JSON - JavaScript Object Notation
Much of the data with which we will work comes in the JavaScript Object Notation (JSON) format.
JSON is a lightweight text format that allows one to describe objects by keys and values without needing to specify a schema beforehand (as compared to XML).
Many "RESTful" APIs available on the web today return data in JSON format, and the data we have stored from Twitter follows this rule as well.
Python's JSON support is relatively robust and is included in the language under the json package.
This package allows us to read and write JSON to/from a string or file and convert many of Python's types into a text format.
JSON and Keys/Values
The main idea here is that JSON allows one to specify a key, or name, for some data and then that data's value as a string, number, or object.
An example line of JSON might look like:
{"key": "value"}
End of explanation
"""
jsonString = '{ "name": "Cody", "occupation": "PostDoc", "goal": "Tenure" }'
# Parse the JSON string
dictFromJson = json.loads(jsonString)
# Python now has a dictionary representing this data
print ("Resulting dictionary object:\n", dictFromJson)
"""
Explanation: Multiple Keys and Values
A JSON string/file can have many keys and values, but a key should always have a value.
We can have values without keys if we're doing arrays, but this can be awkward.
An example of JSON string with multiple keys is below:
{
"name": "Cody",
"occupation": "Student",
"goal": "PhD"
}
Note the comma after the first two values.
These commas are needed for valid JSON and to separate keys from other values.
End of explanation
"""
jsonString = '{"students": [{"name": "Cody", "occupation": "PostDoc", "goal": "Tenure"}, {"name": "Scott", "occupation": "Student", "goal": "Masters"}]}'
# Parse the JSON string
dictFromJson = json.loads(jsonString)
# Python now has a dictionary representing this data
print ("Resulting array:\n", dictFromJson)
print ("Each student:")
for student in dictFromJson["students"]:
print (student)
"""
Explanation: JSON and Arrays
The above JSON string describes an object whose name is "Cody".
How would we describe a list of similar students?
Arrays are useful here and are denoted with "[]" rather than the "{}" object notation.
For example:
{
"students": [
{
"name": "Cody",
"occupation": "Student",
"goal": "PhD"
},
{
"name": "Scott",
"occupation": "Student",
"goal": "Masters"
}
]
}
Again, note the comma between the "}" and "{" separating the two student objects and how they are both surrounded by "[]".
End of explanation
"""
jsonString = '[{"name": "Cody","occupation": "PostDoc","goal": "Tenure"},{"name": "Scott","occupation": "Student","goal": "Masters","completed": true}]'
# Parse the JSON string
arrFromJson = json.loads(jsonString)
# Python now has an array representing this data
print ("Resulting array:\n", arrFromJson)
print ("Each student:")
for student in arrFromJson:
print (student)
"""
Explanation: More JSON + Arrays
A couple of things to note:
1. JSON does not need a name for the array. It could be declared just as an array.
1. The student objects need not be identical.
As an example:
[
{
"name": "Cody",
"occupation": "Student",
"goal": "PhD"
},
{
"name": "Scott",
"occupation": "Student",
"goal": "Masters",
"completed": true
}
]
End of explanation
"""
jsonString = '{"disasters" : [{"event": "Nepal Earthquake","date": "25 April 2015","casualties": 8964,"magnitude": 7.8,"affectedAreas": [{"country": "Nepal","capital": "Kathmandu","population": 26494504},{"country": "India","capital": "New Dehli","population": 1276267000},{"country": "China","capital": "Beijing","population": 1376049000},{"country": "Bangladesh","capital": "Dhaka","population": 168957745}]}]}'
disasters = json.loads(jsonString)
for disaster in disasters["disasters"]:
print (disaster["event"])
print (disaster["date"])
for country in disaster["affectedAreas"]:
print (country["country"])
"""
Explanation: Nested JSON Objects
We've shown you can have an array as a value, and you can do the same with objects.
In fact, one of the powers of JSON is its essentially infinite depth/expressability.
You can very easily nest objects within objects, and JSON in the wild relies on this heavily.
An example:
{
"disasters" : [
{
"event": "Nepal Earthquake",
"date": "25 April 2015",
"casualties": 8964,
"magnitude": 7.8,
"affectedAreas": [
{
"country": "Nepal",
"capital": "Kathmandu",
"population": 26494504
},
{
"country": "India",
"capital": "New Dehli",
"population": 1276267000
},
{
"country": "China",
"capital": "Beijing",
"population": 1376049000
},
{
"country": "Bangladesh",
"capital": "Dhaka",
"population": 168957745
}
]
}
]
}
End of explanation
"""
exObj = {
"event": "Nepal Earthquake",
"date": "25 April 2015",
"casualties": 8964,
"magnitude": 7.8
}
print ("Python Object:", exObj, "\n")
# now we can convert to JSON
print ("Object JSON:")
print (json.dumps(exObj), "\n")
# We can also pretty-print the JSON
print ("Readable JSON:")
print (json.dumps(exObj, indent=4)) # Indent adds space
"""
Explanation: From Python Dictionaries to JSON
We can also go from a Python object to JSON with relative ease.
End of explanation
"""
tweetFilename = "first_BlackLivesMatter.json"
# Use Python's os.path.join to account for Windows, OSX/Linux differences
tweetFilePath = os.path.join("..", "00_data", "ferguson", tweetFilename)
print ("Opening", tweetFilePath)
# We use codecs to ensure we open the file in Unicode format,
# which supports larger character encodings
tweetFile = codecs.open(tweetFilePath, "r", "utf8")
# Read in the whole file, which contains ONE tweet and close
tweetFileContent = tweetFile.read()
tweetFile.close()
# Print the raw json
print ("Raw Tweet JSON:\n")
print (tweetFileContent)
# Convert the JSON to a Python object
tweet = json.loads(tweetFileContent)
print ("Tweet Object:\n")
print (tweet)
# We could have done this in one step with json.load()
# called on the open file, but our data files have
# a single tweet JSON per line, so this is more consistent
"""
Explanation: Reading Twitter JSON
We should now have all the tools necessary to understand how Python can read Twitter JSON data.
To show this, we'll read in a single tweet from the Ferguson, MO protests, review its format, and parse it with Python's JSON loader.
End of explanation
"""
# What fields can we see?
print ("Keys:")
for k in sorted(tweet.keys()):
print ("\t", k)
print ("Tweet Text:", tweet["text"])
print ("User Name:", tweet["user"]["screen_name"])
print ("Author:", tweet["user"]["name"])
print("Source:", tweet["source"])
print("Retweets:", tweet["retweet_count"])
print("Favorited:", tweet["favorite_count"])
print("Tweet Location:", tweet["place"])
print("Tweet GPS Coordinates:", tweet["coordinates"])
print("Twitter's Guessed Language:", tweet["lang"])
# Tweets have a list of hashtags, mentions, URLs, and other
# attachments in "entities" field
print ("\n", "Entities:")
for eType in tweet["entities"]:
print ("\t", eType)
for e in tweet["entities"][eType]:
print ("\t\t", e)
"""
Explanation: Twitter JSON Fields
This tweet is pretty big, but we can still see some of the fields it contains.
Note it also has many nested fields.
We'll go through some of the more important fields below.
End of explanation
"""
|
mattilyra/gensim | docs/notebooks/gensim_news_classification.ipynb | lgpl-2.1 | import os
import re
import operator
import matplotlib.pyplot as plt
import warnings
import gensim
import numpy as np
warnings.filterwarnings('ignore') # Let's not pay heed to them right now
import nltk
nltk.download('stopwords') # Let's make sure the 'stopword' package is downloaded & updated
nltk.download('wordnet') # Let's also download wordnet, which will be used for lemmatization
from gensim.models import CoherenceModel, LdaModel, LsiModel, HdpModel
from gensim.models.wrappers import LdaMallet
from gensim.corpora import Dictionary
from pprint import pprint
from smart_open import smart_open
%matplotlib inline
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
"""
Explanation: News classification with topic models in gensim
News article classification is a task which is performed on a huge scale by news agencies all over the world. We will be looking into how topic modeling can be used to accurately classify news articles into different categories such as sports, technology, politics etc.
Our aim in this tutorial is to build a topic model that produces topics we can easily interpret. Such a topic model can be used to discover hidden structure in the corpus and can also be used to determine the membership of a news article in one of the topics.
For this tutorial, we will be using the Lee corpus which is a shortened version of the Lee Background Corpus. The shortened version consists of 300 documents selected from the Australian Broadcasting Corporation's news mail service. It consists of texts of headline stories from around the year 2000-2001.
Accompanying slides can be found here.
Requirements
In this tutorial we look at how different topic models can be easily created using gensim.
Following are the dependencies for this tutorial:
- Gensim Version >=0.13.1 would be preferred since we will be using topic coherence metrics extensively here.
- matplotlib
- nltk.stopwords and nltk.wordnet
- pyLDAVis
We will be playing around with 4 different topic models here:
- LSI (Latent Semantic Indexing)
- HDP (Hierarchical Dirichlet Process)
- LDA (Latent Dirichlet Allocation)
- LDA (tweaked with topic coherence to find optimal number of topics) and
- LDA as LSI with the help of topic coherence metrics
First we'll fit those topic models on our existing data, then we'll compare them against each other and see how they rank in terms of human interpretability.
All can be found in gensim and can be easily used in a plug-and-play fashion. We will tinker with the LDA model using the newly added topic coherence metrics in gensim based on this paper by Roeder et al and see how the resulting topic model compares with the existing ones.
End of explanation
"""
with smart_open(lee_train_file, 'rb') as f:
for n, l in enumerate(f):
if n < 5:
print([l])
def build_texts(fname):
"""
Function to build tokenized texts from file
Parameters:
----------
fname: File to be read
Returns:
-------
yields preprocessed line
"""
with smart_open(fname, 'rb') as f:
for line in f:
yield gensim.utils.simple_preprocess(line, deacc=True, min_len=3)
train_texts = list(build_texts(lee_train_file))
len(train_texts)
"""
Explanation: Analysing our corpus.
- The first document talks about a bushfire that had occured in New South Wales.
- The second talks about conflict between India and Pakistan in Kashmir.
- The third talks about road accidents in the New South Wales area.
- The fourth one talks about Argentina's economic and political crisis during that time.
- The last one talks about the use of drugs by midwives in a Sydney hospital.
Our final topic model should be giving us keywords which we can easily interpret and make a small summary out of. Without this the topic model cannot be of much practical use.
End of explanation
"""
bigram = gensim.models.Phrases(train_texts) # for bigram collocation detection
bigram[['new', 'york', 'example']]
from nltk.corpus import stopwords
stops = set(stopwords.words('english')) # nltk stopwords list
def process_texts(texts):
"""
Function to process texts. Following are the steps we take:
1. Stopword Removal.
2. Collocation detection.
3. Lemmatization (not stem since stemming can reduce the interpretability).
Parameters:
----------
texts: Tokenized texts.
Returns:
-------
texts: Pre-processed tokenized texts.
"""
texts = [[word for word in line if word not in stops] for line in texts]
texts = [bigram[line] for line in texts]
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
# lemmatize each token individually -- lemmatize() expects a single word
texts = [[lemmatizer.lemmatize(word, pos='v') for word in line] for line in texts]
return texts
train_texts = process_texts(train_texts)
train_texts[5:6]
"""
Explanation: Preprocessing our data. Remember: Garbage In Garbage Out
"NLP is 80% preprocessing."
-Lev Konstantinovskiy
This is the single most important step in setting up a good topic modeling system. If the preprocessing is not good, the algorithm can't do much since we would be feeding it a lot of noise. In this tutorial, we will be filtering out the noise using the following steps in this order for each line:
1. Stopword removal using NLTK's english stopwords dataset.
2. Bigram collocation detection (frequently co-occurring tokens) using gensim's Phrases. This is our first attempt to find some hidden structure in the corpus. You can even try trigram collocation detection.
3. Lemmatization (using NLTK's WordNetLemmatizer with verb part-of-speech normalization). Lemmatization is generally better than stemming in the case of topic modeling since the lemmatized words remain understandable. However, stemming might be preferred if the data is only being fed into a vectorizer and isn't intended to be viewed.
End of explanation
"""
dictionary = Dictionary(train_texts)
corpus = [dictionary.doc2bow(text) for text in train_texts]
"""
Explanation: Finalising our dictionary and corpus
End of explanation
"""
lsimodel = LsiModel(corpus=corpus, num_topics=10, id2word=dictionary)
lsimodel.show_topics(num_topics=5) # Showing only the top 5 topics
lsitopics = lsimodel.show_topics(formatted=False)
"""
Explanation: Topic modeling with LSI
This is a useful topic modeling algorithm in that it can rank topics by itself. Thus it outputs topics in a ranked order. However it does require a num_topics parameter (set to 200 by default) to determine the number of latent dimensions after the SVD.
End of explanation
"""
hdpmodel = HdpModel(corpus=corpus, id2word=dictionary)
hdpmodel.show_topics()
hdptopics = hdpmodel.show_topics(formatted=False)
"""
Explanation: Topic modeling with HDP
An HDP model is fully unsupervised. It can also determine the ideal number of topics it needs through posterior inference.
End of explanation
"""
ldamodel = LdaModel(corpus=corpus, num_topics=10, id2word=dictionary)
"""
Explanation: Topic modeling using LDA
This is one the most popular topic modeling algorithms today. It is a generative model in that it assumes each document is a mixture of topics and in turn, each topic is a mixture of words. To understand it better you can watch this lecture by David Blei. Let's choose 10 topics to initialize this.
End of explanation
"""
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary)
ldatopics = ldamodel.show_topics(formatted=False)
"""
Explanation: pyLDAvis is a great way to visualize an LDA model. To summarize briefly, the area of each circle represents the prevalence of the topic. The length of the bars on the right represents the membership of a term in a particular topic. pyLDAvis is based on this paper.
End of explanation
"""
def evaluate_graph(dictionary, corpus, texts, limit):
"""
Function to display num_topics - LDA graph using c_v coherence
Parameters:
----------
dictionary : Gensim dictionary
corpus : Gensim corpus
limit : topic limit
Returns:
-------
lm_list : List of LDA topic models
c_v : Coherence values corresponding to the LDA model with respective number of topics
"""
c_v = []
lm_list = []
for num_topics in range(1, limit):
lm = LdaModel(corpus=corpus, num_topics=num_topics, id2word=dictionary)
lm_list.append(lm)
cm = CoherenceModel(model=lm, texts=texts, dictionary=dictionary, coherence='c_v')
c_v.append(cm.get_coherence())
# Show graph
x = range(1, limit)
plt.plot(x, c_v)
plt.xlabel("num_topics")
plt.ylabel("Coherence score")
plt.legend(["c_v"], loc='best')
plt.show()
return lm_list, c_v
%%time
lmlist, c_v = evaluate_graph(dictionary=dictionary, corpus=corpus, texts=train_texts, limit=10)
pyLDAvis.gensim.prepare(lmlist[2], corpus, dictionary)
lmtopics = lmlist[5].show_topics(formatted=False)
"""
Explanation: Finding out the optimal number of topics
Introduction to topic coherence:
<img src="https://rare-technologies.com/wp-content/uploads/2016/06/pipeline.png">
Topic coherence in essence measures the human interpretability of a topic model. Traditionally perplexity has been used to evaluate topic models however this does not correlate with human annotations at times. Topic coherence is another way to evaluate topic models with a much higher guarantee on human interpretability. Thus this can be used to compare different topic models among many other use-cases. Here's a short blog I wrote explaining topic coherence:
What is topic coherence?
End of explanation
"""
def ret_top_model():
"""
Since LDAmodel is a probabilistic model, it comes up with different topics each time we run it. To control the
quality of the topic model we produce, we can see what the interpretability of the best topic is and keep
evaluating the topic model until this threshold is crossed.
Returns:
-------
lm: Final evaluated topic model
top_topics: ranked topics in decreasing order. List of tuples
"""
top_topics = [(0, 0)]
while top_topics[0][1] < 0.97:
lm = LdaModel(corpus=corpus, id2word=dictionary)
coherence_values = {}
for n, topic in lm.show_topics(num_topics=-1, formatted=False):
topic = [word for word, _ in topic]
cm = CoherenceModel(topics=[topic], texts=train_texts, dictionary=dictionary, window_size=10)
coherence_values[n] = cm.get_coherence()
top_topics = sorted(coherence_values.items(), key=operator.itemgetter(1), reverse=True)
return lm, top_topics
lm, top_topics = ret_top_model()
print(top_topics[:5])
"""
Explanation: LDA as LSI
One of the problems with LDA is that if we train it on a large number of topics, the topics get "lost" among the numbers. Let us see if we can dig out the best topics from the best LDA model we can produce. The function below can be used to control the quality of the LDA model we produce.
End of explanation
"""
pprint([lm.show_topic(topicid) for topicid, c_v in top_topics[:10]])
lda_lsi_topics = [[word for word, prob in lm.show_topic(topicid)] for topicid, c_v in top_topics]
"""
Explanation: Inference
We can clearly see below that the first topic is about cinema, second is about email malware, third is about the land which was given back to the Larrakia aboriginal community of Australia in 2000. Then there's one about Australian cricket. LDA as LSI has worked wonderfully in finding out the best topics from within LDA.
End of explanation
"""
lsitopics = [[word for word, prob in topic] for topicid, topic in lsitopics]
hdptopics = [[word for word, prob in topic] for topicid, topic in hdptopics]
ldatopics = [[word for word, prob in topic] for topicid, topic in ldatopics]
lmtopics = [[word for word, prob in topic] for topicid, topic in lmtopics]
lsi_coherence = CoherenceModel(topics=lsitopics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
hdp_coherence = CoherenceModel(topics=hdptopics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
lda_coherence = CoherenceModel(topics=ldatopics, texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
lm_coherence = CoherenceModel(topics=lmtopics, texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
lda_lsi_coherence = CoherenceModel(topics=lda_lsi_topics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence()
def evaluate_bar_graph(coherences, indices):
"""
Function to plot bar graph.
coherences: list of coherence values
indices: Indices to be used to mark bars. Length of this and coherences should be equal.
"""
assert len(coherences) == len(indices)
n = len(coherences)
x = np.arange(n)
plt.bar(x, coherences, width=0.2, tick_label=indices, align='center')
plt.xlabel('Models')
plt.ylabel('Coherence Value')
evaluate_bar_graph([lsi_coherence, hdp_coherence, lda_coherence, lm_coherence, lda_lsi_coherence],
['LSI', 'HDP', 'LDA', 'LDA_Mod', 'LDA_LSI'])
"""
Explanation: Evaluating all the topic models
Any topic model which can come up with topic terms can be plugged into the coherence pipeline. You can even plug in an NMF topic model created with scikit-learn.
End of explanation
"""
from gensim.topic_coherence import (segmentation, probability_estimation,
direct_confirmation_measure, indirect_confirmation_measure,
aggregation)
from gensim.matutils import argsort
from collections import namedtuple
make_pipeline = namedtuple('Coherence_Measure', 'seg, prob, conf, aggr')
measure = make_pipeline(segmentation.s_one_pre,
probability_estimation.p_boolean_sliding_window,
direct_confirmation_measure.log_ratio_measure,
aggregation.arithmetic_mean)
"""
Explanation: Customizing the topic coherence measure
Till now we only used the c_v coherence measure. There are others such as u_mass, c_uci, c_npmi. All of these calculate coherence in a different way. c_v is found to be most in line with human ratings but can be much slower than u_mass since it uses a sliding window over the texts.
Making your own coherence measure
Let's modify c_uci to use s_one_pre instead of s_one_one segmentation
End of explanation
"""
topics = []
for topic in lm.state.get_lambda():
bestn = argsort(topic, topn=10, reverse=True)
topics.append(bestn)
"""
Explanation: To get topics out of the topic model:
End of explanation
"""
# Perform segmentation
segmented_topics = measure.seg(topics)
"""
Explanation: Step 1: Segmentation
End of explanation
"""
# Since this is a window-based coherence measure we will perform window based prob estimation
per_topic_postings, num_windows = measure.prob(texts=train_texts, segmented_topics=segmented_topics,
dictionary=dictionary, window_size=2)
"""
Explanation: Step 2: Probability estimation
End of explanation
"""
confirmed_measures = measure.conf(segmented_topics, per_topic_postings, num_windows, normalize=False)
"""
Explanation: Step 3: Confirmation Measure
End of explanation
"""
print(measure.aggr(confirmed_measures))
"""
Explanation: Step 4: Aggregation
End of explanation
"""
|
blue-yonder/tsfresh | notebooks/advanced/05 Timeseries Forecasting (multiple ids).ipynb | mit | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import roll_time_series, make_forecasting_frame
from tsfresh.utilities.dataframe_functions import impute
try:
import pandas_datareader.data as web
except ImportError:
print("You need to install the pandas_datareader. Run pip install pandas_datareader.")
from sklearn.ensemble import AdaBoostRegressor
"""
Explanation: Timeseries Forecasting
This notebook explains how to use tsfresh in time series forecasting.
Make sure you also read through the documentation to learn more on this feature.
It is basically a copy of the other time series forecasting notebook, but this time using more than one
stock.
This is conceptually not much different, but the pandas multi-index magic is a bit advanced :-)
We will use the Ford, Apple and Alphabet stocks.
Please find all documentation in the other notebook.
End of explanation
"""
df = web.DataReader(['F', "AAPL", "GOOGL"], 'stooq')["High"]
df.head()
plt.figure(figsize=(15, 6))
df.plot(ax=plt.gca())
plt.show()
"""
Explanation: Reading the data
End of explanation
"""
df_melted = df.copy()
df_melted["date"] = df_melted.index
df_melted = df_melted.melt(id_vars="date", value_name="high").sort_values(["Symbols", "date"])
df_melted = df_melted[["Symbols", "date", "high"]]
df_melted.head()
"""
Explanation: This time we need to make sure to preserve the stock symbol information while reordering:
End of explanation
"""
df_rolled = roll_time_series(df_melted, column_id="Symbols", column_sort="date",
max_timeshift=20, min_timeshift=5)
df_rolled.head()
"""
Explanation: Create training data sample
End of explanation
"""
X = extract_features(df_rolled.drop("Symbols", axis=1),
column_id="id", column_sort="date", column_value="high",
impute_function=impute, show_warnings=False)
X.head()
"""
Explanation: Extract Features
End of explanation
"""
# split up the two parts of the index and give them proper names
X = X.set_index([X.index.map(lambda x: x[0]), X.index.map(lambda x: x[1])], drop=True)
X.index.names = ["Symbols", "last_date"]
X.head()
"""
Explanation: We make the data a bit easier to work with by giving them a multi-index instead ot the tuple index:
End of explanation
"""
X.loc["AAPL", pd.to_datetime('2020-07-14')]
"""
Explanation: Our (AAPL, 2020-07-14 00:00:00) is also in the data again:
End of explanation
"""
y = df_melted.groupby("Symbols").apply(lambda x: x.set_index("date")["high"].shift(-1)).T.unstack()
"""
Explanation: Just to repeat: the features in this row were only calculated using the time series values of AAPL up to and including 2015-07-15 and the last 20 days.
Prediction
The next line might look like magic if you are not used to pandas transformations, but what it does is:
for each stock symbol separately:
* sort by date
* take the high value
* shift 1 time step in the future
* bring into the same multi-index format as X above
End of explanation
"""
y["AAPL", pd.to_datetime("2020-07-13")], df.loc[pd.to_datetime("2020-07-14"), "AAPL"]
y = y[y.index.isin(X.index)]
X = X[X.index.isin(y.index)]
"""
Explanation: Quick consistency test:
End of explanation
"""
X_train = X.loc[(slice(None), slice(None, "2018")), :]
X_test = X.loc[(slice(None), slice("2019", "2020")), :]
y_train = y.loc[(slice(None), slice(None, "2018"))]
y_test = y.loc[(slice(None), slice("2019", "2020"))]
X_train_selected = select_features(X_train, y_train)
"""
Explanation: The splitting into train and test samples works in principle the same as with a single identifier, but this time we have a multi-index symbol-date, so the loc call looks a bit more complicated:
End of explanation
"""
adas = {stock: AdaBoostRegressor() for stock in ["AAPL", "F", "GOOGL"]}
for stock, ada in adas.items():
ada.fit(X_train_selected.loc[stock], y_train.loc[stock])
"""
Explanation: We are training a regressor for each of the stocks separately
End of explanation
"""
X_test_selected = X_test[X_train_selected.columns]
y_pred = pd.concat({
stock: pd.Series(adas[stock].predict(X_test_selected.loc[stock]), index=X_test_selected.loc[stock].index)
for stock in adas.keys()
})
y_pred.index.names = ["Symbols", "last_date"]
plt.figure(figsize=(15, 6))
y.unstack("Symbols").plot(ax=plt.gca())
y_pred.unstack("Symbols").plot(ax=plt.gca(), legend=None, marker=".")
"""
Explanation: Now let's check again how good our prediction is:
End of explanation
"""
|
tata-antares/tagging_LHCb | Analysis-scheme.ipynb | apache-2.0 | from IPython.display import Image
import pandas
"""
Explanation: Inclusive B-tagging
Authors:
Tatiana Likhomanenko (contact)
Alexey Rogozhnikov
Denis Derkach
Data (from working group):
real data $B^{\pm} \to J/\psi K^{\pm}$ (RECO 14), 2012
real data $B_d \to J/\psi K^*$ (RECO 14), 2012 (use EPM for asymmetry estimation)
Apply sPlot to obtain sWeight ~ P(B)
Monte Carlo:
MC $B^{\pm} \to J/\psi K^{\pm}$ for training
MC for cross check
$B_d \to J/\psi K_s$
$B_d \to J/\psi K^*$
End of explanation
"""
pandas.set_option('display.precision', 4)
pandas.read_csv('img/old-tagging-parts.csv').drop(['AUC, with untag', '$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: Old tagging
https://github.com/tata-antares/tagging_LHCb/blob/master/old-tagging.ipynb
We first tested the current algorithm (OS taggers: muon, electron, kaon, vertex).
TMVA original method was compared with XGBoost.
isotonic symmetric calibration
use different train-test divisions to calculate $D^2$
compute mean and std
detail see below (the same formulas)
Data
Taggers: electron, muon, kaon and vertex
End of explanation
"""
pandas.set_option('display.precision', 4)
pandas.read_csv('img/old-tagging-parts-MC.csv').drop(['AUC, with untag', '$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: MC
Taggers: electron, muon, kaon and vertex
End of explanation
"""
pandas.set_option('display.precision', 4)
pandas.read_csv('img/old-tagging.csv').drop(['$\Delta$ AUC, with untag'], axis=1)
pandas.set_option('display.precision', 4)
pandas.read_csv('img/old-tagging-MC.csv').drop(['$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: Taggers combination
We then tested a combination with two calibrations for individual taggers:
isotonic regression
logistic regression.
Combination was calibrated using isotonic regression.
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/eff_OS.csv').drop(['$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: Additional information
Details see in the previous presentation: https://indico.cern.ch/event/369520/contribution/3/attachments/1178333/1704665/15.10.28.Tagging.pdf
$\epsilon_{tag}$ calculation
$$N (\text{B events, passed selection}) = \sum_{\text{B events, passed selection}} sw_i$$
$$N (\text{all B events}) = \sum_{\text{all B events}} sw_i,$$
where $sw_i$ - sPLot weight
$$\epsilon_{tag} = \frac{N (\text{passed selection})} {N (\text{all events})}$$
$$\Delta\epsilon_{tag} = \frac{\sqrt{N (\text{passed selection})}} {N (\text{all events})}$$
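These two formulas translate directly into a numerical sketch (the sWeights and the selection mask below are made up for illustration):

```python
import numpy as np

# Hypothetical sPlot weights and a hypothetical tagging-selection mask
sweights = np.array([0.9, 1.1, 0.8, 1.2, 1.0])
tagged = np.array([True, True, False, True, False])

n_tagged = np.sum(sweights[tagged])   # N(passed selection)
n_all = np.sum(sweights)              # N(all events)

eps_tag = n_tagged / n_all
delta_eps_tag = np.sqrt(n_tagged) / n_all
print(eps_tag, delta_eps_tag)
```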
Data for training
data_sw_passed - tracks/vertices with B-sWeight > 1, are used for training
data_sw_not_passed - tracks/vertices with B-sWeight <= 1, are tagged after training
Training
Track Features (sig = signal, part = tagger track):
cos_diff_phi = $\cos(\phi^{sig} - \phi^{\rm part})$
diff_pt = $\max(p_T^{part}) - p_T^{part}$
partPt= $p_T^{part}$
max_PID_e_mu = $\max(PIDNN(e), PIDNN(\mu))^{part}$
partP = $p^{part}$
nnkrec = Number of reconstructed vertices
diff_eta = $(\eta^{sig} - \eta^{\rm part})$
EOverP = E/P (from CALO)
sum_PID_k_mu = $\sum\limits_{i\in part}(PIDNN(K)+PIDNN(\mu))$
ptB = $p_T^{sig}$
sum_PID_e_mu = $\sum\limits_{i\in part}(PIDNN(e)+PIDNN(\mu))$
sum_PID_k_e = $\sum\limits_{i\in part}(PIDNN(K)+PIDNN(e))$
proj = $(\vec{p}^{sig},\vec{p}^{part})$
PIDNNe = $PIDNN(e)$
PIDNNk = $PIDNN(K)$
PIDNNm = $PIDNN(\mu)$
phi = $\phi^{part}$
IP = number of IPs in the event
max_PID_k_mu = $max(PIDNN(K)+PIDNN(\mu))$
IPerr = error of IP
IPs = IP/IPerr
veloch = dE/dx track charge from the VELO system
max_PID_k_e = $max(PIDNN(K)+PIDNN(e))$
diff_phi = $(\phi^{sig} - \phi^{\rm part})$
ghostProb = ghost probability
IPPU = impact parameter with respect to any other reconstructed primary vertex.
eta = pseudorapidity of the track particle
partlcs = chi2PerDoF for a track
Vertex Selections
All selections are removed except the DaVinci probability cuts
Vertex Features:
mult = multiplicity in the event
nnkrec = number of reconstructed vertices
ptB = signal B transverse momentum
vflag = number of tracks in the vertex
ipsmean = mean of tracks IPs
ptmean = mean pt of the tracks
vcharge = charge of the vertex weighted by pt
svm = mass of the vertex
svp = momentum of the vertex
BDphiDir = angle between B and vertex
svtau = lifetime of the vertex
docamax = mean DOCA of the tracks
Classifier
Try to infer the B sign from the track/vertex sign (i.e. whether they have the same or opposite signs).
target = signB * signTrack/signVertex > 0
classifier returns
$$P(\text{track/vertex same sign as B| B sign}) = $$
$$ =P(\text{B same sign as track/vertex| track/vertex sign})$$
2-folding training on the full training sample, so that the full sample can be used for further analysis (the folding scheme provides a non-overfitted model; details: http://yandex.github.io/rep/metaml.html#module-rep.metaml.folding)
Calibration of $P(\text{track/vertex same sign as B| B sign})$
use 2-folding logistic/isotonic calibration for track/vertex classifier's prediction
compare with isotonic/logistic calibration
compare with no calibration (bad: predictions are shifted)
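A toy sketch of isotonic calibration via the pool-adjacent-violators algorithm (illustrative only; the analysis uses rep's folded isotonic/logistic calibrators, and the function name here is made up):

```python
import numpy as np

def isotonic_calibrate(score, tag):
    """Monotone (non-decreasing) fit of 0/1 tags vs. classifier score,
    via the pool-adjacent-violators algorithm."""
    order = np.argsort(score)
    vals, wts, cnt = [], [], []
    for yi in tag[order].astype(float):
        vals.append(yi); wts.append(1.0); cnt.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2] = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            wts[-2] = w
            cnt[-2] += cnt[-1]
            vals.pop(); wts.pop(); cnt.pop()
    calibrated = np.empty(len(score))
    calibrated[order] = np.repeat(vals, cnt)
    return calibrated

scores = np.random.RandomState(1).uniform(size=500)             # raw predictions
tags = (np.random.RandomState(2).uniform(size=500) < scores).astype(int)
p_cal = isotonic_calibrate(scores, tags)
```

The fitted values are block means of the tags, so the calibrated output is piecewise-constant, monotone in the score, and preserves the overall tag rate.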
Computation of $p(B^+)$ using $P(\text{track/vertex same sign as B| B sign})$
Compute $p(B^+)$ using this probabilistic model representation (similar to the previous tagging combination):
$$ \frac{P(B^+)}{P(B^-)} = \prod_{track, vertex} \frac{P(\text{track/vertex}|B^+)} {P(\text{track/vertex} |B^-)} = \alpha
\qquad $$
$$\Rightarrow\qquad P(B^+) = \frac {\alpha}{1+\alpha}, \qquad \qquad [1] $$
where
$$
\frac{P(B^+)}{P(B^-)} = \prod_{track, vertex}
\begin{cases}
\frac{P(\text{track/vertex same sign as } B| B)}{P(\text{track/vertex opposite sign as } B| B)}, & \text{if track/vertex}^+ \\
\frac{P(\text{track/vertex opposite sign as } B| B)}{P(\text{track/vertex same sign as } B| B)}, & \text{if track/vertex}^-
\end{cases}
$$
$$p_{mistag} = \min(p(B^+), p(B^-))$$
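Formula [1] can be sketched directly (function and variable names here are illustrative):

```python
import numpy as np

def combine_tracks(p_same, track_sign):
    """P(B+) from per-track P(same sign as B | B), via the
    likelihood-ratio product alpha of formula [1]."""
    # each track contributes p/(1-p) if positive, (1-p)/p if negative;
    # p_same is assumed strictly inside (0, 1)
    ratio = np.where(track_sign > 0,
                     p_same / (1.0 - p_same),
                     (1.0 - p_same) / p_same)
    alpha = np.prod(ratio)
    p_plus = alpha / (1.0 + alpha)
    return p_plus, min(p_plus, 1.0 - p_plus)   # P(B+), p_mistag

p_plus, p_mistag = combine_tracks(np.array([0.6, 0.55, 0.7]),
                                  np.array([+1, -1, +1]))
```

A single positive track with $P(\text{same sign}) = 0.6$ gives $\alpha = 0.6/0.4 = 1.5$ and hence $P(B^+) = 1.5/2.5 = 0.6$, as expected.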
Intermediate estimation $ < D^2 > $ for tracking
Do calibration of $p(B^+)$ and compute $ < D^2 > $ :
use isotonic calibration (a generalization of binned fitting): a piecewise-constant monotonic function
randomly divide events into two parts (1-train, 2-calibrate)
symmetric isotonic fitting on train and $ < D^2 > $ computation on test
take mean and std for computed $ < D^2 > $
$ < D^2 > $ formula for sample:
$$ < D^2 > = \frac{\sum_i[2(p^{mistag}_i - 0.5)]^2 * sw_i}{\sum_i sw_i} = $$
$$ = \frac{\sum_i[2(p_i(B^+) - 0.5)]^2 * sw_i}{\sum_i sw_i}$$
The formula is symmetric, so there is no need to compute the mistag probability explicitly
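A direct translation of the $< D^2 >$ formula (names are illustrative, toy inputs):

```python
import numpy as np

def mean_d2(p_b_plus, sweights):
    """Weighted <D^2> = sum_i [2(p_i(B+) - 0.5)]^2 sw_i / sum_i sw_i.
    Symmetric under p(B+) <-> p(B-), so no explicit mistag is needed."""
    d2 = (2.0 * (p_b_plus - 0.5)) ** 2
    return np.sum(d2 * sweights) / np.sum(sweights)

p = np.array([0.5, 0.75, 1.0])   # dilutions 0, 0.5, 1 -> D^2 = 0, 0.25, 1
sw = np.ones(3)
d2 = mean_d2(p, sw)
```

The symmetry claim is easy to check: replacing $p(B^+)$ by $1 - p(B^+)$ leaves the result unchanged.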
Preliminary estimation
$\epsilon$ calculation
$$\epsilon = < D^2 > * \epsilon_{tag}$$
$$\Delta \epsilon = \sqrt{ \left(\frac{\Delta < D^2 > }{ < D^2 > }\right)^2 + \left(\frac{\Delta \epsilon_{tag} }{\epsilon_{tag}} \right)^2 }$$
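A small numeric sketch of the efficiency and its error propagation, treating the quoted quadrature sum as the relative error on $\epsilon$ (the input numbers below are made up):

```python
import numpy as np

def eff_and_error(d2, d2_err, eps_tag, eps_tag_err):
    """Effective tagging efficiency eps = <D^2> * eps_tag, with the
    quadrature sum of relative errors propagated onto eps."""
    eps = d2 * eps_tag
    rel = np.hypot(d2_err / d2, eps_tag_err / eps_tag)
    return eps, eps * rel

# made-up numbers, for illustration only
eps, deps = eff_and_error(d2=0.09, d2_err=0.003,
                          eps_tag=0.60, eps_tag_err=0.006)
```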
Combine track-based and vertex-based tagging using formula [1]
symmetric isotonic calibration on random subsample with $D^2$ calculation
take mean and std for computed $ < D^2 > $
Full estimation of systematic error
set random state
train the best model (track and vertex taggers with 2-folding with fixed random state)
do calibration for track and vertex taggers with 2-folding with fixed random state
compute $p(B^+)$
do calibration with isotonic 2-folding (random state is fixed)
compute $ < D^2 > $
This procedure is repeated (from scratch) for 30 different random states, and then we compute the mean and std of these 30 values of $ < D^2 > $.
Check calibration of mistag
axis x: predicted mistag probability
$$p_{mistag} = \min(p(B^+), p(B^-))$$
axis y: true mistag probability (computed for bin)
$$p_{mistag} = \frac{N_{wrong}} {N_{wrong} + N_{right}}$$
$$\Delta p_{mistag} = \frac{\sqrt{N_{wrong} N_{right}}} {(N_{wrong} + N_{right})^{1.5}}$$
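The calibration check can be sketched as follows, using toy tags that are perfectly calibrated by construction (function and variable names are illustrative):

```python
import numpy as np

def binned_true_mistag(pred_mistag, is_wrong, n_bins=10):
    """Per-bin true mistag N_w/(N_w+N_r) and its binomial error
    sqrt(N_w*N_r)/(N_w+N_r)^1.5, vs. the mean predicted mistag."""
    edges = np.percentile(pred_mistag, np.linspace(0, 100, n_bins + 1))
    idx = np.clip(np.digitize(pred_mistag, edges[1:-1]), 0, n_bins - 1)
    x, y, yerr = [], [], []
    for b in range(n_bins):
        sel = idx == b
        nw = np.sum(is_wrong[sel])
        nr = np.sum(~is_wrong[sel])
        n = nw + nr
        x.append(pred_mistag[sel].mean())
        y.append(nw / n)
        yerr.append(np.sqrt(nw * nr) / n ** 1.5)
    return np.array(x), np.array(y), np.array(yerr)

rng = np.random.RandomState(3)
pm = rng.uniform(0.05, 0.5, size=5000)      # predicted mistag
wrong = rng.uniform(size=5000) < pm         # perfectly calibrated toy tags
x, y, yerr = binned_true_mistag(pm, wrong)
```

For a well-calibrated tagger the per-bin points $(x, y)$ should lie on the diagonal within their errors.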
Stability of calibration
Add random noise after isotonic calibration of $p(B^+)$ for stability:
$$ 0.001 * normal(0, 1)$$
Inclusive tagging (NEW)
Check "OS" and "SS" regions separately (to check that tagging includes "SS" and "OS")
Check dependence on lifetime, lifetime error, number of tracks, momentum, transverse momentum, and mass
Asymmetry of charges in events: understanding the high tagging quality and what information we use
Tracking "OS" tagging
https://github.com/tata-antares/tagging_LHCb/blob/master/track-based-tagging-OS.ipynb
Take all possible tracks for all B-events.
Apply:
(IPs > 3) & ((abs(diff_eta) > 0.6) | (abs(diff_phi) > 0.825)) - geometrical cuts
(PIDNNp < 0.5) & (PIDNNpi < 0.5) & (ghostProb < 0.4)
((PIDNNk > trk) | (PIDNNm > trm) | (PIDNNe > tre)), trk=0., trm=0., tre=0.
B mass before sWeight cut
B mass after sWeight cut
Number of tracks in event
PIDNN distributions after selection
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/PID_selected_OS.png' style='height: 700px;'/>
Preliminary estimation (track OS + vertex OS)
https://github.com/tata-antares/tagging_LHCb/blob/master/combined-tagging-OS.ipynb
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/eff_tracking_SS.csv').drop(['$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: Check calibration of mistag
before calibration
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/Bprob_calibration_check_percentile_OS.png' style='height: 600px;'/>
Symmetric isotonic calibration + random noise * 0.001 (noise for stability of bins)
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/Bprob_calibration_check_iso_percentile_OS.png' style='height: 600px;'/>
Tracking "SS" tagging
https://github.com/tata-antares/tagging_LHCb/blob/master/track-based-tagging-SS.ipynb
Take all possible tracks for all B-events.
Apply:
(IPs < 3) & (abs(diff_eta) < 0.6) & (abs(diff_phi) < 0.825) & (ghostProb < 0.4)
((PIDNNk > {trk}) | (PIDNNm > {trm}) | (PIDNNe > {tre}) | (PIDNNpi > {trpi}) | (PIDNNp > {trp})), trk=0, trm=0, tre=0, trpi=0, trp=0
B mass before sWeight cut
B mass after sWeight cut
PIDNN distributions after selection
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/PID_selected_SS.png' style='height: 700px;'/>
Preliminary estimation (track "SS" only)
https://github.com/tata-antares/tagging_LHCb/blob/master/combined-tagging-SS.ipynb
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/new-tagging.csv').drop(['$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: Check calibration of mistag
before calibration
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/Bprob_calibration_check_percentile_SS.png' style='height: 600px;'/>
Symmetric isotonic calibration + random noise * 0.001 (noise for stability of bins)
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/Bprob_calibration_check_iso_percentile_SS.png' style='height: 600px;'/>
Tracking inclusive tagging
https://github.com/tata-antares/tagging_LHCb/blob/master/track-based-tagging-PID-less.ipynb
Take all possible tracks for all B-events.
Apply:
(ghostProb < 0.4)
((PIDNNk > {trk}) | (PIDNNm > {trm}) | (PIDNNe > {tre}) | (PIDNNpi > {trpi}) | (PIDNNp > {trp})), trk=0, trm=0, tre=0, trpi=0, trp=0
B mass before sWeight cut
B mass after sWeight cut
Number of tracks in event
PIDNN distributions after selection
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/PID_selected_less_PID.png' style='height: 700px;'/>
Dependence on PIDNN cuts
(PIDNNp < 0.6) & (PIDNNpi < 0.6) & (ghostProb < 0.4)
( (PIDNNk > 0.7) | (PIDNNm > 0.4) | (PIDNNe > 0.6) )
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/new-tagging_relax1.csv')
"""
Explanation: (PIDNNp < 0.6) & (PIDNNpi < 0.6) & (ghostProb < 0.4)
( (PIDNNk > 0.1) | (PIDNNm > 0.1) | (PIDNNe > 0.1) )
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/new-tagging_relax2.csv')
"""
Explanation: (PIDNNpi < 0.6) & (ghostProb < 0.4)
( (PIDNNk > 0.) | (PIDNNm > 0.) | (PIDNNe > 0.) )
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/new-tagging-PID-less.csv').drop(['$\Delta$ AUC, with untag'], axis=1)
"""
Explanation: Preliminary estimation (track: OS+SS, OS vertex)
https://github.com/tata-antares/tagging_LHCb/blob/master/combined-tagging-PID-less.ipynb
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/new-tagging_full_tracks.csv')
"""
Explanation: Preliminary estimation (track: OS+SS, no vertex)
https://github.com/tata-antares/tagging_LHCb/blob/master/track-based-tagging-PID-less.ipynb
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.read_csv('img/track_signs_assymetry_means.csv', index_col='name')
"""
Explanation: Checks on track: OS+SS, OS vertex model
Check calibration of mistag
for signal (B-like events)
for background
before calibration
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/Bprob_calibration_check_percentile_PID_less.png' style='height: 600px;'/>
Symmetric isotonic calibration + random noise * 0.001 (noise for stability of bins)
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/Bprob_calibration_check_iso_percentile_PID_less.png' style='height: 600px;'/>
Tagging power dependency on ...
For B mass, B momentum, B transverse momentum, B lifetime use sidebands as bck and peak region as signal:
mask_signal = ((Bmass > 5.27) & (Bmass < 5.3))
mask_bck = ((Bmass < 5.25) | (Bmass > 5.32))
For B lifetime error and number of tracks use sWeights
Procedure:
* divide variable into 5 percentile bins
* for each bin plot mistag vs true mistag
Signal dependence
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_sig_B_mass.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_sig_B_P.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_sig_B_Pt.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_sig_life_time.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_sig_life_time_error.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_sig_N_tracks.png' style='height: 500px;'/>
Bck dependence
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_bck_B_mass.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_bck_B_P.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_bck_B_Pt.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_bck_life_time.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_bck_life_time_error.png' style='height: 500px;'/>
<img src='https://raw.githubusercontent.com/tata-antares/tagging_LHCb/master/img/dependence_bck_N_tracks.png' style='height: 500px;'/>
Why is the effective efficiency so high for this model (combining track probabilities to obtain the B probability)?
Let's look at the following characteristic of the event:
$$ -\sum_{track} charge_{track}$$
It seems that for a $B^+$ event it should be around $+1$ plus a constant (because we exclude the signal part)
Regions:
'OS' region: (IP > 3) & ((abs(diff_eta) > 0.6) | (abs(diff_phi) > 0.825))
'SS' region: (IP < 3) & (abs(diff_eta) < 0.6) & (abs(diff_phi) < 0.825)
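The region definitions and the charge-sum characteristic above can be sketched on toy per-track arrays (all values below are made up):

```python
import numpy as np

def charge_asymmetry(charges, ips, deta, dphi):
    """-sum of track charges for one event, total and split into the
    'OS' and 'SS' geometric regions defined above."""
    os_mask = (ips > 3) & ((np.abs(deta) > 0.6) | (np.abs(dphi) > 0.825))
    ss_mask = (ips < 3) & (np.abs(deta) < 0.6) & (np.abs(dphi) < 0.825)
    return (-charges.sum(),
            -charges[os_mask].sum(),
            -charges[ss_mask].sum())

# toy event: per-track charge, IP significance, delta-eta, delta-phi
q    = np.array([+1, -1, -1, +1, -1])
ips  = np.array([5.0, 1.0, 4.0, 0.5, 6.0])
deta = np.array([0.8, 0.1, 0.2, 0.3, 1.0])
dphi = np.array([0.1, 0.2, 0.9, 0.4, 0.3])
tot, os_sum, ss_sum = charge_asymmetry(q, ips, deta, dphi)
```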
full data
"OS" data
"SS" data
Full sample
Add signal track
"OS" data
"SS" data
Full sample
Means of distributions (with signal track and without it)
End of explanation
"""
pandas.set_option('display.precision', 5)
pandas.concat([pandas.read_csv('img/track_signs_assymetry_means.csv', index_col='name'),
pandas.read_csv('img/track_signs_assymetry_means_mc.csv', index_col='name')])
"""
Explanation: This ROC AUC score is similar to the current tagging implementation
Charges asymmetry checks on MC sample
"OS" sample
"SS" sample
Full sample
Means of distributions for MC and data (with signal track and without it)
End of explanation
"""
|
catherinezucker/dustcurve | Old_Runs/tutorial_6slices.ipynb | gpl-3.0 | import emcee
from dustcurve import model
import seaborn as sns
import numpy as np
from dustcurve import pixclass
import matplotlib.pyplot as plt
import pandas as pd
import warnings
from dustcurve import io
from dustcurve import hputils
from dustcurve import kdist
from dustcurve import globalvars as gv
%matplotlib inline
print(gv.unique_co.shape)
#bounds for prior function, in form [lower_dist, upper_dist, sigma]
bounds=[4,19,0.1]
#ratio for gas-to-dust coefficient
ratio=0.06
#suppress obnoxious deprecation warning that doesn't affect output
warnings.filterwarnings("ignore", category=Warning, module="emcee")
#our pixels of choice
indices=hputils.list_indices(128,(109.0,110.5,13.0,14.5)).astype(str)
fnames=[str(i)+'.h5' for i in indices]
#fetch the required likelihood and prior arguments for PTSampler
#io.fetch_args(fnames)
# the model has 12 parameters; we'll use 50 walkers, 10000 steps each, at 5 different temps
ndim=12
nslices=6
nwalkers = 50
nsteps = 10000
ntemps=5
#setting off the walkers at the kinematic distance given by the literature, assuming a flat rotation curve, theta=220 km/s, R=8.5 kpc
#Details on rotation curve given in Rosolowsky and Leroy 2006
vslices=np.linspace(-11.7,-5.2,nslices)
klong=np.ones(nslices)*109.75
klat=np.ones(nslices)*13.75
kdist=kdist.kdist(klong,klat,vslices)
kdistmod=5*np.log10(kdist)-5
#slightly perturb the starting positions for each walker, in a ball centered around result
#perturb all walkers in a Gaussian ball with mean 0 and variance 1
result_dist=kdistmod.tolist()
result_coeff= [1.0 for i in range (nslices)]
starting_positions_dist=np.array([[result_dist + np.random.randn(nslices) for i in range(nwalkers)] for j in range(ntemps)]).clip(4,19)
starting_positions_coeff=np.array([[result_coeff + 0.5*np.random.randn(nslices) for i in range(nwalkers)] for j in range(ntemps)]).clip(0)
starting_positions=np.concatenate((starting_positions_dist,starting_positions_coeff), axis=2)
#set up the sampler object
sampler = emcee.PTSampler(ntemps, nwalkers, ndim, model.log_likelihood, model.log_prior, loglargs=[ratio], logpargs=[bounds],threads=20)
print("Setup complete")
# run the sampler and time how long it takes
%time sampler.run_mcmc(starting_positions, nsteps)
print('Sampler Done')
"""
Explanation: Logistics
We are going to use parallel-tempering, implemented via the python emcee package, to explore our posterior, which consists of the set of distances and gas to dust conversion coefficients to the six velocity slices towards the center of the Cepheus molecular cloud. Since we need to explore a 12 dimensional parameter space, we are going to use 50 walkers, 10000 steps each, at 5 different temperatures. If you would like to edit this parameters, simply edit "nwalkers", "ntemps", and "nsteps" in the cell below. However, we are only going to keep the lowest temperature chain ($\beta=1$) for analysis. Since the sampler.chain object from PTSampler returns an array of shape (Ntemps, Nwalkers, Nsteps, Ndim), returning the samples for all walkers, steps, and dimensions at $\beta=1$ would correspond to sampler.chain[0,:,:,:]. To decrease your value of $\beta$ simply increase the index for the first dimension. For more information on how PTSampler works, see http://dan.iel.fm/emcee/current/user/pt/. We will set off our walkers in a Gaussian ball around a) the kinematic distance estimates for the Cepheus molecular cloud given by a flat rotation curve from Leroy & Rosolowsky 2006 and b) the gas-to-dust coefficient given by the literature. We perturb the walkers in a Gaussian ball with mean 0 and variance 1. You can edit the starting positions of the walkers by editing the "result" variable below. We are going to discard the first half of every walker's chain as burn-in.
Setting up the positional arguments for PTSampler
We need to feed PTSampler the required positional arguments for the log_likelihood and log_prior function. We do this using the fetch_args function from the io module, which creates an instance of the pixclass object that holds our data and metadata. Fetch_args accepts three arguments:
A string specifiying the h5 filenames containing your data, in our case 10 healpix nside 128 pixels centered around (l,b)=(109.75, 13.75), which covers a total area of 2 sq. deg.
The prior bounds you want to impose on distances (flat prior) and the standard deviation you'd like for the log-normal prior on the conversion coefficients. For distances, this must be between 4 and 19, because that's the distance modulus range of our stellar posterior array. The prior bounds must be in the format [lowerbound_distance, upperbound_distance, sigma]
The gas-to-dust coefficient you'd like to use, given as a float; for this tutorial, we are pulling a value from the literature of 0.06 magnitudes/K. This value is then multiplied by the set of c coefficients we're determining as part of the parameter estimation problem.
Fetch_args will then return the correct arguments for the log_likelihood and log_prior functions within the model module.
Here we go!
End of explanation
"""
#Extract the coldest [beta=1] temperature chain from the sampler object; discard first half of samples as burnin
samples_cold = sampler.chain[0,:,int(.5*nsteps):,:]
traces_cold = samples_cold.reshape(-1, ndim).T
#check out acceptance fraction:
print("Our mean acceptance fraction for the coldest chain is %.2f" % np.mean(sampler.acceptance_fraction[0]))
#find best fit values for each of the 12 parameters (6 d's and 6 c's)
theta=pd.DataFrame(traces_cold)
quantile_50=theta.quantile(.50, axis=1).values
quantile_84=theta.quantile(.84, axis=1).values
quantile_16=theta.quantile(.16, axis=1).values
upperlim=quantile_84-quantile_50
lowerlim=quantile_50-quantile_16
#print out distances
for i in range(0,int(len(quantile_50)/2)):
print('d%i: %.3f + %.3f - %.3f' % (i+1,quantile_50[i],upperlim[i], lowerlim[i]))
#print out coefficients
for i in range(int(len(quantile_50)/2), int(len(quantile_50))):
print('c%i: %.3f + %.3f - %.3f' % (i+1-int(len(quantile_50)/2),quantile_50[i],upperlim[i], lowerlim[i]))
"""
Explanation: The sampler is done running, so now let's check out the results. We are going to print out our mean acceptance fraction across all walkers for the coldest temperature chain.
We are also going to discard the first half of each walker's chain as burn-in; to change the number of steps to burn off, simply edit the 3rd dimension of sampler.chain[0,:,n:,:] and input your desired value of n. Next, we are going to compute and print out the 50th, 16th, and 84th percentile of the chains for each distance parameter, using the "quantile" attribute of a pandas dataframe object. The 50th percentile measurement represents our best guess for each distance parameter, while the difference between the 16th and 50th gives us a lower limit and the difference between the 50th and 84th percentile gives us an upper limit:
End of explanation
"""
#set up subplots for chain plotting
axes=['ax'+str(i) for i in range(ndim)]
fig, (axes) = plt.subplots(ndim, figsize=(10,60))
plt.tight_layout()
for i in range(0,ndim):
if i<int(ndim/2):
axes[i].set(ylabel='d%i' % (i+1))
else:
axes[i].set(ylabel='c%i' % (i-5))
#plot traces for each parameter
for i in range(0,ndim):
sns.tsplot(traces_cold[i],ax=axes[i])
"""
Explanation: Let's see what our chains look like by producing trace plots:
End of explanation
"""
#set up subplots for histogram plotting
axes=['ax'+str(i) for i in range(ndim)]
fig, (axes) = plt.subplots(ndim, figsize=(10,60))
plt.tight_layout()
for i in range(0,ndim):
if i<int(ndim/2):
axes[i].set(ylabel='d%i' % (i+1))
else:
axes[i].set(ylabel='c%i' % (i-5))
#plot histograms for each parameter
for i in range(0,ndim):
sns.distplot(traces_cold[i],ax=axes[i],hist=True,norm_hist=False)
"""
Explanation: Now we are going to use the seaborn distplot function to plot histograms of the last half of the traces for each parameter.
End of explanation
"""
from dustcurve import pixclass
post_all=np.empty((0,700,120))
for i in range(0,len(gv.unique_post)):
post_all=np.vstack((post_all,gv.unique_post[i]))
#from dustcurve import plot_posterior
#plot the reddening profile over the stacked, normalized stellar posterior surfaces
#normcol=True, normsurf=True
#plot_posterior.plot_posterior(np.asarray(post_all),np.linspace(4,19,120),np.linspace(0,7,700),quantile_50,ratio,gv.unique_co,y_range=[0,2],vmax=0.03,normcol=True)
#plot_posterior.plot_posterior(np.asarray(post_all),np.linspace(4,19,120),np.linspace(0,7,700),quantile_50,ratio,gv.unique_co,y_range=[0,2],vmax=10.0,normcol=False)
"""
Explanation: Now let's overplot the reddening profiles corresponding to our most probable parameters on top of the stacked stellar posterior surfaces. We show two options. The first plot has been normalized so that a) each individual stellar posterior array sums to one and b) each distance column in the stacked posterior array contains the same amount of "ink." The second plot is just normalized so that each individual stellar posterior array sums to 1. Each colored reddening profile corresponds to a single pixel in the Dame et al. 2001 CO spectral cube. There are stars in ~175 Dame et al. 2001 CO pixels in our region of interest.
End of explanation
"""
#from dustcurve import plot_posterior
#from dustcurve import globalvars as gv
#import numpy as np
#plot_posterior.plot_samples(np.asarray(post_all),np.linspace(4,19,120),np.linspace(0,7,700),quantile_50,traces_cold,ratio,gv.unique_co,y_range=[0,2],vmax=0.03,normcol=True)
#plot_posterior.plot_samples(np.asarray(post_all),np.linspace(4,19,120),np.linspace(0,7,700),quantile_50,traces_cold,ratio,gv.unique_co,y_range=[0,2],vmax=20,normcol=False)
"""
Explanation: Now we want to see how similar the parameters at different steps are. To do this, we draw one thousand random samples from the last half of the chain and plot the reddening profile corresponding to those parameters in light blue. Then, we plot the "best fit" reddening profile corresponding to the 50th quantile parameters (essentially the median of the last half of the chains). We are normalizing the surfaces in two different ways, as we did above.
End of explanation
"""
import h5py
#Save the results of the sampler:
#our pixels of choice
output='2degrees_jul1_0.1sig_cropped.h5'
fwrite = h5py.File("/n/fink1/czucker/Output/"+str(output), "w")
chaindata = fwrite.create_dataset("/chains", sampler.chain.shape, dtype='f')
chaindata[:,:,:,:]=sampler.chain
probdata = fwrite.create_dataset("/probs", sampler.lnprobability.shape, dtype='f')
probdata[:,:,:]=sampler.lnprobability
fwrite.close()
type(gv.unique_post[1])
"""
Explanation: Now let's save the results to a file!
End of explanation
"""
|
jaekookang/useful_bits | Speech/Extract_Pitch_using_Praat/Extract_Pitch.ipynb | mit | import os
import numpy as np
from subprocess import Popen, PIPE
from sys import platform
import pdb
"""
Explanation: Extract fundamental frequency (F0 or pitch) using Python
<br>
- tested: Python3.6 on Linux and Mac
- 2017-09-24 jk
Logic:
1) Generate Praat script temporarily within Python script
2) Run the Praat script through Python
Settings
End of explanation
"""
tmp_script = 'tmp.praat'
def gen_script():
# This generates temporary praat script file
global tmp_script
with open(tmp_script, 'w') as f:
f.write('''
form extract_pitch
text FILENAME
positive TIMEAT 0.0
positive TIMESTEP 0.0
real FLOOR 75.0
real CEILING 600.0
endform
Read from file... 'FILENAME$'
To Pitch... 'TIMESTEP' 'FLOOR' 'CEILING'
Get value at time... 'TIMEAT' Hertz Linear
exit
''')
return tmp_script
def run_praat_cmd(*args):
# Check operating system
if platform == 'darwin': # macOS
o = Popen(['praat'] + [str(i) for i in args],
shell=False, stdout=PIPE, stderr=PIPE)
else: # Linux
o = Popen(['praat', '--run'] + [str(i) for i in args],
shell=False, stdout=PIPE, stderr=PIPE)
stdout, stderr = o.communicate()
if os.path.exists(tmp_script):
os.remove(tmp_script)
if o.returncode:
raise Exception(stderr.decode('utf-8'))
else:
return stdout
def get_pitch(FNAME, TIMEAT, TIMESTEP=0.0, FLOOR=75.0, CEILING=600.0):
    out = run_praat_cmd(gen_script(), FNAME, TIMEAT, TIMESTEP, FLOOR, CEILING)
    outstr = str(out, 'utf-8').split()
    if len(outstr) < 2:
        print('--undefined--')
        val = 0.0  # pad undefined pitch as 0
    else:
        val = float('{:.3f}'.format(float(outstr[0])))
    return val
"""
Explanation: Main functions
End of explanation
"""
time = 0.5 # sec
get_pitch('da_ta.wav', time) # output: F0
"""
Explanation: Run
End of explanation
"""
|
peterbraden/tensorflow | tensorflow/examples/tutorials/deepdream/deepdream.ipynb | apache-2.0 | # boilerplate code
import os
from cStringIO import StringIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
"""
Explanation: DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see GoogLeNet and VGG16 galleries)
embed TensorBoard graph visualizations into Jupyter notebooks
produce high-resolution images with tiled computation (example)
use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the GoogLeNet architecture, trained to classify images into one of 1000 categories of the ImageNet dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that make these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for the GoogLeNet and VGG16 architectures.
End of explanation
"""
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
graph_def = tf.GraphDef.FromString(open(model_fn, 'rb').read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
"""
Explanation: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network:
End of explanation
"""
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print 'Number of layers', len(layers)
print 'Total number of feature channels:', sum(feature_nums)
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
"""
Explanation: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
End of explanation
"""
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in xrange(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print score,
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
"""
Explanation: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
End of explanation
"""
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = map(tf.placeholder, argtypes)
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in xrange(0, max(h-sz//2, sz),sz):
for x in xrange(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end='')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
"""
Explanation: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on a smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
End of explanation
"""
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end='')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
"""
Explanation: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior to the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost the lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
End of explanation
"""
render_lapnorm(T(layer)[:,:,:,65])
"""
Explanation: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. When running on a GPU, this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
End of explanation
"""
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
"""
Explanation: Lower layers produce features of lower complexity.
End of explanation
"""
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
"""
Explanation: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
End of explanation
"""
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.', end='')
clear_output()
showarray(img/255.0)
"""
Explanation: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
End of explanation
"""
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
"""
Explanation: Let's load an image and populate it with DogSlugs (in case you've missed them).
End of explanation
"""
render_deepdream(T(layer)[:,:,:,139], img0)
"""
Explanation: Note that results can differ from the Caffe implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
End of explanation
"""
|
Lstyle1/Deep_learning_projects | autoencoder/Convolutional_Autoencoder.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
"""
learning_rate = 0.001
img_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
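A quick way to double-check these sizes is to trace the stride-2, 'same'-padded max-pools numerically — each pool takes the ceiling of size/2 (the helper below is a sketch for this check, not part of the notebook's code):

```python
import math

def pool_same(size, stride=2):
    # Output size of a 'same'-padded pool, as tf.layers.max_pooling2d computes it
    return math.ceil(size / stride)

sizes = [28]
for _ in range(3):
    sizes.append(pool_same(sizes[-1]))
# sizes == [28, 14, 7, 4]; the final 4x4x8 encoding holds 128 values vs 784 inputs
```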
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
"""
sess = tf.Session()
epochs = 10
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
"""
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
"""
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation
"""
|
qutip/qutip-notebooks | examples/heom/heom-3-quantum-heat-transport.ipynb | lgpl-3.0 | import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import qutip as qt
from qutip.nonmarkov.heom import HEOMSolver, DrudeLorentzPadeBath, BathExponent
from ipywidgets import IntProgress
from IPython.display import display
# Qubit parameters
epsilon = 1
# System operators
H1 = epsilon / 2 * qt.tensor(qt.sigmaz() + qt.identity(2), qt.identity(2))
H2 = epsilon / 2 * qt.tensor(qt.identity(2), qt.sigmaz() + qt.identity(2))
H12 = lambda J12 : J12 * (qt.tensor(qt.sigmap(), qt.sigmam()) + qt.tensor(qt.sigmam(), qt.sigmap()))
Hsys = lambda J12 : H1 + H2 + H12(J12)
# Cutoff frequencies
gamma1 = 2
gamma2 = 2
# Temperatures
Tbar = 2
Delta_T = 0.01 * Tbar
T1 = Tbar + Delta_T
T2 = Tbar - Delta_T
# Coupling operators
Q1 = qt.tensor(qt.sigmax(), qt.identity(2))
Q2 = qt.tensor(qt.identity(2), qt.sigmax())
"""
Explanation: Example 3: Quantum Heat Transport
Setup
In this notebook, we apply the QuTiP HEOM solver to a quantum system coupled to two bosonic baths and demonstrate how to extract information about the system-bath heat currents from the auxiliary density operators (ADOs).
We consider the setup described in Ref. [1], which consists of two coupled qubits, each connected to its own heat bath.
The Hamiltonian of the qubits is given by
$$ \begin{aligned} H_{\text{S}} &= H_1 + H_2 + H_{12} , \quad\text{where}\\
H_K &= \frac{\epsilon}{2} \bigl(\sigma_z^K + 1\bigr) \quad (K=1,2) \quad\text{and}\quad H_{12} = J_{12} \bigl( \sigma_+^1 \sigma_-^2 + \sigma_-^1 \sigma_+^2 \bigr) . \end{aligned} $$
Here, $\sigma^K_{x,y,z,\pm}$ denotes the usual Pauli matrices for the K-th qubit, $\epsilon$ is the eigenfrequency of the qubits and $J_{12}$ the coupling constant.
Each qubit is coupled to its own bath; therefore, the total Hamiltonian is
$$ H_{\text{tot}} = H_{\text{S}} + \sum_{K=1,2} \bigl( H_{\text{B}}^K + Q_K \otimes X_{\text{B}}^K \bigr) , $$
where $H_{\text{B}}^K$ is the free Hamiltonian of the K-th bath and $X_{\text{B}}^K$ its coupling operator, and $Q_K = \sigma_x^K$ are the system coupling operators.
We assume that the bath spectral densities are given by Drude distributions
$$ J_K(\omega) = \frac{2 \lambda_K \gamma_K \omega}{\omega^2 + \gamma_K^2} , $$
where $\lambda_K$ is the free coupling strength and $\gamma_K$ the cutoff frequency.
We begin by defining the system and bath parameters.
We use the parameter values from Fig. 3(a) of Ref. [1].
Note that we set $\hbar$ and $k_B$ to one and we will measure all frequencies and energies in units of $\epsilon$.
[1] Kato and Tanimura, J. Chem. Phys. 143, 064107 (2015).
End of explanation
"""
def bath_heat_current(bath_tag, ado_state, hamiltonian, coupling_op, delta=0):
"""
Bath heat current from the system into the heat bath with the given tag.
Parameters
----------
bath_tag : str, tuple or any other object
Tag of the heat bath corresponding to the current of interest.
ado_state : HierarchyADOsState
Current state of the system and the environment (encoded in the ADOs).
hamiltonian : Qobj
System Hamiltonian at the current time.
coupling_op : Qobj
System coupling operator at the current time.
delta : float
The prefactor of the \delta(t) term in the correlation function (the Ishizaki-Tanimura terminator).
"""
l1_labels = ado_state.filter(level=1, tags=[bath_tag])
a_op = 1j * (hamiltonian * coupling_op - coupling_op * hamiltonian)
result = 0
cI0 = 0 # imaginary part of bath auto-correlation function (t=0)
for label in l1_labels:
[exp] = ado_state.exps(label)
result += exp.vk * (coupling_op * ado_state.extract(label)).tr()
if exp.type == BathExponent.types['I']:
cI0 += exp.ck
elif exp.type == BathExponent.types['RI']:
cI0 += exp.ck2
result -= 2 * cI0 * (coupling_op * coupling_op * ado_state.rho).tr()
if delta != 0:
result -= 1j * delta * ((a_op * coupling_op - coupling_op * a_op) * ado_state.rho).tr()
return result
def system_heat_current(bath_tag, ado_state, hamiltonian, coupling_op, delta=0):
"""
System heat current from the system into the heat bath with the given tag.
Parameters
----------
bath_tag : str, tuple or any other object
Tag of the heat bath corresponding to the current of interest.
ado_state : HierarchyADOsState
Current state of the system and the environment (encoded in the ADOs).
hamiltonian : Qobj
System Hamiltonian at the current time.
coupling_op : Qobj
System coupling operator at the current time.
delta : float
The prefactor of the \delta(t) term in the correlation function (the Ishizaki-Tanimura terminator).
"""
l1_labels = ado_state.filter(level=1, tags=[bath_tag])
a_op = 1j * (hamiltonian * coupling_op - coupling_op * hamiltonian)
result = 0
for label in l1_labels:
result += (a_op * ado_state.extract(label)).tr()
if delta != 0:
result -= 1j * delta * ((a_op * coupling_op - coupling_op * a_op) * ado_state.rho).tr()
return result
"""
Explanation: Heat currents
Following Ref. [2], we consider two possible definitions of the heat currents from the qubits into the baths.
The so-called bath heat currents are $j_{\text{B}}^K = \partial_t \langle H_{\text{B}}^K \rangle$ and the system heat currents are $j_{\text{S}}^K = \mathrm i\, \langle [H_{\text{S}}, Q_K] X_{\text{B}}^K \rangle$.
As shown in Ref. [2], they can be expressed in terms of the HEOM ADOs as follows:
$$ \begin{aligned}
j_{\text{B}}^K &= \sum_{\substack{\mathbf n \\ \text{Level 1} \\ \text{Bath } K}} \nu[\mathbf n] \operatorname{tr}\bigl[ Q_K \rho_{\mathbf n} \bigr] - 2 C_I^K(0) \operatorname{tr}\bigl[ Q_K^2 \rho \bigr] + \Gamma_{\text{T}}^K \operatorname{tr}\bigl[ [[H_{\text{S}}, Q_K], Q_K]\, \rho \bigr] , \\[.5em]
j_{\text{S}}^K &= \mathrm{i} \sum_{\substack{\mathbf n \\ \text{Level 1} \\ \text{Bath } K}} \operatorname{tr}\bigl[ [H_{\text{S}}, Q_K]\, \rho_{\mathbf n} \bigr] + \Gamma_{\text{T}}^K \operatorname{tr}\bigl[ [[H_{\text{S}}, Q_K], Q_K]\, \rho \bigr] .
\end{aligned} $$
The sums run over all level-$1$ multi-indices $\mathbf n$ with one excitation corresponding to the K-th bath, $\nu[\mathbf n]$ is the corresponding (negative) exponent of the bath auto-correlation function $C^K(t)$, and $\Gamma_{\text{T}}^K$ is the Ishizaki-Tanimura terminator (i.e., a correction term accounting for the error introduced by approximating the correlation function with a finite sum of exponential terms).
In the expression for the bath heat currents, we left out terms involving $[Q_1, Q_2]$, which is zero in this example.
[2] Kato and Tanimura, J. Chem. Phys. 145, 224105 (2016).
In QuTiP, these currents can be conveniently calculated as follows:
End of explanation
"""
Nk = 1
NC = 7
options = qt.Options(nsteps=1500, store_states=False, atol=1e-12, rtol=1e-12)
"""
Explanation: Note that at long times, we expect $j_{\text{B}}^1 = -j_{\text{B}}^2$ and $j_{\text{S}}^1 = -j_{\text{S}}^2$ due to energy conservation. At long times, we also expect $j_{\text{B}}^1 = j_{\text{S}}^1$ and $j_{\text{B}}^2 = j_{\text{S}}^2$ since the coupling operators commute, $[Q_1, Q_2] = 0$. Hence, all four currents should agree in the long-time limit (up to a sign). This long-time value is what was analyzed in Ref. [2].
Simulations
For our simulations, we will represent the bath spectral densities using the first term of their Padé decompositions, and we will use $7$ levels of the HEOM hierarchy.
End of explanation
"""
# fix qubit-qubit and qubit-bath coupling strengths
J12 = 0.1
lambda1 = J12 / 2
lambda2 = J12 / 2
# choose arbitrary initial state
rho0 = qt.tensor(qt.identity(2), qt.identity(2)) / 4
# simulation time span
tlist = np.linspace(0, 50, 250)
bath1 = DrudeLorentzPadeBath(Q1, lambda1, gamma1, T1, Nk, tag='bath 1')
bath2 = DrudeLorentzPadeBath(Q2, lambda2, gamma2, T2, Nk, tag='bath 2')
b1delta, b1term = bath1.terminator()
b2delta, b2term = bath2.terminator()
solver = HEOMSolver(qt.liouvillian(Hsys(J12)) + b1term + b2term,
[bath1, bath2], max_depth=NC, options=options)
result = solver.run(rho0, tlist, e_ops=[qt.tensor(qt.sigmaz(), qt.identity(2)),
lambda t, ado: bath_heat_current('bath 1', ado, Hsys(J12), Q1, b1delta),
lambda t, ado: bath_heat_current('bath 2', ado, Hsys(J12), Q2, b2delta),
lambda t, ado: system_heat_current('bath 1', ado, Hsys(J12), Q1, b1delta),
lambda t, ado: system_heat_current('bath 2', ado, Hsys(J12), Q2, b2delta)])
"""
Explanation: Time Evolution
We fix $J_{12} = 0.1 \epsilon$ (as in Fig. 3(a-ii) of Ref. [2]) and choose the fixed coupling strength $\lambda_1 = \lambda_2 = J_{12}\, /\, (2\epsilon)$ (corresponding to $\bar\zeta = 1$ in Ref. [2]).
Using these values, we will study the time evolution of the system state and the heat currents.
End of explanation
"""
fig, axes = plt.subplots(figsize=(8,8))
axes.plot(tlist, result.expect[0], 'r', linewidth=2)
axes.set_xlabel('t', fontsize=28)
axes.set_ylabel(r"$\langle \sigma_z^1 \rangle$", fontsize=28)
pass
"""
Explanation: We first plot $\langle \sigma_z^1 \rangle$ to see the time evolution of the system state:
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 8))
ax1.plot(tlist, -np.real(result.expect[1]), color='darkorange', label='BHC (bath 1 -> system)')
ax1.plot(tlist, np.real(result.expect[2]), '--', color='darkorange', label='BHC (system -> bath 2)')
ax1.plot(tlist, -np.real(result.expect[3]), color='dodgerblue', label='SHC (bath 1 -> system)')
ax1.plot(tlist, np.real(result.expect[4]), '--', color='dodgerblue', label='SHC (system -> bath 2)')
ax1.set_xlabel('t', fontsize=28)
ax1.set_ylabel('j', fontsize=28)
ax1.set_ylim((-0.05, 0.05))
ax1.legend(loc=0, fontsize=12)
ax2.plot(tlist, -np.real(result.expect[1]), color='darkorange', label='BHC (bath 1 -> system)')
ax2.plot(tlist, np.real(result.expect[2]), '--', color='darkorange', label='BHC (system -> bath 2)')
ax2.plot(tlist, -np.real(result.expect[3]), color='dodgerblue', label='SHC (bath 1 -> system)')
ax2.plot(tlist, np.real(result.expect[4]), '--', color='dodgerblue', label='SHC (system -> bath 2)')
ax2.set_xlabel('t', fontsize=28)
ax2.set_xlim((20, 50))
ax2.set_ylim((0, 0.0002))
ax2.legend(loc=0, fontsize=12)
pass
"""
Explanation: We find a rather quick thermalization of the system state. For the heat currents, however, it takes a somewhat longer time until they converge to their long-time values:
End of explanation
"""
def heat_currents(J12, zeta_bar):
bath1 = DrudeLorentzPadeBath(Q1, zeta_bar * J12 / 2, gamma1, T1, Nk, tag='bath 1')
bath2 = DrudeLorentzPadeBath(Q2, zeta_bar * J12 / 2, gamma2, T2, Nk, tag='bath 2')
b1delta, b1term = bath1.terminator()
b2delta, b2term = bath2.terminator()
solver = HEOMSolver(qt.liouvillian(Hsys(J12)) + b1term + b2term,
[bath1, bath2], max_depth=NC, options=options)
_, steady_ados = solver.steady_state()
return bath_heat_current('bath 1', steady_ados, Hsys(J12), Q1, b1delta), \
bath_heat_current('bath 2', steady_ados, Hsys(J12), Q2, b2delta), \
system_heat_current('bath 1', steady_ados, Hsys(J12), Q1, b1delta), \
system_heat_current('bath 2', steady_ados, Hsys(J12), Q2, b2delta)
# Define number of points to use for final plot
plot_points = 100
progress = IntProgress(min=0, max=(3*plot_points))
display(progress)
zeta_bars = []
j1s = [] # J12 = 0.01
j2s = [] # J12 = 0.1
j3s = [] # J12 = 0.5
# --- J12 = 0.01 ---
NC = 7
# xrange chosen so that 20 is maximum, centered around 1 on a log scale
for zb in np.logspace(-np.log(20), np.log(20), plot_points, base=np.e):
j1, _, _, _ = heat_currents(0.01, zb) # the four currents are identical in the steady state
zeta_bars.append(zb)
j1s.append(j1)
progress.value += 1
# --- J12 = 0.1 ---
for zb in zeta_bars:
# higher HEOM cut-off is necessary for large coupling strength
if zb < 10:
NC = 7
else:
NC = 12
j2, _, _, _ = heat_currents(0.1, zb)
j2s.append(j2)
progress.value += 1
# --- J12 = 0.5 ---
for zb in zeta_bars:
if zb < 5:
NC = 7
elif zb < 10:
NC = 15
else:
NC = 20
j3, _, _, _ = heat_currents(0.5, zb)
j3s.append(j3)
progress.value += 1
progress.close()
np.save('data/qhb_zb.npy', zeta_bars)
np.save('data/qhb_j1.npy', j1s)
np.save('data/qhb_j2.npy', j2s)
np.save('data/qhb_j3.npy', j3s)
"""
Explanation: Steady-state currents
Here, we try to reproduce the HEOM curves in Fig. 3(a) of Ref. [1] by varying the coupling strength and finding the steady state for each coupling strength.
End of explanation
"""
zeta_bars = np.load('data/qhb_zb.npy')
j1s = np.load('data/qhb_j1.npy')
j2s = np.load('data/qhb_j2.npy')
j3s = np.load('data/qhb_j3.npy')
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
fig, axes = plt.subplots(figsize=(12,7))
axes.plot(zeta_bars, -1000 * 100 * np.real(j1s), 'b', linewidth=2, label=r"$J_{12} = 0.01\, \epsilon$")
axes.plot(zeta_bars, -1000 * 10 * np.real(j2s), 'r--', linewidth=2, label=r"$J_{12} = 0.1\, \epsilon$")
axes.plot(zeta_bars, -1000 * 2 * np.real(j3s), 'g-.', linewidth=2, label=r"$J_{12} = 0.5\, \epsilon$")
axes.set_xscale('log')
axes.set_xlabel(r"$\bar\zeta$", fontsize=30)
axes.set_xlim((zeta_bars[0], zeta_bars[-1]))
axes.set_ylabel(r"$j_{\mathrm{ss}}\; /\; (\epsilon J_{12}) \times 10^3$", fontsize=30)
axes.set_ylim((0, 2))
axes.legend(loc=0)
#fig.savefig("figures/figHeat.pdf")
pass
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Create Plot
End of explanation
"""
|
zchq88/MyUdacityDeepLearningProject | 1/dlnd-your-first-neural-network.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes ** -0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes ** -0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
self.activation_function_derivative = lambda x: x * (1 - x)  # sigmoid derivative, written in terms of the sigmoid's output
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
output_grad = 1 # derivative of the output activation f(x) = x is just 1
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = self.activation_function_derivative(hidden_outputs) # hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors * output_grad, hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 2000
learning_rate = 0.01
hidden_nodes = 23
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
(English is not my first language, so I used translation software; I hope it conveys what I mean.) I learned the backpropagation algorithm in this lesson and now understand how weights are updated; the review also helped me understand how to choose the number of hidden nodes. For this project, I think holidays (meaning holidays like Christmas, not weekends) should also be a feature: the fit around Christmas is visibly inaccurate because the training data contains no Christmas data, so the neural network cannot predict Christmas well. In China, it would likely fail the same way on Chinese New Year. Such special holidays are not well represented in the training data, so the model cannot fit them. I also tried doubling the epochs and found that the training and validation losses improved, but the improvement became slower and slower.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-3/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
adlyons/AWOT | examples/awot_utilities_intro.ipynb | gpl-2.0 | # Load the needed packages
import os
import matplotlib.pyplot as plt
import numpy as np
from netCDF4 import Dataset
import awot
%matplotlib inline
"""
Explanation: <h2>Introducing miscellaneous utilities in AWOT.</h2>
<h4>This notebook will grow over time as utilities are added and I have time to update.</h4>
End of explanation
"""
# Released data file
wcrf1 = os.path.join("/Users/guy/data/king_air/owles2013/wcr", "WCR.OWLES13.20131215.225944_234806.up-down.nc")
# Supplementary file with corrected velocity data
wcrf2 = os.path.join("/Users/guy/data/king_air/owles2013/wcr/", "W-CORRECTED.WCR.OWLES13.20131215.225944_234806.up-down.nc")
"""
Explanation: <b> First we'll need some data to interact with</b>
End of explanation
"""
wcr = awot.io.read_wcr2(fname=wcrf1)
"""
Explanation: <b>Read in the radar data</b>
End of explanation
"""
nc = Dataset(wcrf2)
velcor = nc.variables['Velocity_cor_2']
awot.util.add_dict_to_awot_fields(wcr, 'velocity_corrected', data=velcor[:],
units=velcor.units, longname=velcor.long_name, stdname="Corrected velocity")
print(wcr['fields']['velocity']['data'].shape, wcr['fields']['velocity_corrected']['data'].shape)
print(np.ma.min(wcr['fields']['velocity']['data']), np.ma.max(wcr['fields']['velocity']['data']))
print(np.ma.min(wcr['fields']['velocity_corrected']['data']), np.ma.max(wcr['fields']['velocity_corrected']['data']))
"""
Explanation: <h3>Data instances</h3>
<b>Read a variable from another file and add it to the AWOT dictionary. A mask of invalid data is automatically applied. Additional masking can be accomplished by setting the <i>mask_value</i> keyword.</b>
End of explanation
"""
start_time = "2013-12-15 23:05:00"
end_time = "2013-12-15 23:15:00"
# Create subsets of arrays
refsub = awot.util.time_subset_awot_dict(wcr['time'], wcr['fields']['reflectivity'],
start_time, end_time)
velsub = awot.util.time_subset_awot_dict(wcr['time'], wcr['fields']['velocity'],
start_time, end_time)
altsub = awot.util.time_subset_awot_dict(wcr['time'], wcr['altitude'],
start_time, end_time)
print(wcr['fields']['reflectivity']['data'].shape, refsub['data'].shape)
print(wcr['fields']['velocity']['data'].shape, velsub['data'].shape)
print(wcr['altitude']['data'].shape, altsub['data'].shape)
"""
Explanation: <b>Just as in the plotting routines, time can be subset using a date string. But maybe you want to just return a subsetted dictionary for use. The <i>time_subset_awot_dict</i> function does this.</b>
End of explanation
"""
nexf = os.path.join("/Users/guy/data/nexrad/KILN/nex2/20140429", "KILN20140430_004708_V06")
rnex = awot.io.read_ground_radar(nexf, map_to_awot=False)
rnex.fields.keys()
"""
Explanation: <b>AWOT uses Py-ART to read many radar files. Therefore we can read through AWOT.</b>
End of explanation
"""
rnex = awot.io.read_ground_radar(nexf, map_to_awot=True)
rnex.keys()
"""
Explanation: <b> By changing the <i>map_to_awot</i> key we can convert the Py-ART radar instance to an AWOT radar instance. Note this is the DEFAULT behavior to make working with the AWOT package a bit easier.</b>
End of explanation
"""
flname = os.path.join("/Users/guy/data/king_air/pecan2015", "20150716.c1.nc")
fl1 = awot.io.read_netcdf(fname=flname, platform='uwka')
"""
Explanation: <b>An experimental KMZ file creation is available for flight data.</b>
End of explanation
"""
awot.util.write_track_kmz(fl1, 'altitude', show_legend=False, end_time="2016-01-01T00:00:00")
"""
Explanation: <b>Now we can create a KMZ file of the track. This saves a KMZ file to current working directory if not specified.</b>
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | 0.8/notebooks/auto_examples/ask-and-tell.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
np.random.seed(1234)
import matplotlib.pyplot as plt
from skopt.plots import plot_gaussian_process
"""
Explanation: Async optimization Loop
Bayesian optimization is used to tune parameters for walking robots or other
experiments that are not a simple (expensive) function call.
Tim Head, February 2017.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
They often follow a pattern a bit like this:
1. ask for a new set of parameters
2. walk to the experiment and program in the new parameters
3. observe the outcome of running the experiment
4. walk back to your laptop and tell the optimizer about the outcome
5. go to step 1
A setup like this is difficult to implement with the `*_minimize()` function
interface. This is why **scikit-optimize** has an ask-and-tell interface that
you can use when you want to control the execution of the optimization loop.
This notebook demonstrates how to use the ask and tell interface.
End of explanation
"""
from skopt.learning import ExtraTreesRegressor
from skopt import Optimizer
noise_level = 0.1
"""
Explanation: The Setup
We will use a simple 1D problem to illustrate the API. This is a little bit
artificial as you normally would not use the ask-and-tell interface if you
had a function you can call to evaluate the objective.
End of explanation
"""
def objective(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))\
+ np.random.randn() * noise_level
def objective_wo_noise(x, noise_level=0):
return objective(x, noise_level=0)
"""
Explanation: Our 1D toy problem: this is the function we are trying to
minimize
End of explanation
"""
# Plot f(x) + contours
plt.set_cmap("viridis")
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = np.array([objective(x_i, noise_level=0.0) for x_i in x])
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
plt.legend()
plt.grid()
plt.show()
"""
Explanation: Here a quick plot to visualize what the function looks like:
End of explanation
"""
opt = Optimizer([(-2.0, 2.0)], "GP", acq_func="EI",
acq_optimizer="sampling",
initial_point_generator="lhs")
# To obtain a suggestion for the point at which to evaluate the objective
# you call the ask() method of opt:
next_x = opt.ask()
print(next_x)
"""
Explanation: Now we set up the :class:Optimizer class. The arguments follow the meaning and
naming of the `*_minimize()` functions. An important difference is that
you do not pass the objective function to the optimizer.
End of explanation
"""
f_val = objective(next_x)
opt.tell(next_x, f_val)
"""
Explanation: In a real world use case you would probably go away and use this
parameter in your experiment and come back a while later with the
result. In this example we can simply evaluate the objective function
and report the value back to the optimizer:
End of explanation
"""
for i in range(9):
next_x = opt.ask()
f_val = objective(next_x)
res = opt.tell(next_x, f_val)
"""
Explanation: Like `*_minimize()`, the first few points are suggestions from
the initial point generator, as there
is no data yet with which to fit a surrogate model.
End of explanation
"""
_ = plot_gaussian_process(res, objective=objective_wo_noise,
noise_level=noise_level,
show_next_point=False,
show_acq_func=True)
plt.show()
"""
Explanation: We can now plot the random suggestions and the first model that has been
fit:
End of explanation
"""
for i in range(10):
next_x = opt.ask()
f_val = objective(next_x)
res = opt.tell(next_x, f_val)
_ = plot_gaussian_process(res, objective=objective_wo_noise,
noise_level=noise_level,
show_next_point=True,
show_acq_func=True)
plt.show()
"""
Explanation: Let us sample a few more points and plot the optimizer again:
End of explanation
"""
import pickle
with open('my-optimizer.pkl', 'wb') as f:
pickle.dump(opt, f)
with open('my-optimizer.pkl', 'rb') as f:
opt_restored = pickle.load(f)
"""
Explanation: By using the :class:Optimizer class directly you get control over the
optimization loop.
You can also pickle your :class:Optimizer instance if you want to end the
process running it and resume it later. This is handy if your experiment
takes a very long time and you want to shutdown your computer in the
meantime:
End of explanation
"""
|
stephensekula/smu-honors-physics | fractals_random/fractal_from_random.ipynb | mit | import pickle,glob
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%pylab inline
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Generating-Fractal-From-Random-Points---The-Chaos-Game" data-toc-modified-id="Generating-Fractal-From-Random-Points---The-Chaos-Game-1"><span class="toc-item-num">1 </span>Generating Fractal From Random Points - The Chaos Game</a></div><div class="lev2 toc-item"><a href="#Initial-Definitions" data-toc-modified-id="Initial-Definitions-1.1"><span class="toc-item-num">1.1 </span>Initial Definitions</a></div><div class="lev2 toc-item"><a href="#Make-A-Fractal" data-toc-modified-id="Make-A-Fractal-1.2"><span class="toc-item-num">1.2 </span>Make A Fractal</a></div><div class="lev4 toc-item"><a href="#Regular-Polygons" data-toc-modified-id="Regular-Polygons-1.2.0.1"><span class="toc-item-num">1.2.0.1 </span>Regular Polygons</a></div><div class="lev4 toc-item"><a href="#Exploring-Further:-Dimension" data-toc-modified-id="Exploring-Further:-Dimension-1.2.0.2"><span class="toc-item-num">1.2.0.2 </span>Exploring Further: Dimension</a></div><div class="lev4 toc-item"><a href="#Randomness-on-Large-Scales" data-toc-modified-id="Randomness-on-Large-Scales-1.2.0.3"><span class="toc-item-num">1.2.0.3 </span>Randomness on Large Scales</a></div><div class="lev2 toc-item"><a href="#Learn-More:" data-toc-modified-id="Learn-More:-1.3"><span class="toc-item-num">1.3 </span>Learn More:</a></div><div class="lev2 toc-item"><a href="#Modeling-Life" data-toc-modified-id="Modeling-Life-1.4"><span class="toc-item-num">1.4 </span>Modeling Life</a></div><div class="lev4 toc-item"><a href="#For-Barnsley's-Fern:" data-toc-modified-id="For-Barnsley's-Fern:-1.4.0.1"><span class="toc-item-num">1.4.0.1 </span>For Barnsley's Fern:</a></div>
End of explanation
"""
def placeStartpoint(npts,fixedpts):
#Start Point
#start = (0.5,0.5)
start = (np.random.random(),np.random.random())
if fixedpts == []: #generates a set of random vertices
for i in range(npts):
randx = np.random.random()
randy = np.random.random()
point = (randx,randy)
fixedpts.append(point)
return (start,fixedpts)
def choosePts(npts,fixedpts,frac):
#chooses a vertex at random
#further rules could be applied here
roll = floor(npts*np.random.random())
point = fixedpts[int(roll)]
return point
def placeItteratePts(npts,itt,start,fixedpts,frac):
ittpts = []
for i in range(itt):
point = choosePts(npts,fixedpts,frac) #chooses a vertex at random
# halfway = ((point[0]+start[0])*frac,(point[1]+start[1])*frac) #calculates the halfway point between the starting point and the vertex
halfway = ((point[0]-start[0])*(1.0 - frac)+start[0],(point[1]-start[1])*(1.0 - frac)+start[1])
ittpts.append(halfway)
start = halfway #sets the starting point to the new point
return ittpts
def plotFractal(start,fixedpts,ittpts):
# set axes range
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
plt.axes().set_aspect('equal')
#plots the vertices
plt.scatter(transpose(fixedpts)[0],transpose(fixedpts)[1],alpha=0.8, c='black', edgecolors='none', s=30)
#plots the starting point
plt.scatter(start[0],start[1],alpha=0.8, c='red', edgecolors='none', s=30)
#plots the iterated points
plt.scatter(transpose(ittpts)[0],transpose(ittpts)[1],alpha=0.5, c='blue', edgecolors='none', s=2)
return
def GenerateFractal(npts,frac,itt,reg=False):
#Error Control
if npts < 1 or frac >= 1.0 or frac <= 0.0 or type(npts) is not int or type(frac) is not float or type(itt) is not int:
print("number of points must be a positive integer, compression fraction must be a positive float less than 1.0, itt must be a positive integer")
return
if frac > 0.5:
print("Warning: compression fractions over 1/2 do not lead to fractals")
#Initialize Vertices
if not reg:
fixedpts = [] #Random Vertices
else:
if npts == 3:
fixedpts = [(0.0,0.0),(1.0,0.0),(0.5,0.5*sqrt(3.0))] #Equilateral Triangle (npts = 3)
elif npts == 4:
fixedpts = [(0.0,0.0),(1.0,0.0),(1.0,1.0),(0.0,1.0)] #Square
elif npts == 5:
fixedpts = [(0.0,2./(1+sqrt(5.))),(0.5-2./(5+sqrt(5.)),0.0),(0.5,1.0),(0.5+2./(5+sqrt(5.)),0.0),(1.0,2./(1+sqrt(5.)))] #Regular Pentagon
elif npts == 6:
fixedpts = [(0.0,0.5),(1./4,0.5+.25*sqrt(3.)),(3./4,0.5+.25*sqrt(3.)),(1.0,0.5),(3./4,0.5-.25*sqrt(3.)),(1./4,0.5-.25*sqrt(3.))] #Regular Hexagon
elif npts == 8:
fixedpts = [(0.0,0.0),(1.0,0.0),(1.0,1.0),(0.0,1.0),(0.0,0.5),(1.0,0.5),(0.5,0.0),(0.5,1.0)] #Squares
elif npts == 2:
fixedpts = [(0.0,0.0),(1.0,1.0)] #Line
elif npts == 1:
fixedpts = [(0.5,0.5)] #Single point
else:
print("No regular polygon stored with that many verticies, switching to default with randomly assigned verticies")
fixedpts = [] #Random Vertices
#Compression Fraction
# frac = 1.0/2.0 #Sierpinski's Triangle (npts = 3)
# frac = 1.0/2.0 #Sierpinski's "Square" (filled square, npts = 4)
# frac = 1.0/3.0 #Sierpinski's Pentagon (npts = 5)
# frac = 3.0/8.0 #Sierpinski's Hexagon (npts = 6)
if len(fixedpts) != npts and len(fixedpts) != 0:
print("The number of verticies don't match the length of the list of verticies. If you want the verticies generated at random, set fixedpts to []")
return
if len(fixedpts) != 0:
print("Fractal Dimension = {}".format(-log(npts)/log(frac)))
(start, fixedpts) = placeStartpoint(npts,fixedpts)
ittpts = placeItteratePts(npts,itt,start,fixedpts,frac)
plotFractal(start,fixedpts,ittpts)
return
"""
Explanation: Generating Fractal From Random Points - The Chaos Game
Initial Definitions
End of explanation
"""
# Call the GenerateFractal function with a number of vertices, a number of iterations, and the compression fraction
# The starting vertices are random by default. An optional input of True will set the vertices to those of a regular polygon.
GenerateFractal(7,.5,5000)
"""
Explanation: Make A Fractal
End of explanation
"""
GenerateFractal(3,.5,5000,True)
GenerateFractal(5,1./3,50000,True)
GenerateFractal(6,3./8,50000,True)
GenerateFractal(8,1./3,50000,True)
"""
Explanation: Regular Polygons
End of explanation
"""
GenerateFractal(1,.5,50000,True)
GenerateFractal(2,.5,50000,True)
GenerateFractal(4,.5,50000,True)
"""
Explanation: Exploring Further: Dimension
End of explanation
"""
GenerateFractal(10,.5,100)
GenerateFractal(10,.5,5000)
GenerateFractal(100,.5,5000)
GenerateFractal(100,.5,100000)
"""
Explanation: Randomness on Large Scales
End of explanation
"""
def makeFern(f,itt):
colname = ["percent","a","b","c","d","e","f"]
print(pd.DataFrame(data=np.array(f), columns = colname))
x, y = 0.5, 0.0
xypts=[]
if abs(sum(f[j][0] for j in range(len(f)))-1.0) > 1e-10:
print("Probabilities must sum to 1")
return
for i in range(itt):
rand = (np.random.random())
cond = 0.0
for j in range(len(f)):
if (cond <= rand) and (rand <= (cond+f[j][0])):
xnew = f[j][1]*x+f[j][2]*y+f[j][5]
ynew = f[j][3]*x+f[j][4]*y+f[j][6]
x, y = xnew, ynew
xypts.append((x,y))
cond = cond + f[j][0]
xmax,ymax = max(abs(transpose(xypts)[0])),max(abs(transpose(xypts)[1]))
plt.axes().set_aspect('equal')
color = transpose([[abs(r)/xmax for r in transpose(xypts)[0]],[abs(g)/ymax for g in transpose(xypts)[1]],[b/itt for b in range(itt)]])
plt.scatter(transpose(xypts)[0],transpose(xypts)[1],alpha=0.5, facecolors=color, edgecolors='none', s=1)
"""
Explanation: Learn More:
Chaos Game Wiki
Numberphile Video
Chaos in the Classroom
Chaos Rules!
Barnsley Fern
Modeling Life
End of explanation
"""
f = ((0.01,0.0,0.0,0.0,0.16,0.0,0.0),
(0.85,0.85,0.08,-0.08,0.85,0.0,1.60),
(0.07,0.20,-0.26,0.23,0.22,0.0,1.60),
(0.07,-0.15,0.28,0.26,0.24,0.0,0.44))
makeFern(f,5000)
"""
Explanation: For Barnsley's Fern:
Use the following values
|Percent|A|B|C|D|E|F|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|0.01|0.0|0.0|0.0|0.16|0.0|0.0|
|0.85|0.85|0.04|-0.04|0.85|0.0|1.60|
|0.07|0.20|-0.26|0.23|0.22|0.0|1.60|
|0.07|-0.15|0.28|0.26|0.24|0.0|0.44|
Of course, this is only one solution, so try changing the values. Some values modify the curl, some change the thickness, others completely rearrange the structure.
End of explanation
"""
|
Shatnerz/rhc | connection.ipynb | mit | import sys
sys.path.append('/opt/rhc')
"""
Explanation: Defining outbound connections
Start by making sure rhc is in python's path,
End of explanation
"""
import rhc.micro as micro
import rhc.async as async
"""
Explanation: and importing a couple of components.
End of explanation
"""
p=micro.load_connection([
'CONNECTION placeholder http://jsonplaceholder.typicode.com',
'RESOURCE document /posts/1',
])
"""
Explanation: Connections to HTTP resources can be defined using the
CONNECTION and RESOURCE directives in a micro file.
A simple definition follows.
End of explanation
"""
async.wait(micro.connection.placeholder.document(_trace=True))
"""
Explanation: Now, make a connection and see what happens.
End of explanation
"""
micro.load_connection([
'CONNECTION placeholder http://jsonplaceholder.typicode.com',
'RESOURCE document /posts/{document_id}',
])
"""
Explanation: What happened?
This code performs a GET on http://jsonplaceholder.typicode.com/posts/1 and prints the result.
There are simpler ways to perform this task, like using the wonderful requests library, but
this solution is designed to perform well in a microservice environment where numerous connections
are being handled simultaneously.
How does it work?
Function load_connection
The load_connection helper function
allows for the dynamic loading of connection definitions. In this case, the definition is contained
in a list, but could also be loaded from a file by specifying the file's name, or by specifying
a dot-separated path to the file in the python code tree.
In a microservice implementation, the connection definitions are included in the
micro file, or in one of the imported files.
This function is included for experimentation and development.
CONNECTION
The CONNECTION directive provides a name and a url for a service. The connection is
added to rhc.micro, and can be accessed as rhc.micro.connection.{name}.
Since rhc.micro is imported as micro, the rhc preface is skipped in the example.
All by itself, a CONNECTION doesn't provide much.
RESOURCE
The RESOURCE directive adds a specific HTTP resource to the most recently specified CONNECTION.
In this case, the resource name is document and the path to the resource is /posts/1;
when combined with the CONNECTION, the full resource url is
http://jsonplaceholder.typicode.com/posts/1.
The resource is added to the connection, and can be accessed as
micro.connection.{connection name}.{resource name} or, specifically,
micro.connection.placeholder.document.
Function wait
The wait helper function starts a connection to the resource and waits until
it is done, printing the result.
This hints at the asynchronous activity underpinning micro.connection, which
will become more apparent in subsequent examples.
In a microservice, the wait function is never used, since it would cause the
entire service to block until wait completes.
This function is included for testing and development.
Adding Parameters
It would be nice to parameterize our RESOURCE so that we can specify a document other than
/posts/1. This is accomplished by changing the RESOURCE directive to include a
curly-brace delimited name
End of explanation
"""
async.wait(micro.connection.placeholder.document(1))
"""
Explanation: which adds a required argument to the micro.connection.placeholder.document function.
Now the wait call looks like this:
End of explanation
"""
micro.load_connection([
'CONNECTION placeholder http://jsonplaceholder.typicode.com',
'RESOURCE document /posts/{document_id} trace=true',
])
"""
Explanation: Adding non-path parameters
Although it doesn't make sense to add a document to a GET request,
we can do it for demonstration purposes. Add a trace=true to the
RESOURCE like this:
End of explanation
"""
import logging
logging.basicConfig(level=logging.DEBUG)
"""
Explanation: This will log the entire HTTP document when it is sent,
making it easy for us to see what is going on. Make sure
to enable debug logging, by doing something like the following:
End of explanation
"""
async.wait(micro.connection.placeholder.document('1', a=1, b=2))
"""
Explanation: Note: In a production microservice, you should never enable trace (e.g. trace=true). Documents
often contain sensitive information that you don't want to end
up in logs.
A json document will be assembled from the keyword arguments to
micro.connection.placeholder.document. Try running the example with this
wait call:
End of explanation
"""
micro.load_connection([
'CONNECTION placeholder http://jsonplaceholder.typicode.com',
'RESOURCE document /posts/{document_number} trace=true',
'REQUIRED first_name',
'OPTIONAL planet default=earth',
])
"""
Explanation: Required and Optional parameters
Most REST documents are not composed of random collections of keyword
arguments. With the addition of two directives, specific arguments
can be required or optionally specified for each RESOURCE.
Change the connection list to look like this:
End of explanation
"""
async.wait(micro.connection.placeholder.document(1, 2))
"""
Explanation: The document resource now has two required arguments: document_id from
the path and first_name from the REQUIRED directive.
If the optional argument planet is not supplied, it will use the default value
of earth.
Run the example with this wait call:
End of explanation
"""
tanghaibao/goatools | notebooks/parents_and_ancestors.ipynb | bsd-2-clause

import os
from goatools.obo_parser import GODag
# Load a small test GO DAG and all the optional relationships,
# like 'regulates' and 'part_of'
godag = GODag('../tests/data/i126/viral_gene_silence.obo',
optional_attrs={'relationship'})
"""
Explanation: How to traverse to GO parents and ancestors
Traverse immediate parents or all ancestors with or without user-specified optional relationships
Parents and Ancestors described
Code to get Parents and Ancestors
Get parents through is_a relationship
Get parents through is_a relationship and optional relationship, regulates.
Get ancestors through is_a relationship
Get ancestors through is_a relationship and optional relationship, regulates.
Parents and Ancestors
Parents
Parents are terms directly above a GO term
The yellow term, regulation of metabolic process, has one or two parents, depending on the relationships used.
1) If using only the default is_a relationship, the only parent is circled in green:
regulation of biological process
2) If adding the optional relationship, regulates, the two parents are circled in purple:
regulation of biological process
metabolic process
Ancestors
Ancestors are all terms above a GO term, traversing up all of the GO hierarchy.
3) If adding the optional relationship, regulates, there are four ancestors, circled in blue:
biological_process
biological regulation
regulation of biological process
metabolic process
If using only the default is_a relationship, there are three ancestors (not circled)
biological_process
biological regulation
regulation of biological process
<img src="images/parents_and_ancestors.png" alt="parents_and_ancestors" width="550">
Code to get Parents and Ancestors
End of explanation
"""
GO_ID = 'GO:0019222' # regulation of metabolic process
from goatools.godag.go_tasks import get_go2parents
optional_relationships = set()
go2parents_isa = get_go2parents(godag, optional_relationships)
print('{GO} parent: {P}'.format(
GO=GO_ID,
P=go2parents_isa[GO_ID]))
"""
Explanation: Get parents through is_a relationship
Parent is circled in green
End of explanation
"""
optional_relationships = {'regulates', 'negatively_regulates', 'positively_regulates'}
go2parents_reg = get_go2parents(godag, optional_relationships)
print('{GO} parents: {P}'.format(
GO=GO_ID,
P=go2parents_reg[GO_ID]))
"""
Explanation: Get parents through is_a relationship and optional relationship, regulates
Parents are circled in purple
End of explanation
"""
from goatools.gosubdag.gosubdag import GoSubDag
gosubdag_r0 = GoSubDag([GO_ID], godag, prt=None)
print('{GO} ancestors: {P}'.format(
GO=GO_ID,
P=gosubdag_r0.rcntobj.go2ancestors[GO_ID]))
"""
Explanation: Get ancestors through is_a relationship
Not circled, but can be seen in the figure
End of explanation
"""
gosubdag_r1 = GoSubDag([GO_ID], godag, relationships=optional_relationships, prt=None)
print('{GO} ancestors: {P}'.format(
GO=GO_ID,
P=gosubdag_r1.rcntobj.go2ancestors[GO_ID]))
"""
Explanation: Get ancestors through is_a relationship and optional relationship, regulates
Circled in blue
End of explanation
"""
utensil/julia-playground | py/CGA-galgebra.ipynb | mit

cga3d = Ga(r'e_1 e_2 e_3 e e_{0}',g='1 0 0 0 0,0 1 0 0 0,0 0 1 0 0,0 0 0 0 -1,0 0 0 -1 0')
cga3d.g
e1,e2,e3,e,e0 = cga3d.mv()
ep = Rational(1,2) * e - e0
ep
en = Rational(1,2) * e + e0
en
ep**2
en**2
ep|en
e0**2
e**2
E = e^e0
E
E == ep^en == ep * en
E**2
E.rev() == -E
E * ep == -en
E * en == -ep
ep * E == en
en * E == ep
E * e == - e * E == -e
E * e0 == - e0 * E == e0
1 - E == -e * e0
1 + E == -e0 * e
"""
Explanation: These definitions follow Homogeneous Coordinates for Computational Geometry by Hestenes and Rockwood.
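In symbols, a sketch of the relations the code above verifies (using the same symbols as the code):

```latex
e^2 = e_0^2 = 0, \quad e \cdot e_0 = -1, \quad
e_{+} = \tfrac{1}{2} e - e_0, \quad e_{-} = \tfrac{1}{2} e + e_0, \\
e_{+}^2 = 1, \quad e_{-}^2 = -1, \quad e_{+} \cdot e_{-} = 0, \quad
E = e \wedge e_0 = e_{+} e_{-}, \quad E^2 = 1, \quad \tilde{E} = -E
```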
End of explanation
"""
cga3d = Ga(r'e_1 e_2 e_3 e_{+} e_{-}',g='1 0 0 0 0,0 1 0 0 0,0 0 1 0 0,0 0 0 1 0,0 0 0 0 -1')
cga3d.g
e1,e2,e3,ep,en = cga3d.mv()
no = e0 = en - ep
e0
e = ni = Rational(1, 2) * (ep + en)
e
ep**2
en**2
ep|en
e0**2
e**2
E = e^e0
E
E == ep^en == ep * en
E**2
E.rev() == -E
E * ep == -en
E * en == -ep
ep * E == en
en * E == ep
E * e == - e * E == -e
E * e0 == - e0 * E == e0
1 - E == -e * e0
1 + E == -e0 * e
"""
Explanation: These definitions follow https://observablehq.com/@enkimute/ganja-js-cheat-sheets
End of explanation
"""
lrayle/rental-listings-census | src/Geographically_weight_regression.ipynb | bsd-3-clause

# # TODO: add putty connection too.
# #read SSH connection parameters
# with open('ssh_settings.json') as settings_file:
# settings = json.load(settings_file)
# hostname = settings['hostname']
# username = settings['username']
# password = settings['password']
# local_key_dir = settings['local_key_dir']
# census_dir = 'synthetic_population/'
# """Remote directory with census data"""
# results_dir = 'craigslist_census/'
# """Remote directory for results"""
# # estbalish SSH connection
# ssh = paramiko.SSHClient()
# ssh.load_host_keys(local_key_dir)
# ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# ssh.connect(hostname,username=username, password=password)
# sftp = ssh.open_sftp()
"""
Explanation: This notebook explores merged craigslist listings/census data and fits some initial models
Remote connection parameters
If data is stored remotely
End of explanation
"""
# def read_listings_file(fname):
# """Read csv file via SFTP and return as dataframe."""
# with sftp.open(os.path.join(listings_dir,fname)) as f:
# df = pd.read_csv(f, delimiter=',', dtype={'fips_block':str,'state':str,'mpo_id':str}, date_parser=['date'])
# # TODO: parse dates.
# return df
def log_var(x):
"""Return log of x, but NaN if zero."""
if x==0:
return np.nan
else:
return np.log(x)
def create_census_vars(df):
"""Make meaningful variables and return the dataframe."""
df['pct_white'] = df['race_of_head_1']/df['hhs_tot']
df['pct_black'] = df['race_of_head_2']/df['hhs_tot']
df['pct_amer_native'] = df['race_of_head_3']/df['hhs_tot']
df['pct_alaska_native'] = df['race_of_head_4']/df['hhs_tot']
df['pct_any_native'] = df['race_of_head_5']/df['hhs_tot']
df['pct_asian'] = df['race_of_head_6']/df['hhs_tot']
df['pct_pacific'] = df['race_of_head_7']/df['hhs_tot']
df['pct_other_race'] = df['race_of_head_8']/df['hhs_tot']
df['pct_mixed_race'] = df['race_of_head_9']/df['hhs_tot']
df['pct_mover'] = df['recent_mover_1']/df['hhs_tot']
df['pct_owner'] = df['tenure_1']/df['hhs_tot']
df['avg_hh_size'] = df['persons_tot']/df['hhs_tot']
df['cars_per_hh'] = df['cars_tot']/df['hhs_tot']
df['ln_rent'] = df['rent'].apply(log_var)
df['ln_income'] = df.income_med.apply(log_var)
return df
def filter_outliers(df, rent_range=(100,10000),sqft_range=(10,5000)):
"""Drop outliers from listings dataframe. For now, only need to filter out rent and sq ft.
Args:
df: Dataframe with listings. Cols names include ['rent','sqft']
rent_range (tuple): min and max rent
sqft_range (tuple): min and max sqft
Returns:
DataFrame: listings data without outliers.
"""
n0=len(df)
df=df[(df.rent>=rent_range[0])&(df.rent<rent_range[1])]
n1=len(df)
print('Dropped {} outside rent range ${}-${}'.format(n0-n1,rent_range[0],rent_range[1]))
df=df[(df.sqft>=sqft_range[0])&(df.sqft<sqft_range[1])]
n2=len(df)
print('Dropped {} outside sqft range {}-{} sqft. {} rows remaining'.format(n1-n2,sqft_range[0],sqft_range[1],len(df)))
return(df)
# get list of files and load.
# for remotely stored data by state (just do one state for now)
state='CA'
infile='cl_census_{}.csv'.format(state)
#data = read_listings_file(infile) # uncomment to get remote data.
# for local data:
data_dir = '../data/'
data_file = r'..\data\sfbay_listings_03172017.csv'
data = pd.read_csv(os.path.join(data_file),parse_dates=[1],dtype={'listing_id':str, 'rent':float, 'bedrooms':float, 'bathrooms':float, 'sqft':float,
'rent_sqft':float, 'fips_block':str, 'state':str, 'region':str, 'mpo_id':str, 'lng':float, 'lat':float,
'cars_tot':float, 'children_tot':float, 'persons_tot':float, 'workers_tot':float,
'age_of_head_med':float, 'income_med':float, 'hhs_tot':float, 'race_of_head_1':float,
'race_of_head_2':float, 'race_of_head_3':float, 'race_of_head_4':float, 'race_of_head_5':float,
'race_of_head_6':float, 'race_of_head_7':float, 'race_of_head_8':float, 'race_of_head_9':float,
'recent_mover_0':float, 'recent_mover_1':float, 'tenure_1':float, 'tenure_2':float})
print(len(data))
data.head()
# for census vars, NA really means 0...
census_cols = ['cars_tot', 'children_tot','persons_tot', 'workers_tot', 'age_of_head_med', 'income_med','hhs_tot', 'race_of_head_1', 'race_of_head_2', 'race_of_head_3','race_of_head_4', 'race_of_head_5', 'race_of_head_6', 'race_of_head_7','race_of_head_8', 'race_of_head_9', 'recent_mover_0', 'recent_mover_1','tenure_1', 'tenure_2']
for col in census_cols:
data[col] = data[col].fillna(0)
"""
Explanation: Data Preparation
End of explanation
"""
# create useful variables
data = create_census_vars(data)
# define some feature to include in the model.
features_to_examine = ['rent','ln_rent', 'bedrooms','bathrooms','sqft','pct_white', 'pct_black','pct_asian','pct_mover','pct_owner','income_med','age_of_head_med','avg_hh_size','cars_per_hh']
data[features_to_examine].describe()
"""
Explanation: create variables
variable codes
Race codes (from PUMS)
1 .White alone
2 .Black or African American alone
3 .American Indian alone
4 .Alaska Native alone
5 .American Indian and Alaska Native tribes specified; or American
.Indian or Alaska native, not specified and no other races
6 .Asian alone
7 .Native Hawaiian and Other Pacific Islander alone
8 .Some other race alone
9 .Two or more major race groups
tenure_1 = owner (based on my guess; didn't match the PUMS codes)
mover_1 = moved past year (based on my guess)
End of explanation
"""
# I've already identified these ranges as good at exluding outliers
rent_range=(100,10000)
sqft_range=(10,5000)
data = filter_outliers(data, rent_range=rent_range, sqft_range=sqft_range)
# Use this to explore outliers yourself.
g=sns.distplot(data['rent'], kde=False)
g.set_xlim(0,10000)
g=sns.distplot(data['sqft'], kde=False)
g.set_xlim(0,10000)
"""
Explanation: Filter outliers
End of explanation
"""
# examine NA's
print('Total rows:',len(data))
print('Rows with any NA:',len(data[pd.isnull(data).any(axis=1)]))
print('Rows with bathroom NA:',len(data[pd.isnull(data.bathrooms)]))
print('% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)])/len(data))
"""
Explanation: Examine missing data
End of explanation
"""
#for d in range(1,31):
# print(d,'% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)&((data.date.dt.month==12)&(data.date.dt.day==d))])/len(data[(data.date.dt.month==12)&(data.date.dt.day==d)]))
"""
Explanation: uh oh, 74% are missing the bathrooms feature. Might have to omit that one. Only 0.02% of rows have other missing values, so that should be ok.
End of explanation
"""
# uncomment to only use data after Dec 21.
#data=data[(data.date.dt.month>=12)&(data.date.dt.day>=22)]
#data.shape
# Uncomment to drop NA's
#data = data.dropna()
#print('Dropped {} rows with NAs'.format(n0-len(data)))
"""
Explanation: Bathrooms were added on Dec 21. After that, if bathrooms aren't in the listing, the listing is thrown out. The commented-out loop above can be used to confirm the date the bathrooms column was added. So if we need to use the bathrooms feature, we can use listings from Dec 22 onward.
End of explanation
"""
p=sns.distplot(data.rent, kde=False)
p.set_title('rent')
p=sns.distplot(data.ln_rent, kde=False)
p.set_title('ln rent')
plot_rows = math.ceil(len(features_to_examine)/2)
f, axes = plt.subplots(plot_rows,2, figsize=(8,15))
sns.despine(left=True)
for i,col in enumerate(features_to_examine):
row_position = math.floor(i/2)
col_position = i%2
data_notnull = data[pd.notnull(data[col])] # exclude NA values from plot
sns.distplot(data_notnull[col], ax=axes[row_position, col_position],kde=False)
axes[row_position, col_position].set_title('{}'.format(col))
plt.tight_layout()
plt.show()
data_notnull = data[pd.notnull(data['ln_income'])]
p=sns.distplot(data_notnull['ln_income'],kde=False)
p.set_title('ln med income')
# ln med income is not more normal.. use med income instead.
"""
Explanation: Look at distributions
Since rent has a more or less logarithmic distribution, use ln_rent instead
End of explanation
"""
# correlation heatmap
corrmat=data[features_to_examine].corr()
corrmat.head()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True)
f.tight_layout()
"""
Explanation: look at correlations
End of explanation
"""
print(data.columns)
#'pct_amer_native','pct_alaska_native',
x_cols = ['bedrooms','bathrooms', 'sqft','age_of_head_med', 'income_med','pct_white', 'pct_black', 'pct_any_native', 'pct_asian', 'pct_pacific',
'pct_other_race', 'pct_mixed_race', 'pct_mover', 'pct_owner', 'avg_hh_size', 'cars_per_hh']
y_col = 'ln_rent'
print(len(data))
# exclude missing values
data_notnull= data[(pd.notnull(data[x_cols])).all(axis=1)]
data_notnull= data_notnull[(pd.notnull(data_notnull[y_col]))]
print('using {} rows of {} total'.format(len(data_notnull),len(data)))
data_notnull.head()
data_notnull.to_csv(r'..\data\sfbay_listings_03172017_notnull.csv',index=False)
"""
Explanation: The correlations appear as expected, except for cars_per_hh. Maybe this is because cars_per_hh reflects the size of the household more than income. Might want to try cars per adult instead.
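As a quick sketch of that idea, assuming adults can be approximated as total persons minus children (both columns exist in this dataset), a hypothetical cars_per_adult column could be derived like this:

```python
import pandas as pd

def add_cars_per_adult(df):
    # adults approximated as persons minus children; floor at 1 to avoid division by zero
    adults = (df['persons_tot'] - df['children_tot']).clip(lower=1)
    out = df.copy()
    out['cars_per_adult'] = out['cars_tot'] / adults
    return out

# toy rows, not real listings data
demo = pd.DataFrame({'persons_tot': [4.0, 2.0], 'children_tot': [2.0, 0.0], 'cars_tot': [2.0, 1.0]})
print(add_cars_per_adult(demo)['cars_per_adult'].tolist())  # [1.0, 0.5]
```

The function name and the floor-at-one rule are illustrative choices, not part of the original analysis.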
End of explanation
"""
from sklearn import linear_model
# sklearn.cross_validation was removed; train_test_split now lives in model_selection
from sklearn.model_selection import train_test_split
# create training and testing datasets.
# this creates a test set that is 30% of total obs.
X_train, X_test, y_train, y_test = train_test_split(data_notnull[x_cols],data_notnull[y_col], test_size = .3, random_state = 201)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# Intercept
print('Intercept:', regr.intercept_)
# The coefficients
print('Coefficients:')
pd.Series(regr.coef_, index=x_cols)
# See mean square error, using test data
print("Mean squared error: %.2f" % np.mean((regr.predict(X_test) - y_test) ** 2))
print("RMSE:", np.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
# Plot predicted values vs. observed
plt.scatter(regr.predict(X_train),y_train, color='blue',s=1, alpha=.5)
plt.show()
# plot residuals vs predicted values
plt.scatter(regr.predict(X_train), regr.predict(X_train)- y_train, color='blue',s=1, alpha=.5)
plt.scatter(regr.predict(X_test), regr.predict(X_test)- y_test, color='green',s=1, alpha=.5)
plt.show()
"""
Explanation: Comparison of models
Try a linear model
We'll start with a linear model to use as the baseline.
End of explanation
"""
print("Training set. Mean squared error: %.5f" % np.mean((regr.predict(X_train) - y_train) ** 2), '| Variance score: %.5f' % regr.score(X_train, y_train))
print("Test set. Mean squared error: %.5f" % np.mean((regr.predict(X_test) - y_test) ** 2), '| Variance score: %.5f' % regr.score(X_test, y_test))
"""
Explanation: The residuals look pretty normally distributed.
I wonder if inclusion of all these race variables is leading to overfitting. If so, we'd have small error on training set and large error on test set.
End of explanation
"""
from sklearn.linear_model import Ridge
# try a range of different regularization terms.
for a in [10,1,0.1,.01,.001,.00001]:
ridgereg = Ridge(alpha=a)
ridgereg.fit(X_train, y_train)
print('\n alpha:',a)
print("Mean squared error: %.5f" % np.mean((ridgereg.predict(X_test) - y_test) ** 2),'| Variance score: %.5f' % ridgereg.score(X_test, y_test))
# Intercept
print('Intercept:', ridgereg.intercept_)
# The coefficients
print('Coefficients:')
pd.Series(ridgereg.coef_, index=x_cols)
"""
Explanation: Try Ridge Regression (linear regression with regularization )
Since the training error and test error are about the same, and since we're using few features, overfitting probably isn't a problem. If it were a problem, we would want to try a regression with regularization.
Let's try it just for the sake of demonstration.
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
def RMSE(y_actual, y_predicted):
return np.sqrt(mean_squared_error(y_actual, y_predicted))
def cross_val_rf(X, y,max_f='auto', n_trees = 50, cv_method='kfold', k=5):
"""Estimate a random forest model using cross-validation and return the average error across the folds.
Args:
X (DataFrame): features data
y (Series): target data
max_f (str or int): how to select max features to consider for the best split.
If “auto”, then max_features=n_features.
If “sqrt”, then max_features=sqrt(n_features)
If “log2”, then max_features=log2(n_features)
If int, then consider max_features features at each split
n_trees (number of trees to build)
cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
k (int): number of folds (default=5)
Returns:
float: mean error (RMSE) across all training/test sets.
"""
if cv_method == 'kfold':
kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility.
E = np.ones(k) # this array will hold the errors.
i=0
for train, test in kf.split(X, y):
train_data_x = X.iloc[train]
train_data_y = y.iloc[train]
test_data_x = X.iloc[test]
test_data_y = y.iloc[test]
# n_estimators is number of trees to build.
# max_features = 'auto' means the max_features = n_features. This is a parameter we should tune.
random_forest = RandomForestRegressor(n_estimators=n_trees, max_features=max_f, criterion='mse', max_depth=None)
random_forest.fit(train_data_x,train_data_y)
predict_y=random_forest.predict(test_data_x)
E[i] = RMSE(test_data_y, predict_y)
i+=1
return np.mean(E)
def optimize_rf(df_X, df_y, max_n_trees=100, n_step = 20, cv_method='kfold', k=5):
"""Optimize hyperparameters for a random forest regressor.
Args:
df_X (DataFrame): features data
df_y (Series): target data
max_n_trees (int): max number of trees to generate
n_step (int): intervals to use for max_n_trees
cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
k (int): number of folds (default=5)
"""
max_features_methods = ['auto','sqrt','log2'] # methods of defining max_features to try.
# create a place to store the results, for easy plotting later.
results = pd.DataFrame(columns=max_features_methods, index=[x for x in range(10,max_n_trees+n_step,n_step)])
for m in max_features_methods:
print('max_features:',m)
for n in results.index:
error = cross_val_rf(df_X, df_y,max_f=m, n_trees=n)
print('n_trees:',n,' error:',error)
            results.loc[n,m] = error  # .ix was removed from pandas; use .loc
return results
# data to use - exclude nulls
df_X = data_notnull[x_cols]
df_y = data_notnull[y_col]
print(df_X.shape, df_y.shape)
#df_all = pd.concat([data_notnull[x_cols],data_notnull[y_col]], axis=1)
#df_all.shape
# basic model to make sure it works
random_forest = RandomForestRegressor(n_estimators=10, criterion='mse', max_depth=None)
random_forest.fit(df_X,df_y)
y_predict = random_forest.predict(df_X)
RMSE(df_y,y_predict)
"""
Explanation: As expected, Ridge regression doesn't help much.
The best way to improve the model at this point is probably to add more features.
Random Forest
End of explanation
"""
# without parameter tuning
cross_val_rf(df_X,df_y)
# tune the parameters
rf_results = optimize_rf(df_X,df_y, max_n_trees = 100, n_step = 20) # this is sufficient; very little improvement after n_trees=100.
#rf_results2 = optimize_rf(df_X,df_y, max_n_trees = 500, n_step=100)
rf_results
ax = rf_results.plot()
ax.set_xlabel('number of trees')
ax.set_ylabel('RMSE')
#rf_results2.plot()
"""
Explanation: We can use k-fold validation if we believe the samples are independently and identically distributed. That's probably fine right now because we have only 1.5 months of data, but later we may have some time-dependent processes in these timeseries data. If we do use k-fold, I think we should shuffle the samples, because they do not come in a random sequence.
End of explanation
"""
random_forest = RandomForestRegressor(n_estimators=100, max_features='sqrt', criterion='mse', max_depth=None)
random_forest.fit(df_X,df_y)
predict_y=random_forest.predict(df_X)
"""
Explanation: Using m=sqrt(n_features) and log2(n_features) gives similar performance, and a slight improvement over m = n_features. After about 100 trees the error levels off. One of the nice things about random forest is that using additional trees doesn't lead to overfitting, so we could use more, but it's not necessary. Now we can fit the model using n_trees = 100 and m = sqrt.
End of explanation
"""
# plot the importances
rf_o = pd.DataFrame({'features':x_cols,'importance':random_forest.feature_importances_})
rf_o= rf_o.sort_values(by='importance',ascending=False)
plt.figure(1,figsize=(12, 6))
plt.xticks(range(len(rf_o)), rf_o.features,rotation=45)
plt.plot(range(len(rf_o)),rf_o.importance,"o")
plt.title('Feature importances')
plt.show()
"""
Explanation: The 'importance' score provides an ordered qualitative ranking of the importance of each feature. It is calculated from the improvement in MSE provided by each feature when it is used to split the tree.
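As a minimal sketch of that behavior on toy data (hypothetical arrays, not the listings data), a target driven by only the first feature hands nearly all of the importance to that feature:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X_demo = rng.rand(200, 2)                         # two candidate features
y_demo = 10 * X_demo[:, 0] + 0.1 * rng.rand(200)  # only feature 0 drives the target

rf_demo = RandomForestRegressor(n_estimators=20, random_state=0).fit(X_demo, y_demo)
print(rf_demo.feature_importances_)  # feature 0 dominates; the scores sum to 1
```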
End of explanation
"""
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)
"""
Explanation: It's not surprising sqft is the most important predictor, although it is strange that cars_per_hh is the second most important. I would have expected income to be higher in the list.
If we don't think the samples are i.i.d., it's better to use time series CV.
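A minimal sketch of how those splits behave on toy time-ordered data: each test fold lies strictly after its training fold, so no future observations leak into training.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X_time = np.arange(10).reshape(-1, 1)  # 10 samples in chronological order
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X_time):
    # training indices always precede the test indices
    assert train_idx.max() < test_idx.min()
    print(train_idx.tolist(), '->', test_idx.tolist())
```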
End of explanation
"""
from sklearn.ensemble import GradientBoostingRegressor
def cross_val_gb(X,y,cv_method='kfold',k=5, **params):
"""Estimate gradient boosting regressor using cross validation.
Args:
X (DataFrame): features data
y (Series): target data
cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
k (int): number of folds (default=5)
**params: keyword arguments for regressor
Returns:
float: mean error (RMSE) across all training/test sets.
"""
if cv_method == 'kfold':
kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility.
E = np.ones(k) # this array will hold the errors.
i=0
for train, test in kf.split(X, y):
train_data_x = X.iloc[train]
train_data_y = y.iloc[train]
test_data_x = X.iloc[test]
test_data_y = y.iloc[test]
# n_estimators is number of trees to build.
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(train_data_x,train_data_y)
predict_y=grad_boost.predict(test_data_x)
E[i] = RMSE(test_data_y, predict_y)
i+=1
return np.mean(E)
params = {'n_estimators':100,
'learning_rate':0.1,
'max_depth':1,
'min_samples_leaf':4
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
# fit on the training split only, so the held-out test RMSE below is meaningful
grad_boost.fit(X_train, y_train)
RMSE(y_test, grad_boost.predict(X_test))
n_trees = 100
l_rate = 0.1
max_d = 1
# pass the tuning values as keyword arguments; passed positionally they
# would be taken as cross_val_gb's cv_method and k parameters
cross_val_gb(df_X, df_y, n_estimators=n_trees, learning_rate=l_rate, max_depth=max_d)
"""
Explanation: Try Boosted Forest
End of explanation
"""
from sklearn.model_selection import GridSearchCV
param_grid = {'learning_rate':[.1, .05, .02, .01],
'max_depth':[2,4,6],
'min_samples_leaf': [3,5,9,17],
'max_features': [1, .3, .1]
}
est= GradientBoostingRegressor(n_estimators = 1000)
gs_cv = GridSearchCV(est,param_grid).fit(df_X,df_y)
print(gs_cv.best_params_)
print(gs_cv.best_score_)
# best parameters
params = {'n_estimators':1000,
'learning_rate':0.05,
'max_depth':6,
'min_samples_leaf':3
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
# fit on the training split only, so the held-out test RMSE below is meaningful
grad_boost.fit(X_train, y_train)
RMSE(y_test, grad_boost.predict(X_test))
"""
Explanation: tune parameters
This time we'll use Grid Search in scikit-learn. This conducts an exhaustive search through the given parameters to find the best for the given estimator.
End of explanation
"""
# plot the importances
gb_o = pd.DataFrame({'features':x_cols,'importance':grad_boost.feature_importances_})
gb_o= gb_o.sort_values(by='importance',ascending=False)
plt.figure(1,figsize=(12, 6))
plt.xticks(range(len(gb_o)), gb_o.features,rotation=45)
plt.plot(range(len(gb_o)),gb_o.importance,"o")
plt.title('Feature importances')
plt.show()
"""
Explanation: Wow, that's a big improvement on the random forest model!
End of explanation
"""
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence
df_X.columns
features = [2, 15, 4, 12,(4,2), (4,15)]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (9,6))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
"""
Explanation: Let's use partial_dependence to look at feature interactions. Look at the four most important features.
End of explanation
"""
tjwei/HackNTU_Data_2017 | Week01/01-Numpy.ipynb | mit

# opening moves
import numpy as np
"""
Explanation: Introduction to Numpy
End of explanation
"""
np.array([1,2,3,4])
x = _
y = np.array([[1.,2,3],[4,5,6]])
y
"""
Explanation: Creating an ndarray
End of explanation
"""
x.shape
y.shape
x.dtype
y.dtype
"""
Explanation: The first things to look at for an ndarray: shape, dtype
End of explanation
"""
# import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# plot
plt.plot(x, 'x');
"""
Explanation: Sometimes, it helps to look at a picture
End of explanation
"""
# create zero arrays
np.zeros_like(y)
np.zeros((10,10))
# much like range
x = np.arange(0, 10, 0.1)
# random numbers
y = np.random.uniform(-1,1, size=x.shape)
plt.plot(x, y)
"""
Explanation: There are many other ways to create arrays
End of explanation
"""
x = np.linspace(0, 2* np.pi, 1000)
plt.plot(x, np.sin(x))
"""
Explanation: This is a pile of data
* What information does the data contain?
* What constraints does the data have?
* What do those constraints mean? What benefits do they bring?
* What similar things have you seen before?
* What can this be applied to?
* How can it be used (what computations)?
The simplest kind of computation is element-wise
see also np.vectorize
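A small sketch of element-wise computation, including np.vectorize for wrapping a plain scalar function:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
print(v * 2 + 1)  # element-wise arithmetic: [3. 5. 7.]

# np.vectorize makes a scalar function apply element-wise
step = np.vectorize(lambda s: 1 if s > 2 else 0)
print(step(v))  # [0 0 1]
```

Note that np.vectorize is a convenience, not a performance tool: it still loops in Python under the hood.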
End of explanation
"""
# you can run the reference example with %run -i
%run -i q0.py
# or load the reference example to inspect it
#%load q0.py
"""
Explanation: Q0:
Plot the graph of $y=x^2+1$ or some other function
Use
python
plt.plot?
to see what other parameters plot has to play with
End of explanation
"""
# reference answer
#%load q1.py
"""
Explanation: Q1:
Try it with an image.
Use
```python
from PIL import Image
# read in a PIL Image (this image is from openclipart, CC0)
img = Image.open('img/Green-Rolling-Hills-Landscape-800px.png')
# turn the image into an ndarray
img_array = np.array(img)
# turn an ndarray back into a PIL Image
Image.fromarray(img_array)
```
Look at the contents of this image, its dtype and shape
End of explanation
"""
a = np.arange(30)
a
a[5]
a[3:7]
# list the odd-indexed entries
a[1::2]
# indexing can also be used to assign values
a[1::2] = -1
a
# or
a[1::2] = -a[::2]-1
a
"""
Explanation: Indexing
You can index an ndarray much like a list
End of explanation
"""
%run -i q2.py
#%load q2.py
"""
Explanation: Q2
Given
python
x = np.arange(30)
a = np.arange(30)
a[1::2] = -a[1::2]
plot the figure below
End of explanation
"""
b = np.array([[1,2,3], [4,5,6], [7,8,9]])
b
b[1][2]
b[1,2]
b[1]
"""
Explanation: This works for multi-dimensional ndarrays too
End of explanation
"""
b = np.random.randint(0,99, size=(5,10))
b
"""
Explanation: Q3
Try out the various cases yourself
for example
python
b = np.random.randint(0,99, size=(10,10))
b[::2, 2]
Fancy indexing
End of explanation
"""
b[[1,3]]
b[(1,3)]
b[[1,2], [3,4]]
b[[(1,2),(3,4)]]
b[[True, False, False, True, False]]
"""
Explanation: Try the expressions below
and think about what is happening (what is numpy thinking?)
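A small sketch of the key distinction, using a fresh array so the values are predictable: a list selects rows, while a tuple is read as a multi-dimensional index.

```python
import numpy as np

c = np.arange(25).reshape(5, 5)
print(c[[1, 3]])          # a list selects rows 1 and 3
print(c[(1, 3)])          # a tuple means c[1, 3], a single element: 8
print(c[[1, 2], [3, 4]])  # paired index arrays pick elements (1,3) and (2,4): [8 14]
print(c[np.array([True, False, False, True, False])])  # a boolean mask keeps rows 0 and 3
```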
End of explanation
"""
# reference example
%run -i q4.py
"""
Explanation: Q4
Change every even number in b to -1
End of explanation
"""
# remember this from earlier
from PIL import Image
img = Image.open('img/Green-Rolling-Hills-Landscape-800px.png')
img_array = np.array(img)
Image.fromarray(img_array)
# a function for displaying images
from IPython.display import display
def show(img_array):
display(Image.fromarray(img_array))
"""
Explanation: Practicing with images
End of explanation
"""
# shrink the image to half its size
%run -i q_half.py
# enlarge the image
%run -i q_scale2.py
# flip the image upside down
show(img_array[::-1])
%run -i q_paste.py
%run -i q_grayscale.py
"""
Explanation: Q
Shrink the image to half its size
Crop out a small piece from the center
Flip the image upside down
Mirror it left-to-right
Remove the green channel
Enlarge the image to twice the size
Paste another image into the big one
python
from urllib.request import urlopen
url = "https://raw.githubusercontent.com/playcanvas/engine/master/examples/images/animation.png"
simg = Image.open(urlopen(url))
Swap the red and green channels
Convert the image to grayscale, using Y=0.299R+0.587G+0.114B
What difficulties do you run into? How can you solve them?
End of explanation
"""
# draw a circle with a loop
%run -i q_slow_circle.py
# draw a circle with fancy indexing
%run -i q_fast_circle.py
"""
Explanation: Q
Cut out a circle? centered at (300,300), radius 100
Rotate ninety degrees? Swap x and y?
End of explanation
"""
# we can also blur the image
a = img_array.astype(float)
for i in range(10):
a[1:,1:] = (a[1:,1:]+a[:-1,1:]+a[1:,:-1]+a[:-1,:-1])/4
show(a.astype('uint8'))
# find the edges
a = img_array.astype(float)
a = a @ [0.299, 0.587, 0.114, 0]
a = np.abs((a[1:]-a[:-1]))*2
show(a.astype('uint8'))
"""
Explanation: Other uses of indexing
End of explanation
"""
# an application of reshaping
R,G,B,A = img_array.reshape(-1,4).T
plt.hist((R,G,B,A), color="rgby");
"""
Explanation: Reshaping
Use .flatten to flatten the array and see how the data is stored in the computer
Look at
.reshape, .T, np.rot90, .swapaxes, .rollaxis
then redo the exercises above
End of explanation
"""
# example
show(np.hstack([img_array, img_array2]))
# example
np.concatenate([img_array, img_array2], axis=2).shape
"""
Explanation: Stacking arrays together
Look at np.vstack, np.hstack, np.concatenate and then try them out
End of explanation
"""
np.max([1,2,3,4])
np.sum([1,2,3,4])
np.mean([1,2,3,4])
np.min([1,2,3,4])
"""
Explanation: Functions that act over a whole array or axis
End of explanation
"""
x_mean = img_array.astype(float).min(axis=0, keepdims=True)
print(x_mean.dtype, x_mean.shape)
y_mean = img_array.astype(float).min(axis=1, keepdims=True)
print(y_mean.dtype, y_mean.shape)
# automatic broadcasting
xy_combined = ((x_mean+y_mean)/2).astype('uint8')
show(xy_combined)
"""
Explanation: Working along different axes: the horizontal mean, combined with the vertical mean
End of explanation
"""
# = 1*4 + 2*5 + 3*6
np.dot([1,2,3], [4,5,6])
u=np.array([1,2,3])
v=np.array([4,5,6])
print( u@v )
print( (u*v).sum() )
"""
Explanation: Tensor multiplication
Start with the dot product
End of explanation
"""
A=np.random.randint(0,10, size=(5,3))
A
B=np.random.randint(0,10, size=(3,7))
B
A.dot(B)
"""
Explanation: Matrix multiplication
If you have forgotten what matrix multiplication is, see http://matrixmultiplication.xyz/
or http://eli.thegreenplace.net/2015/visualizing-matrix-multiplication-as-a-linear-combination/
Matrix multiplication can be viewed as:
* inner products (over the shared axis) of all combinations (of the other axes)
* a linear combination of several column vectors
* substituting into a system of linear equations A1-矩陣與基本列運算.ipynb
* understood through numpy
python
np.sum(a[:,:, np.newaxis] * b[np.newaxis, : , :], axis=1)
dot(a, b)[i,k] = sum(a[i,:] * b[:, k])
Higher dimensions
How does this generalize?
* tensordot, tensor contraction; when a.shape=(3,4,5), b.shape=(4,5,6), axis = 2 is equivalent to
python
np.sum(a[..., np.newaxis] * b[np.newaxis, ...], axis=(1, 2))
tensordot(a,b)[i,k]=sum(a[i, ...]* b[..., k])
https://en.wikipedia.org/wiki/Tensor_contraction
dot
python
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
np.tensordot(a,b, axes=(-1,-2))
matmul treats the last two indices as the matrix
python
a=np.random.random(size=(3,4,5))
b=np.random.random(size=(3,5,7))
(a @ b).shape
np.sum(a[..., np.newaxis] * np.moveaxis(b[..., np.newaxis], -1,-3), axis=-2)
einsum https://en.wikipedia.org/wiki/Einstein_notation
python
np.einsum('ii', a) # trace(a)
np.einsum('ii->i', a) #diag(a)
np.einsum('ijk,jkl', a, b) # tensordot(a,b)
np.einsum('ijk,ikl->ijl', a,b ) # matmul(a,b)
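A quick numerical check of two of the identities above (random arrays with the same shapes as in the text):

```python
import numpy as np

a = np.random.random(size=(3, 4, 5))
b = np.random.random(size=(3, 5, 7))

# the einsum spelling of matmul agrees with @
assert np.allclose(a @ b, np.einsum('ijk,ikl->ijl', a, b))

# the broadcast-and-sum expression above also reproduces @
manual = np.sum(a[..., np.newaxis] * np.moveaxis(b[..., np.newaxis], -1, -3), axis=-2)
assert np.allclose(a @ b, manual)
print((a @ b).shape)  # (3, 4, 7)
```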
End of explanation
"""
Who8MyLunch/ipynb_widget_canvas | notebooks/03 - Different Ways to Display an Image.ipynb | mit

from __future__ import print_function, unicode_literals, division, absolute_import
import io
import IPython
from ipywidgets import widgets
import PIL.Image
from widget_canvas import CanvasImage
from widget_canvas.image import read
"""
Explanation: Image Display Examples
End of explanation
"""
data_image = read('images/Whippet.jpg')
data_image.shape
"""
Explanation: Load some image data
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(data_image)
plt.tight_layout()
"""
Explanation: Matplotlib imshow()
Matplotlib is a great high-quality data display tool used by lots of people for a long time. It has long been my first choice for interactive data exploration on my PC when I want a native GUI framework. But when I use the IPython Notebook I want my interactive display tools to live entirely in the world of HTML and JavaScript. Static image display works fine enough (see the example below), but fully-interactive displays are still a work in progress. Ultimately I need compatibility with IPython widgets and Matplotlib is not there yet.
End of explanation
"""
def compress_to_bytes(data, fmt):
"""
Helper function to compress image data via PIL/Pillow.
"""
buff = io.BytesIO()
img = PIL.Image.fromarray(data)
img.save(buff, format=fmt)
return buff.getvalue()
# Compress the image data.
fmt = 'png'
data_comp = compress_to_bytes(data_image, fmt)
# Display first 100 bytes of compressed data just for fun.
data_comp[:100]
# Built-in IPython image widget.
wid_builtin = widgets.Image(value=data_comp)
wid_builtin.border_color = 'black'
wid_builtin.border_width = 2
wid_builtin
# At one point during development the above image was stretched out to the full width of the containing cell.
# Not sure why. The two lines below are meant to address that problem.
wid_builtin.width = data_image.shape[1]
wid_builtin.height = data_image.shape[0]
"""
Explanation: IPython's Built-in Image Widget
The IPython built-in image widget accepts as input a string of byte data representing an already-compressed image. The compressed image data is synchronized from the Python backend to the Notebook's Javascript frontend and copied into the widget's image element for display.
The upside of this display widget is simplicity of implementation. The downside is the depth of understanding and complexity of implementation required of the user. I want an easy-to-use image display widget that readily accepts Numpy arrays as input.
End of explanation
"""
import base64
data_encode = base64.b64encode(data_comp)
# Display first 100 bytes of compressed and encoded data just for fun.
data_encode[:100]
# Compare sizes.
print('Original Image: {:7d} bytes (raw)'.format(data_image.size))
print('Compressed: {:7d} bytes ({})'.format(len(data_comp), fmt))
print('Compressed & Encoded:{:7d} bytes (base64)'.format(len(data_encode)))
# The decoding step here is necessary since we need to interpret byte data as text.
# See this link for a nice explanation:
# http://stackoverflow.com/questions/14010551/how-to-convert-between-bytes-and-strings-in-python-3
enc = 'utf-8'
data_url = 'data:image/{:s};charset={};base64,{:s}'.format(fmt, enc, data_encode.decode(encoding=enc))
"""
Explanation: Canvas Element with Basic HTML and JavaScript
The HTML5 Canvas Element is a great tool for displaying images and drawing artwork onto a bitmap surface in the browser. It has built-in support for mouse events plus size and rotation transforms.
The example below uses HTML and JavaScript to display an image to a canvas element. But since the image originates from a local data file and not a remote URL, a special URL form must be used to encode the image data into something compatible. See mozilla and wikipedia for details.
The above example already compressed the original image data into a sequence of bytes. Now those bytes need to be encoded into a form that will survive delivery over the internet.
xml
data:[<MIME-type>][;charset=<encoding>][;base64],<data>
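A minimal stdlib round trip (using sample bytes rather than the image above) showing that base64 encoding is lossless and produces plain text safe to embed in a URL:

```python
import base64

# Encode arbitrary bytes as URL-safe text, then recover them exactly.
payload = b'\x89PNG\r\n\x1a\n'      # the 8 PNG signature bytes, as sample data
encoded = base64.b64encode(payload)  # ASCII text, safe inside a data URI
decoded = base64.b64decode(encoded)
print(encoded.decode('ascii'), decoded == payload)
```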
End of explanation
"""
doc_html = \
"""
<html>
<head></head>
<body>
<canvas id='hello_example' style='border: solid black 2px'/>
</body>
</html>
"""
template_js = \
"""
// Embedded data URI goes right here.
var url = "{}"
// Get the canvas element plus corresponding drawing context
var canvas = document.getElementById('hello_example');
var context = canvas.getContext('2d');
// Create a hidden <img> element to manage incoming data.
var img = new Image();
// Add new-data event handler to the hidden <img> element.
img.onload = function () {{
// This function will be called when new image data has finished loading
// into the <img> element. This new data will be the source for drawing
// onto the Canvas.
// Set canvas geometry.
canvas.width = img.width
canvas.style.width = img.width + 'px'
canvas.height = img.height
canvas.style.height = img.height + 'px'
// Draw new image data onto the Canvas.
context.drawImage(img, 0, 0);
}}
// Assign image URL.
img.src = url
"""
doc_js = template_js.format(data_url)
# Display the HTML via IPython display hook.
IPython.display.display_html(doc_html, raw=True)
# Update HTML canvas element with some JavaScript.
IPython.display.display_javascript(doc_js, raw=True)
"""
Explanation: So that takes care of getting the data ready. Next we use some HTML and JavaScript to display the image right here in the notebook.
End of explanation
"""
wid_canvas = CanvasImage(data_image)
wid_canvas.border_color = 'black'
wid_canvas.border_width = 2
wid_canvas
"""
Explanation: My New Canvas Widget
My new canvas widget is simpler to use than IPython's built-in image display widget since it takes a Numpy array as input. Behind the scenes it takes care of compressing and encoding the data and then feeding it into the canvas element in a manner similar to the example just above.
End of explanation
"""
data_image_2 = read('images/Doberman.jpg')
wid_canvas.data = data_image_2
"""
Explanation: Modifying the image data in place is as easy as with any other notebook widget.
End of explanation
"""
|
eTomate/ML-TextLearning-Intro | TextLearningSkLearn.ipynb | mit | %%capture
!pip install scikit-learn scipy numpy pandas matplotlib
import pandas as pd
import numpy as np
import math
%matplotlib inline
"""
Explanation: Text Learning with sklearn
This notebook will give you a short overview over text learning with skLearn.
At first we will install and import the required python packages.
Required Packages and import
End of explanation
"""
data = pd.read_csv('spam.csv', encoding='latin-1')
# print the dimensions of the dataframe
print(data.shape)
data.head()
"""
Explanation: Scikit-learn is a machine learning package for Python built on top of SciPy, NumPy and matplotlib. It gives access to a huge set of different machine learning techniques and a lot of preprocessing methods.
SciPy and NumPy are libraries that add high-performance mathematical operations to Python. Thanks to NumPy, Python can perform a wide range of mathematical operations on matrices and vectors. Since Python is an interpreted language, most of these operations are implemented in C.
Matplotlib adds a wide selection of different plotting methods.
Pandas adds the dataframe structure to Python. Some people might recognize the similarities to R's dataframe. You can easily read data from different sources into a dataframe, analyse it there, and use a wide selection of manipulation methods on the data.
The Dataset
I chose a dataset of tagged SMS messages that were collected for SMS spam research. It contains a set of English SMS messages, each tagged as either harmless or spam.
So let's use the handy pandas method read_csv to import the CSV file to a dataframe.
End of explanation
"""
data.drop(data.columns[[2, 3, 4]], axis=1, inplace=True)
data.head()
"""
Explanation: As we can see, the data has 5 columns for 5572 SMS messages. We won't need the empty columns Unnamed: 2 - Unnamed: 4, so we're going to drop them.
End of explanation
"""
data.columns = ['class', 'message']
data['class'] = data['class'].map({'ham': 0, 'spam': 1})
print('Harmless messages in the dataset: {}\nSpam messages in the dataset: {}'
.format(len(data[data['class'] == 0]),
len(data[data['class'] == 1])
)
)
"""
Explanation: Now we only have the interesting columns selected. v1 needs some transforming too, so that we can work with numerical class labels. v2 contains the text, which we're going to use for training our classifiers. Let's rename the columns to something useful and transform v1 to numerical data before we begin with the preprocessing of the data.
End of explanation
"""
X = data["message"]
y = data["class"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
"""
Explanation: We have a total of 4825 harmless messages versus 747 spam messages.
Preprocessing
Currently we only have our 2 features. One of them was already converted to numeric data. Since we can't apply our algorithms to text data, we need to convert our text into a numerical representation. This is where the preprocessing capabilities of sklearn come in handy.
Let's look at some methods we can use here. But the first important step is splitting the data into train and test sets.
Train/Test split
We can split our data into separate train and test sets. This will be represented as 4 different variables.
The input data X will be split into 2 sets for training and testing of our model. The same idea applies to y.
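As a hedged illustration on toy data (not our SMS dataset): scikit-learn's train_test_split defaults to a 25% test fraction, so a 100-row input yields a 75/25 split.

```python
# Toy split to show the default proportions of train_test_split.
from sklearn.model_selection import train_test_split
import numpy as np

X_demo = np.arange(100).reshape(-1, 1)
y_demo = np.arange(100)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, random_state=1)
print(len(X_tr), len(X_te))
```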
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer()
# fit the vectorizer to our training data and vectorize it
X_train_cnt = count_vectorizer.fit_transform(X_train)
# since the count vectorizer is already fitted, we now only need to transform the test data
X_test_cnt = count_vectorizer.transform(X_test)
print(X_train_cnt[5])
"""
Explanation: Bag of Words representation
The so-called Bag of Words representation is the strategy used to convert text data into numerical vectors. We're going to convert our text by using this strategy.
Occurrence count and Vectorizing
The first step is converting the text into a numerical representation. This can be extremely memory intensive, but thanks to sklearn we can use a sparse representation. Generally we distinguish between a sparse and a dense matrix/vector.
sparse is a representation of a vector where only the values and indexes of non-zero values are stored. For example a vector [1, 0, 0, 5, 0, 0, 9, 0] is stored as [(0, 1), (3, 5), (6, 9)].
dense is the opposite: all values are stored.
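A minimal illustration with SciPy (assumed to be available alongside scikit-learn) of how a sparse CSR matrix stores only the non-zero values and their indices:

```python
# Show the internal storage of a sparse CSR matrix.
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[1, 0, 0, 5, 0, 0, 9, 0]])
sparse = csr_matrix(dense)
print(sparse.data)     # only the non-zero values
print(sparse.indices)  # and the columns they live in
```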
End of explanation
"""
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
# fit the transformer and convert our training data
X_train_tfidf = tfidf_transformer.fit_transform(X_train_cnt)
# transform test data
X_test_tfidf = tfidf_transformer.transform(X_test_cnt)
print(X_train_tfidf[5])
"""
Explanation: Now we have converted our data into a numerical representation. Since there might be a lot of common words which hold little information, like the, it, a, etc., we need to transform the data further.
Tf-idf transformation
The Tf-idf transformer re-weights the raw counts: terms that appear in many documents (and therefore carry little information) are down-weighted, while rarer, more distinctive terms gain relative weight.
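A tiny illustration on hypothetical counts (not our SMS data): after the tf-idf transform, each row is L2-normalised and zero counts stay zero.

```python
# Hypothetical count matrix: rows = documents, columns = terms.
import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer

counts = np.array([[3, 0],
                   [2, 1],
                   [3, 2]])
tfidf = TfidfTransformer().fit_transform(counts).toarray()
print(tfidf)
```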
End of explanation
"""
|
SciTools/courses | course_content/cartopy_course/cartopy_intro.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
import cartopy.crs as ccrs
"""
Explanation: A First Look at Cartopy (for Iris)
Course aims and objectives
The aim of the cartopy course is to introduce the cartopy library and highlight of some of its features.
The learning outcomes of the cartopy course are as follows. By the end of the course, you will be able to:
understand the functionality provided by the cartopy.crs module,
explain the purpose and usage of the projection and transform keyword arguments, and
add geolocated data, grid lines and Natural Earth features to a cartopy map.
Introduction: maps and projections
Cartopy is a Python package that provides easy creation of maps, using matplotlib, for the analysis and visualisation of geospatial data.
In order to create a map with cartopy and matplotlib, we typically need to import pyplot from matplotlib and cartopy's crs (coordinate reference system) submodule. These are typically imported as follows:
End of explanation
"""
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
plt.show()
"""
Explanation: Cartopy's matplotlib interface is set up via the projection keyword when constructing a matplotlib Axes / SubAxes instance. The resulting axes instance has new methods, such as the coastlines() method, which are specific to drawing cartographic data:
End of explanation
"""
#
# EDIT for user code ...
#
"""
Explanation: A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html.
<div class="alert alert-block alert-warning">
<b><font color='brown'>Suggested Activity :</font></b> Try modifying the above code to plot a different projection.
</div>
End of explanation
"""
# %load solutions/cartopy_exercise_1
"""
Explanation: <b><font color="brown">SAMPLE SOLUTION:</font></b>
Un-comment and execute the following, to view a possible code solution.
Then run it ...
End of explanation
"""
# Make sure the figure is a decent size when plotted.
fig = plt.figure(figsize=(14, 7))
# Left plot.
ax1 = fig.add_subplot(1, 2, 1, projection=ccrs.PlateCarree())
ax1.coastlines()
# Right plot.
ax2 = fig.add_subplot(1, 2, 2, projection=ccrs.Orthographic())
ax2.coastlines()
# Show both subplots on the same figure.
plt.show()
"""
Explanation: Let's compare the Plate Carree projection to another projection from the projection list: the Orthographic projection. We'll do that by plotting two subplots next to each other at the same time:
End of explanation
"""
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
x0, y0 = -50, -30
x1, y1 = 10, 55
plt.plot([x0, x1], [y0, y1], linewidth=4)
plt.show()
"""
Explanation: Transforming data
To draw cartographic data, we use the standard matplotlib plotting routines with an additional transform keyword argument. The value of the transform argument should be the cartopy coordinate reference system of the data being plotted.
First let's plot a line on a PlateCarree projection.
End of explanation
"""
proj = ccrs.EquidistantConic()
ax = plt.axes(projection=proj)
ax.coastlines()
plt.plot([x0, x1], [y0, y1], linewidth=4)
plt.show()
"""
Explanation: Now let's try plotting the same line on an EquidistantConic projection.
End of explanation
"""
ax = plt.axes(projection=proj)
ax.coastlines()
plt.plot([x0, x1], [y0, y1], linewidth=4, transform=ccrs.PlateCarree())
plt.show()
"""
Explanation: The above plot is not what we intended.
We have set up the axes to be in the Equidistant Conic projection, but we have not told Cartopy that the coordinates of the line are "in PlateCarree".
To do this, we use the transform keyword in the plt.plot function :
End of explanation
"""
ax = plt.axes(projection=proj)
ax.coastlines()
ax.set_global()
plt.plot([x0, x1], [y0, y1], linewidth=4, transform=ccrs.PlateCarree())
plt.show()
"""
Explanation: Notice that the plotted line is bent: it is a straight line in the coordinate system it is defined in, so that makes it a curved line on this map.
Also note that, unless we specify a map extent, the map zooms to contain just the plotted data.
A very simple alternative to that is to plot the 'full map', by calling the set_global method on the Axes, as in this case :
End of explanation
"""
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/cartopy_exercise_2
"""
Explanation: <div class="alert alert-block alert-warning">
<b><font color='brown'>Suggested Activity :</font></b> Try re-plotting the "failed plot" above, but adding a "set_global" call to show the full map extent.
What does this tell you about what was actually being plotted in that case?
</div>
End of explanation
"""
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/cartopy_exercise_3
"""
Explanation: <div class="alert alert-block alert-warning">
<b><font color='brown'>Suggested Activity :</font></b> Try taking more control of the plot region,
using the <a href="https://scitools.org.uk/cartopy/docs/latest/matplotlib/geoaxes.html?highlight=set_extent#cartopy.mpl.geoaxes.GeoAxes.set_extent">GeoAxes.set_extent</a> method. <br>
(look this up!)
What is the _coordinate system_ of the coordinate values which you pass into this method?
</div>
End of explanation
"""
import cartopy.feature as cfeat
fig = plt.figure(figsize=(14, 7))
ax = plt.axes(projection=ccrs.Miller())
ax.coastlines('50m')
# ax.add_feature(cfeat.BORDERS, edgecolor='b')
political_bdrys = cfeat.NaturalEarthFeature(category='cultural',
name='admin_0_countries',
scale='50m')
ax.add_feature(political_bdrys,
edgecolor='b', facecolor='none',
linestyle='--', zorder=-1)
plt.show()
"""
Explanation: Adding features
We can add features from the Natural Earth database to maps we produce. Natural Earth datasets are downloaded and cached when they are first used, so you need an internet connection to use them, and you may encounter warnings when you first download them.
We add features to maps via the cartopy feature interface.
For example, let's add political borders to a map:
End of explanation
"""
ax = plt.axes(projection=ccrs.Mercator())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
plt.show()
"""
Explanation: We can add graticule lines and tick labels to the map using the gridlines method (this currently is limited to just a few coordinate reference systems):
End of explanation
"""
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LATITUDE_FORMATTER
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
gl.xlocator = mticker.FixedLocator([-180, -45, 0, 45, 180])
gl.yformatter = LATITUDE_FORMATTER
plt.show()
"""
Explanation: We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
x = np.linspace(337, 377, 25)
y = np.linspace(-18.7, 25.3, 35)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(np.deg2rad(y2d) * 4) + np.sin(np.deg2rad(x2d) * 4)
"""
Explanation: Cartopy cannot currently label all types of projection, though more work is intended on this functionality in the future.
Exercise
The following snippet of code produces coordinate arrays and some data in a rotated pole coordinate system. The coordinate system for the x and y values, which is similar to that found in some limited-area models of Europe, has a projection "north pole" at 193.0 degrees longitude and 41.0 degrees latitude.
End of explanation
"""
|
zhsun/neu-cs5700 | network_basics.ipynb | mit | s = 'Hello world!'
print(s)
print("length is", len(s))
us = 'Hello 世界!'
print(us)
print("length is", len(us))
"""
Explanation: String vs. Bytes
Text in Python 3 is always Unicode and is represented by the str type, and binary data is represented by the bytes type. They cannot be mixed.
Strings can be encoded to bytes, and bytes can be decoded back to strings.
End of explanation
"""
bs = s.encode('utf-8')
print(bs)
print("length is", len(bs))
bus = us.encode('utf-8')
print(bus)
print("length is", len(bus))
"""
Explanation: Now encode both strings to bytes.
End of explanation
"""
print(bs.decode('utf-8'))
print(bus.decode('utf-8'))
"""
Explanation: Decode back to strings.
End of explanation
"""
num = 258
print(num.to_bytes(2, "big"))
print(num.to_bytes(2, "little"))
print(num.to_bytes(4, "big"))
print(num.to_bytes(4, "little"))
"""
Explanation: Big Endian vs Little Endian
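Big-endian stores the most significant byte first; little-endian stores the least significant byte first. A round-trip sketch (an added example) showing that to_bytes and from_bytes invert each other when the same byte order is used on both sides:

```python
num = 258                                  # 0x0102
big = num.to_bytes(2, "big")               # b'\x01\x02' - high byte first
little = num.to_bytes(2, "little")         # b'\x02\x01' - low byte first
print(big, little)
print(int.from_bytes(big, "big"), int.from_bytes(little, "little"))
```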
End of explanation
"""
import struct
"""
Explanation: struct package
This module performs conversions between Python values and C structs represented as Python bytes objects.
End of explanation
"""
x = 256
print("Network endianness")
print(struct.pack('!h', x))
print("Little endian")
print(struct.pack('<h', x))
print("Big endian")
print(struct.pack('>h', x))
print("Native endianness")
print(struct.pack('=h', x))
"""
Explanation: struct.pack(fmt, v1, v2, …)
Return a bytes object containing the values v1, v2, … packed according to the format string fmt. The arguments must match the values required by the format exactly.
"!" means network endianness (big endian)
"<" means little endian
">" means big endian
"=" means native endianness
"h" means short integer (2 bytes)
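Format strings can also pack several fields at once. The sketch below uses a hypothetical record layout '!hi' (a 2-byte short followed by a 4-byte int in network order); with '!' there is no alignment padding, so the record is exactly 6 bytes:

```python
import struct

# Pack two fields into one binary record, then unpack them back out.
record = struct.pack('!hi', 256, 70000)
print(len(record), record)
print(struct.unpack('!hi', record))
```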
End of explanation
"""
bx = struct.pack('!h', x)
print(struct.unpack('!h', bx))
print(struct.unpack('<h', bx))
print(struct.unpack('!h', bx)[0])
print(struct.unpack('<h', bx)[0])
"""
Explanation: struct.unpack(fmt, buffer)
Unpack from the buffer buffer (presumably packed by pack(fmt, ...)) according to the format string fmt. The result is a tuple even if it contains exactly one item. The buffer’s size in bytes must match the size required by the format,
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex33-View Northeast Pacifc sea surface temperature based on an ensemble empirical mode decomposition.ipynb | mit | %matplotlib inline
import xarray as xr
from PyEMD import EEMD
import numpy as np
import pylab as plt
plt.rcParams['figure.figsize'] = (9,5)
"""
Explanation: View Northeast Pacific SST based on an Ensemble Empirical Mode Decomposition
The oscillation of sea surface temperature (SST) has substantial impacts on the global climate. For example, anomalously high SST near the equator (between 5°S and 5°N and the Peruvian coast) causes the El Niño phenomenon, while low SST in this area brings about the La Niña phenomenon, both of which exert considerable influence on temperature, precipitation and wind globally.
In this notebook, an adaptive and temporally local analysis method, the recently developed ensemble empirical mode decomposition (EEMD) method (Huang and Wu 2008; Wu and Huang 2009), is applied to study the oscillation of SST over the Northeast Pacific (40°–50°N, 150°–135°W). The EEMD is the most recent improvement of the EMD method (Huang et al. 1998; Huang and Wu 2008). The PyEMD package is used, which is a Python implementation of Empirical Mode Decomposition (EMD) and its variations. One of the most popular extensions is Ensemble Empirical Mode Decomposition (EEMD), which utilises an ensemble of noise-assisted executions. As a result of EMD one obtains a set of components that possess oscillatory features. In the case of the plain EMD algorithm, these are called Intrinsic Mode Functions (IMFs), as they are expected to have a single mode. In contrast, EEMD is unlikely to produce pure oscillations, as the effects of injected noise can propagate throughout the decomposition.
The SST data is extracted from the lastest version of Extended Reconstructed Sea Surface Temperature (ERSST) dataset, version5. It is a global monthly sea surface temperature dataset derived from the International Comprehensive Ocean–Atmosphere Dataset (ICOADS). Production of the ERSST is on a 2° × 2° grid. For more information see https://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v5.
1. Load all needed libraries
End of explanation
"""
ds = xr.open_dataset('data\sst.mnmean.v5.nc')
sst = ds.sst.sel(lat=slice(50, 40), lon=slice(190, 240), time=slice('1981-01-01','2015-12-31'))
#sst.mean(dim='time').plot()
"""
Explanation: 2. Load SST data
2.1 Load time series SST
Select the region (40°–50°N, 150°–135°W) and the period (1981-2015)
End of explanation
"""
sst_clm = sst.sel(time=slice('1981-01-01','2010-12-31')).groupby('time.month').mean(dim='time')
#sst_clm = sst.groupby('time.month').mean(dim='time')
"""
Explanation: 2.2 Calculate climatology between 1981-2010
End of explanation
"""
sst_anom = sst.groupby('time.month') - sst_clm
sst_anom_mean = sst_anom.mean(dim=('lon', 'lat'), skipna=True)
"""
Explanation: 2.3 Calculate SSTA
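The anomaly (SSTA) is each month's value minus that calendar month's climatological mean, which is what the xarray groupby subtraction above computes. A stand-alone pandas sketch of the same idea on synthetic numbers (not the SST data):

```python
# Monthly anomaly = value minus that calendar month's long-term mean.
import numpy as np
import pandas as pd

idx = pd.date_range('2000-01-01', periods=36, freq='MS')   # 3 years, monthly
ts = pd.Series(np.arange(36, dtype=float), index=idx)
clim = ts.groupby(ts.index.month).mean()                   # 12 monthly means
anom = ts - clim.loc[ts.index.month].to_numpy()
print(anom.iloc[[0, 12, 24]].to_numpy())   # year 1 below, year 2 on, year 3 above
```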
End of explanation
"""
S = sst_anom_mean.values
t = sst.time.values
# Assign EEMD to `eemd` variable
eemd = EEMD()
# Execute EEMD on S
eIMFs = eemd.eemd(S)
"""
Explanation: 3. Carry out EMD analysis
End of explanation
"""
nIMFs = eIMFs.shape[0]
plt.figure(figsize=(11,20))
plt.subplot(nIMFs+1, 1, 1)
# plot original data
plt.plot(t, S, 'r')
# plot IMFs
for n in range(nIMFs):
plt.subplot(nIMFs+1, 1, n+2)
plt.plot(t, eIMFs[n], 'g')
plt.ylabel("eIMF %i" %(n+1))
plt.locator_params(axis='y', nbins=5)
plt.xlabel("Time [s]")
"""
Explanation: 4. Visualize
4.1 Plot IMFs
End of explanation
"""
reconstructed = eIMFs.sum(axis=0)
plt.plot(t, reconstructed-S)
"""
Explanation: 4.2 Error of reconstruction
End of explanation
"""
|
metpy/MetPy | v0.11/_downloads/c93092487c8713b537d47b1774b1c063/unit_tutorial.ipynb | bsd-3-clause | import numpy as np
from metpy.units import units
"""
Explanation: Units Tutorial
Early in our scientific careers we all learn about the importance of paying
attention to units in our calculations. Unit conversions can still get the best
of us and have caused more than one major technical disaster, including the
crash and complete loss of the $327 million Mars Climate Orbiter.
In MetPy, we use the pint library and a custom unit registry to help prevent
unit mistakes in calculations. That means that every quantity you pass to MetPy
should have units attached, just like if you were doing the calculation on
paper!
In MetPy, units are attached by multiplying a value (integer, float, array,
etc.) by the unit. In this tutorial we'll show some examples of working with units and get
you on your way to utilizing the computation functions in MetPy.
End of explanation
"""
length = 10.4 * units.inches
width = 20 * units.meters
print(length, width)
"""
Explanation: Simple Calculation
Let's say we want to calculate the area of a rectangle. It so happens that
one of our colleagues measures their side of the rectangle in imperial units
and the other in metric units. No problem! First we need to attach units to
our measurements. For many units the easiest way is to find the unit as an
attribute of the unit registry:
End of explanation
"""
area = length * width
print(area)
"""
Explanation: Don't forget that you can use tab completion to see what units are available!
Just about every imaginable quantity is there, but if you find one that isn't,
we're happy to talk about adding it.
While it may seem like a lot of trouble, let's compute the area of a rectangle
defined by our length and width variables above. Without units attached, you'd
need to remember to perform a unit conversion before multiplying or you would
end up with an area in inch-meters and likely forget about it. With units
attached, the units are tracked for you.
End of explanation
"""
print(area.to('m^2'))
"""
Explanation: That's great, now we have an area, but it is still not in a very useful unit.
Units can be converted using the to() method. While you won't see square meters in
the units list, we can parse complex/compound units as strings:
End of explanation
"""
print(10 * units.degC - 5 * units.degC)
"""
Explanation: Temperature
Temperature units are actually relatively tricky (more like absolutely tricky as
you'll see). Temperature is a non-multiplicative unit - it belongs to a system
with a reference point. That means that not only is there a scaling factor, but
also an offset. This makes the math and unit book-keeping a little more complex.
Imagine adding 10 degrees Celsius to 100 degrees Celsius. Is the answer 110
degrees Celsius or 383.15 degrees Celsius (283.15 K + 373.15 K)? That's why
there are delta degrees units in the unit registry for offset units. For more
examples and explanation you can watch MetPy Monday #13:
https://www.youtube.com/watch?v=iveJCqxe3Z4.
Let's take a look at how this works and fails:
We would expect this to fail because we cannot add two offset units (and it does
fail as an "Ambiguous operation with offset unit").
10 * units.degC + 5 * units.degC
On the other hand, we can subtract two offset quantities and get a delta. A delta unit is
pint's way of representing a relative change in two offset units, indicating that this is
not an absolute value of 5 degrees Celsius, but a relative change of 5 degrees Celsius.
End of explanation
"""
print(25 * units.degC + 5 * units.delta_degF)
"""
Explanation: We can add a delta to an offset unit as well since it is a relative change.
End of explanation
"""
print(273 * units.kelvin + 10 * units.kelvin)
print(273 * units.kelvin - 10 * units.kelvin)
"""
Explanation: Absolute temperature scales like Kelvin and Rankine do not have an offset
and therefore can be used in addition/subtraction without the need for a
delta version of the unit.
End of explanation
"""
u = np.random.randint(0, 15, 10) * units('m/s')
v = np.random.randint(0, 15, 10) * units('meters/second')
print(u)
print(v)
"""
Explanation: Compound Units
We can create compound units for things like speed by parsing a string of
units. Abbreviations or full unit names are acceptable.
End of explanation
"""
|
AllenDowney/ProbablyOverthinkingIt | test_scenario_sim.ipynb | mit | from __future__ import print_function, division
from thinkbayes2 import Pmf
from random import random
def flip(p):
return random() < p
def run_single_simulation(func, iters=1000000):
pmf_t = Pmf([0.2, 0.4])
p = 0.1
s = 0.9
outcomes = Pmf()
post_t = Pmf()
for i in range(iters):
test, sick, t = func(p, s, pmf_t)
if test:
outcomes[sick] += 1
post_t[t] += 1
outcomes.Normalize()
post_t.Normalize()
return outcomes, post_t
"""
Explanation: Bayesian interpretation of medical tests
This notebook explores several problems related to interpreting the results of medical tests.
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
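Before simulating, it is worth writing down the closed-form Bayes'-rule answer we expect the simulations to approximate (an added cross-check, using the notebook's prevalence p, sensitivity s, and false-positive rate t):

```python
# Posterior probability of being sick given a positive test:
# P(sick | +) = p*s / (p*s + (1 - p)*t)
def posterior_sick(p, s, t):
    return p * s / (p * s + (1 - p) * t)

print(posterior_sick(0.1, 0.9, 0.2))   # with t = 0.2
print(posterior_sick(0.1, 0.9, 0.4))   # with t = 0.4
```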
End of explanation
"""
def generate_patient_A(p, s, pmf_t):
while True:
t = pmf_t.Random()
sick = flip(p)
test = flip(s) if sick else flip(t)
return test, sick, t
outcomes, post_t = run_single_simulation(generate_patient_A)
outcomes.Print()
post_t.Print()
"""
Explanation: Scenario A: Choose t for each patient, yield all patients regardless of test.
End of explanation
"""
def generate_patient_B(p, s, pmf_t):
t = pmf_t.Random()
while True:
sick = flip(p)
test = flip(s) if sick else flip(t)
return test, sick, t
outcomes, post_t = run_single_simulation(generate_patient_B)
outcomes.Print()
post_t.Print()
"""
Explanation: Scenario B: Choose t before generating patients, yield all patients regardless of test.
End of explanation
"""
def generate_patient_C(p, s, pmf_t):
while True:
t = pmf_t.Random()
sick = flip(p)
test = flip(s) if sick else flip(t)
if test:
return test, sick, t
outcomes, post_t = run_single_simulation(generate_patient_C)
outcomes.Print()
post_t.Print()
"""
Explanation: Scenario C: Choose t for each patient, only yield patients who test positive.
End of explanation
"""
def generate_patient_D(p, s, pmf_t):
t = pmf_t.Random()
while True:
sick = flip(p)
test = flip(s) if sick else flip(t)
if test:
return test, sick, t
outcomes, post_t = run_single_simulation(generate_patient_D)
outcomes.Print()
post_t.Print()
"""
Explanation: Scenario D: Choose t before generating patients, only yield patients who test positive.
End of explanation
"""
from random import choice
import numpy as np
N = 100
patients = range(N)
p = 0.1
s = 0.9
num_sick = 0
pmf_t = Pmf()
pmf_sick = Pmf()
for i in range(10000000):
# decide what the value of t is
t = choice([0.2, 0.4])
np.random.shuffle(patients)
# generate patients until we get a positive test
for patient in patients:
sick = flip(p)
test = flip(s) if sick else flip(t)
if test:
if patient==1:
#print(patient, sick, t)
pmf_t[t] += 1
pmf_sick[sick] += 1
break
pmf_t.Normalize()
pmf_sick.Normalize()
print('Dist of t')
pmf_t.Print()
print('Dist of status')
pmf_sick.Print()
num_sick
"""
Explanation: Here's a variation of Scenario D where we only consider cases in which patient 1 is the first to test positive.
End of explanation
"""
|
carlosclavero/PySimplex | Documentation/Tutorial librería Simplex.py.ipynb | gpl-3.0 | from PySimplex import Simplex
from PySimplex import rational
import numpy as np
number="2"
print(Simplex.convertStringToRational(number))
number="2/5"
print(Simplex.convertStringToRational(number))
# If given something that is not a string, it returns None
number=2
print(Simplex.convertStringToRational(number))
"""
Explanation: Simplex.py
The following tutorial covers all the methods available in the Simplex.py library. Of course, applying many of them in the correct sequence could lead to the solution of a linear programming problem. However, obtaining a solution this way is much longer and more complex; it is much easier to use the SimplexSolver.py program.
To use the library, an auxiliary class called rational has been created. This class represents rational numbers. Each object of this class has a numerator and a denominator, so to define an integer you must give it denominator 1. A rational object is defined as follows:
rational(3,2) # This defines the number 3/2
El tutorial se va a dividir en cuatro partes, las mismas en las que se divide la librería. La primera, muestra los métodos creados para realizar operaciones con racionales(muchos de ellos se utilizan simplemente para las comprobaciones de parámetros de entrada de otros métodos). La segunda parte, serán operaciones con matrices y arrays(tales como invertir una matriz), que han tenido que ser redefinidas para que puedan ser utilizadas con la clase rational. La tercera parte, son los métodos utilizados para alcanzar la solución mediante el método Simplex, y la cuarta, será la formada por aquellos métodos que permiten obtener la solución gráfica.
A continuación se exponen los métodos de la librería, con explicaciones y ejemplos de cada uno de ellos.
NOTA 1: Siempre que se hable de variables del problema, hay que considerar, que la primera variable será la 0, es decir x0.
NOTA 2: Los "imports" necesarios se realizan en la primera celda, para ejecutar cualquiera de las siguientes, sin errores, debe ejecutar primero la celda que contiene los "imports". Si realiza una ejecución en su propio entorno de programación, debe importar estas dos clases, para que los métodos se ejecuten sin errores(por favor, consulte en detalle el manual de instalación que hay en la misma localización que este manual):
from PySimplex import Simplex
from PySimplex import rational
import numpy as np
Operaciones con rational
convertStringToRational
Este método recibe un número en un string, y devuelve el número como un rational. Si no recibe un string, devuelve None. Ejemplos:
End of explanation
"""
line="3 4 5"
print(Simplex.printMatrix((np.asmatrix(Simplex.convertLineToRationalArray(line)))))
line="3 4/5 5"
print(Simplex.printMatrix((np.asmatrix(Simplex.convertLineToRationalArray(line)))))
# If given something that is not a string, it returns None
print(Simplex.convertLineToRationalArray(4))
"""
Explanation: convertLineToRationalArray
This method receives a string containing a set of numbers separated by spaces, and returns the numbers in a numpy array of rational elements. If it does not receive a string, it returns None. Examples:
End of explanation
"""
a=rational(3,4)
Simplex.rationalToFloat(a)
a=rational(3,1)
Simplex.rationalToFloat(a)
# If the input is not a rational, it returns None
a=3.0
print(Simplex.rationalToFloat(a))
"""
Explanation: rationalToFloat
This method receives a rational object and returns its value as a float, by dividing the numerator by the denominator. If the parameter is not a rational, it returns None.
End of explanation
"""
rationalList=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2)
,rational(4,5)),(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
Simplex.listPointsRationalToFloat(rationalList)
# If given something that is not a list of points with rational coordinates, it returns None
rationalList=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]
print(Simplex.listPointsRationalToFloat(rationalList))
"""
Explanation: listPointsRationalToFloat
This method receives a list of points whose coordinates are rational, and returns the same list of points with the coordinates as floats. If the input is not a list of rational points, it returns None. Examples:
End of explanation
"""
lis=[(rational(1,2),rational(5,7)),(rational(4,5),rational(4,6)),(rational(4,9),rational(9,8))]
Simplex.isAListOfRationalPoints(lis)
lis=[(rational(1,2),rational(5,7)),(4,rational(4,6)),(rational(4,9),rational(9,8))]
Simplex.isAListOfRationalPoints(lis)
# If given something that is not a list, it returns None
lis=np.array([(rational(1,2),rational(5,7)),(4,rational(4,6)),(rational(4,9),rational(9,8))])
print(Simplex.isAListOfRationalPoints(lis))
"""
Explanation: isAListOfRationalPoints
This method receives a list and returns True if every element is a point (tuple) with rational coordinates, or False if some element is not a point with rational coordinates. If the parameter is not a list, it returns None. Examples:
End of explanation
"""
# If every element is a point (tuple), it returns True
lis=[(3,4),(5,6),(7,8),(8,10)]
Simplex.isAListOfPoints(lis)
# If given a list whose elements are not all points (tuples), it returns False
lis=[3,5,6,(6,7)]
Simplex.isAListOfPoints(lis)
# If given something that is not a list, it returns None
print(Simplex.isAListOfPoints(3))
"""
Explanation: isAListOfPoints
This method receives a list and returns True if every element is a point (tuple), or False if some element is not a point. If the parameter is not a list, it returns None. Examples:
End of explanation
"""
mat=np.matrix([[rational(1,2),rational(5,7)],[rational(5,8),rational(9,3)]])
Simplex.isARationalMatrix(mat)
mat=np.array([[rational(1,2),rational(5,7)],[rational(5,8),rational(9,3)]])
Simplex.isARationalMatrix(mat)
mat=np.matrix([[1,rational(5,7)],[rational(5,8),rational(9,3)]])
Simplex.isARationalMatrix(mat)
# If given something that is not a numpy matrix or array, it returns None
mat=[rational(1,2),rational(5,7)]
print(Simplex.isARationalMatrix(mat))
"""
Explanation: isARationalMatrix
This method receives a numpy matrix or a two-dimensional numpy array and checks whether all of its elements are rational, returning True in that case and False otherwise. If it does not receive a numpy matrix or array, it returns None. Examples:
End of explanation
"""
arr=np.array([rational(1,2),rational(5,7),rational(4,5)])
Simplex.isARationalArray(arr)
arr=np.array([rational(1,2),6,rational(4,5)])
Simplex.isARationalArray(arr)
# If given something that is not a numpy matrix or array, it returns None
arr=[rational(1,2),rational(5,7),rational(4,5)]
print(Simplex.isARationalArray(arr))
"""
Explanation: isARationalArray
This method receives a numpy array and checks whether all of its elements are rational, returning True in that case and False otherwise. If it does not receive a numpy matrix or array, it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
det=Simplex.determinant(matrix)
print(det)
# If the matrix is not square, it returns None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.determinant(matrix))
# It also accepts a two-dimensional numpy array
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.determinant(matrix))
# If given something that is not a square numpy matrix with rational elements, it returns None
print(Simplex.determinant(3))
"""
Explanation: Operations with matrices
determinant
This method receives a numpy matrix with rational components and returns the determinant of the matrix. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. It also accepts a two-dimensional numpy array. Examples:
End of explanation
"""
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
m=Simplex.coFactorMatrix(matrix)
print(Simplex.printMatrix(m))
# If the matrix is not square, it returns None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.coFactorMatrix(matrix))
# If given something that is not a square numpy matrix with rational elements, it returns None
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.coFactorMatrix(matrix))
"""
Explanation: coFactorMatrix
This method receives a numpy matrix with rational components and returns the matrix of cofactors. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
m=Simplex.adjMatrix(matrix)
print(Simplex.printMatrix(m))
# If the matrix is not square, it returns None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.adjMatrix(matrix))
# If given something that is not a square numpy matrix with rational elements, it returns None
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.adjMatrix(matrix))
"""
Explanation: adjMatrix
This method receives a numpy matrix with rational components and returns the adjugate matrix. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
m=Simplex.invertMatrix(matrix)
print(Simplex.printMatrix(m))
# If the matrix is not square, it returns None
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])
print(Simplex.invertMatrix(matrix))
# If given something that is not a square numpy matrix with rational elements, it returns None
matrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
print(Simplex.invertMatrix(matrix))
"""
Explanation: invertMatrix
This method receives a numpy matrix with rational components and returns the inverse matrix. The matrix must be square. If the input is not a square numpy matrix with rational elements, it returns None. Examples:
End of explanation
"""
m=Simplex.initializeMatrix(3, 2)
print(Simplex.printMatrix(m))
# If the inputs are not integers, it returns None
print(Simplex.initializeMatrix(4.0,3.0))
"""
Explanation: initializeMatrix
This method receives the dimensions and returns a numpy matrix with rational elements, all of value 0. If the inputs are not integers, it returns None. Examples:
End of explanation
"""
m=Simplex.createRationalIdentityMatrix(3)
print(Simplex.printMatrix(m))
# If the input is not an integer, it returns None
print(Simplex.createRationalIdentityMatrix(4.0))
"""
Explanation: createRationalIdentityMatrix
This method receives a number and returns a numpy identity matrix with rational elements. If the input is not an integer, it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
num= rational(3,4)
m = Simplex.multNumMatrix(num, matrix)
print(Simplex.printMatrix(m))
# If either argument is invalid (here the number is not a rational), it returns None
num = 4
print(Simplex.multNumMatrix(num, matrix))
"""
Explanation: multNumMatrix
This method receives a number as a rational and a numpy matrix with rational components, and returns the matrix resulting from multiplying the number by the given matrix. If the number is not a rational, or the matrix is not a numpy matrix with rational elements, it returns None. Examples:
End of explanation
"""
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
Simplex.twoMatrixEqual(matrix1, matrix2)
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])
Simplex.twoMatrixEqual(matrix1, matrix2)
# If the dimensions are not equal, it returns False
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
Simplex.twoMatrixEqual(matrix1, matrix2)
# If given something that is not a numpy matrix with rational elements, it returns None
print(Simplex.twoMatrixEqual(matrix1, 3))
"""
Explanation: twoMatrixEqual
This method receives two numpy matrices with rational components and returns True if they are equal, or False if they are not. If the input is not a numpy matrix with rational elements, it returns None. Examples:
End of explanation
"""
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])
print(Simplex.printMatrix(matrix2))
# It also accepts a two-dimensional numpy array
matrix2=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])
print(Simplex.printMatrix(matrix2))
# If given something that is not a numpy matrix or two-dimensional array with rational elements, it returns None
print(Simplex.printMatrix(3))
"""
Explanation: printMatrix
This method receives a numpy matrix with rational components and converts it to string format. If the input is not a numpy matrix with rational elements, it returns None. It also accepts a two-dimensional numpy array. Examples:
End of explanation
"""
matrix1=np.matrix([[rational(4,7),rational(8,9),rational(2,5)],[rational(2,4),rational(3,4),rational(7,5)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
m=Simplex.multMatrix(matrix1, matrix2)
print(Simplex.printMatrix(m))
# If the number of columns of the first matrix does not equal the number of rows of the second, it returns None
matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
print(Simplex.multMatrix(matrix1, matrix2))
# If given something that is not a numpy matrix with rational elements, it returns None
matrix1=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])
matrix2=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])
print(Simplex.multMatrix(matrix1, matrix2))
"""
Explanation: multMatrix
This method receives two numpy matrices with rational components and returns the matrix resulting from the product of the two. If the number of columns of the first matrix does not equal the number of rows of the second, the matrices cannot be multiplied and it returns None. If the input is not a numpy matrix with rational elements, it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[1,3,4,4,5],[12,45,67,78,9],[3,4,3,5,6]])
variablesIteration=np.array([1,3,4])
Simplex.variablesNoiteration(matrix,variablesIteration)
variablesIteration=np.array([3,4,1])
Simplex.variablesNoiteration(matrix,variablesIteration)
# The method also works with matrices of rational elements
matrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],
[rational(4,1),rational(6,4),rational(9,2)]])
variablesIteration=np.array([3,4,1])
Simplex.variablesNoiteration(matrix,variablesIteration)
#If the first parameter is not a numpy matrix, or the second is not a numpy array, it returns None
print(Simplex.variablesNoiteration(3,variablesIteration))
"""
Explanation: The Simplex method
variablesNoiteration
This method computes the variables that are not in the current iteration. It receives a numpy matrix containing the problem's constraints and a numpy array containing the variables already in the iteration (these variables need not appear in order in the array). The method works with integer, float and rational matrices. If the parameters are invalid, it returns None. If everything is correct, it returns a numpy array with the variables that are not in the iteration. Examples:
End of explanation
"""
setOfVal=np.array([rational(1,4),rational(4,7),rational(6,8),rational(6,4)])
print(Simplex.calcMinNoNan(setOfVal))
setOfVal=np.array([np.nan,rational(4,7),rational(6,8),rational(6,4)])
print(Simplex.calcMinNoNan(setOfVal))
#If given a set of values NONE of which are rational, it returns None
setOfVal=np.array([np.nan,np.nan,np.nan,np.nan])
print(Simplex.calcMinNoNan(setOfVal))
#If given something that is not a numpy array, it returns None
print(Simplex.calcMinNoNan(2))
"""
Explanation: calcMinNoNan
This method computes the minimum of a set of values. It receives a numpy array with the values, selects those that are rational, and computes the minimum. If the parameters are invalid, it returns None. If everything is correct, it returns the minimum non-negative value, or None if there are no rational values. Examples:
End of explanation
"""
array=np.array([3,4,5,6,7,2,3,6])
value= 3
Simplex.calculateIndex(array,value)
#If the value is not in the array, it returns None
value=78
print(Simplex.calculateIndex(array,value))
# The method also works with rational
value=rational(4,7)
array=np.array([rational(1,4),rational(4,7),rational(6,8),rational(6,4)])
Simplex.calculateIndex(array,value)
#If the first parameter is not an array, or the second is not a number, it returns None
print(Simplex.calculateIndex(4,value))
"""
Explanation: calculateIndex
This method receives a numpy array and a value, and returns the position in the array of the first occurrence of that value. If the value does not appear in the array, it returns None. The method works with sets of integers and with sets of rationals. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
totalMatrix=np.matrix([[1,2,3,4,5],[2,6,7,8,9],[6,3,4,5,6]])
columnsOfIteration=np.array([1,2,0])
Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration)
# The method also works with matrices of rational elements
columnsOfIteration=np.array([1,2,0])
totalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),
rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),
rational(3,1)]])
print(Simplex.printMatrix(Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration)))
# If more columns are passed than the total matrix has, it returns None
columnsOfIteration=np.array([0,1,2,3,4,5,6])
print(Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration))
# If the first parameter is not a numpy matrix, or the second is not a numpy array, it returns None
print(Simplex.calculateBaseIteration(4,columnsOfIteration))
"""
Explanation: calculateBaseIteration
This method computes the basis of the iteration and returns it as a numpy matrix. It receives the matrix containing all of the problem's constraints (without signs or resources), and the columns that belong to the iteration (they need not appear in order in the array). The matrix may contain integer or rational values. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
base=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],
[rational(4,1),rational(6,4),rational(9,2)]])
Simplex.showBase(base,"B")
#If the first parameter is not a numpy matrix with rational elements, or the second is not a string, it returns None
print(Simplex.showBase(3,"B"))
"""
Explanation: showBase
This method receives a numpy matrix with rational elements, assumed to be the basis of an iteration, together with the name to be assigned to it, and prints it on screen with the name it is given within the iteration (B). If the parameters are invalid, it returns None. Examples:
End of explanation
"""
base=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],
[rational(4,1),rational(6,4),rational(9,2)]])
resourcesVector=np.array([rational(2,1),rational(33,2),rational(52,8)])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateIterationSolution(base,resourcesVector))))
#If the resource vector's length differs from the matrix's number of rows, it returns None
resourcesVector=np.array([rational(2,1),rational(33,2)])
print(Simplex.calculateIterationSolution(base,resourcesVector))
#If the first parameter is not a numpy matrix of rational elements, or the second is not a numpy array of rational elements, it returns None
print(Simplex.calculateIterationSolution(base,4))
"""
Explanation: calculateIterationSolution
This method computes the solution of an iteration for its variables and returns it as a numpy array. It receives the basis of the iteration as a numpy matrix, and the resource vector as a numpy array. The elements of both the matrix and the array must be rational. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
sol=np.array([[rational(2,2)],[rational(5,3)],[rational(6,1)],[rational(7,8)]])
Simplex.showSolution(sol)
#If given something that is not a numpy array with rational elements, it returns None
sol=np.array([[2],[5],[6],[7]])
print(Simplex.showSolution(sol))
"""
Explanation: showSolution
This method receives the solution of an iteration and prints it with its assigned name ("x"). The solution must be passed as a numpy array in column form with rational elements. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
columnsOfIteration=np.array([0,2,3])
functionVector= np.array([0,1,2,3,5,5,6])
Simplex.calculateCB(columnsOfIteration,functionVector)
# The method also works with rational elements
columnsOfIteration=np.array([0,2])
functionVector= np.array([rational(0,1),rational(2,3),rational(5,5)])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateCB(columnsOfIteration,functionVector))))
# If more columns are passed than the function vector has, it returns None
columnsOfIteration=np.array([0,1,2])
functionVector= np.array([0,1])
print(Simplex.calculateCB(columnsOfIteration,functionVector))
# If either parameter is not a numpy array, it returns None
print(Simplex.calculateCB([0,1],functionVector))
"""
Explanation: calculateCB
This method computes the value of the function vector for an iteration. It receives a numpy array with the columns of the iteration, and another numpy array with the problem's complete function vector. If everything is correct, it returns a numpy array with the function vector for the given columns. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
CBValue= np.array([rational(0,1),rational(2,3),rational(5,5)])
Simplex.showCB(CBValue)
#If given something that is not a numpy array of rational elements, it returns None
CBValue= np.array([0,1,4,6])
print(Simplex.showCB(CBValue))
"""
Explanation: showCB
This method receives a numpy array of rational elements containing the value of the function vector, and simply prints it on screen with its assigned name ("CB"). If the parameters are invalid, it returns None. Examples:
End of explanation
"""
# The solution must be passed as a column array
solution=np.array([[rational(2,1)],[rational(3,2)],[rational(2,5)]])
CB = np.array([rational(0,1),rational(2,3),rational(5,5)])
print(Simplex.printMatrix(Simplex.calculateFunctionValueOfIteration(solution,CB)))
#If the sizes of the two parameters differ, it returns None
solution=np.array([[rational(2,1)],[rational(3,2)],[rational(2,5)]])
CB = np.array([rational(0,1),rational(5,5)])
print(Simplex.calculateFunctionValueOfIteration(solution,CB))
#If either parameter is not a numpy array with rational elements, it returns None
print(Simplex.calculateFunctionValueOfIteration(solution,3))
"""
Explanation: calculateFunctionValueOfIteration
This method receives the iteration's solution and the function vector for that iteration, and returns a numpy matrix containing the value of the function for the iteration. The solution must be passed as a numpy array in column form (as the example shows). The function vector must be a numpy array in row form. Both arrays must have rational elements. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
functionValue=np.matrix([34])
Simplex.showFunctionValue(functionValue)
# The method also works with rational matrices
functionValue=np.matrix([rational(34,1)])
Simplex.showFunctionValue(functionValue)
#If it receives something that is not a numpy matrix, it returns None
print(Simplex.showFunctionValue(4))
"""
Explanation: showFunctionValue
This method receives a numpy matrix containing the function's value for the iteration, and prints it with its name ("z"). The method also works when the matrix is passed with rational elements. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
variablesNoIteration=np.array([3,4])
iterationBase=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),
rational(1,3)], [rational(4,1),rational(6,4),rational(9,2)]])
totalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),
rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),
rational(3,1)]])
print(Simplex.printMatrix(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix)))
#If there are more variables outside the iteration than total variables, it returns None
variablesNoIteration=np.array([0,1,2,3,4,5])
print(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix))
#If the basis has more or fewer rows than the total matrix, it returns None
variablesNoIteration=np.array([3,4])
iterationBase=np.matrix([[rational(6,7),rational(4,5),rational(3,1)], [rational(4,1),rational(6,4),rational(9,2)]])
totalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),
rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),
rational(3,1)]])
print(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix))
#If the second or third parameter is not a numpy matrix of rationals, or the first parameter is not a numpy array, it returns None
print(Simplex.calculateYValues(variablesNoIteration,4,totalMatrix))
"""
Explanation: calculateYValues
This method computes the y values for an iteration. It receives the basis of the iteration as a numpy matrix, the total matrix containing all of the problem's constraints (without signs or resources) as a numpy matrix, and the variables that do not belong to the iteration as a numpy array. The elements of both matrices must be rational. If all the parameters are correct, it returns a numpy array with the values of each y for the iteration. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
variablesNoIteration=np.array([1,3])
y = np.array([[rational(2,3),rational(4,6)],[rational(3,2),rational(4,1)]])
Simplex.showYValues(variablesNoIteration,y)
#If either parameter is not a numpy array (the second with rational elements), it returns None
print(Simplex.showYValues(690,y))
"""
Explanation: showYValues
This method receives a numpy array containing the variables that do not belong to the iteration, and the y values in a numpy array with rational elements, and prints them on screen with their names ("y" + variable number). If the parameters are invalid, it returns None. Examples:
End of explanation
"""
functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])
variablesNoIteration= np.array([0,2,3])
CB = np.array([rational(2,1),rational(0,1)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))))
# If any of the parameters is not a numpy array, it returns None
print(Simplex.calculateZC(89,variablesNoIteration,CB,y))
# If the function vector for the iteration (CB) is larger than the set of y results, it returns None
functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])
variablesNoIteration= np.array([0,2,3])
CB = np.array([rational(2,1),rational(0,1),rational(3,2),rational(2,1),rational(4,3)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
# If there are more variables outside the iteration than variables in the total function vector, it returns None
functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])
variablesNoIteration= np.array([0,1,2,3,4,5,6])
CB = np.array([rational(2,1),rational(0,1)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
# If the function vector for the iteration is larger than the total function vector, it returns None:
functionVector= np.array([rational(1,1),rational(3,1)])
variablesNoIteration= np.array([0,1,2,3,4,5,6])
CB = np.array([rational(2,1),rational(0,1),rational(4,1),rational(5,1),rational(5,1)])
y = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)
,rational(-1,1)]])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
# If the input is not a numpy array, it returns None (the first, third and fourth parameters must have rational elements)
functionVector=np.array([3,-6,-3])
print(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))
"""
Explanation: calculateZC
This method computes the values of the entry rule and returns them in a numpy array. It receives the complete function vector, in a numpy array; the variables that are not in the iteration, in a numpy array; the function vector for the iteration, in a numpy array; and finally the y values for the iteration, in a numpy array. All the arrays must have rational elements, except the one with the variables that are not in the iteration. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
variablesNoIteration= np.array([0,2,3])
Z_C=np.array([3,-6,-3])
Simplex.showZCValues(variablesNoIteration,Z_C)
# It also works with rational
variablesNoIteration= np.array([0,2,3])
Z_C=np.array([rational(3,5),rational(-6,2),rational(-3,1)])
Simplex.showZCValues(variablesNoIteration,Z_C)
# If the number of entry-rule values differs from the number of variables outside the iteration, it returns None
Z_C=np.array([3,-6])
print(Simplex.showZCValues(variablesNoIteration,Z_C))
# If either parameter is not a numpy array, it returns None
print(Simplex.showZCValues(3,Z_C))
"""
Explanation: showZCValues
This method receives the entry-rule values (Z_C) in a numpy array and, in another numpy array, the variables that do not belong to the iteration. If all parameters are correct, it prints the entry-rule values with their associated names ("Z_C" + variable number). The method works with both rational and integer elements. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
inputRuleValues=np.array([3,-6])
Simplex.thereIsAnotherIteration(inputRuleValues)
inputRuleValues=np.array([0,-6])
Simplex.thereIsAnotherIteration(inputRuleValues)
inputRuleValues=np.array([0,6])
Simplex.thereIsAnotherIteration(inputRuleValues)
inputRuleValues=np.array([1,6])
Simplex.thereIsAnotherIteration(inputRuleValues)
# The method also works with rational elements
inputRuleValues=np.array([rational(1,3),rational(-2,3)])
Simplex.thereIsAnotherIteration(inputRuleValues)
# If it is passed something that is not a numpy array, it returns None
print(Simplex.thereIsAnotherIteration(2))
"""
Explanation: thereIsAnotherIteration
This method receives the entry-rule values in a numpy array. It returns True if there is another iteration; -1 if there are infinitely many solutions; or False if there are no more iterations. The method works with both rational and integer elements. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
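The stopping rule above can be sketched in plain Python. This is a hypothetical re-implementation, assuming the convention shown in the examples: a negative entry-rule value means another iteration is needed, a zero among otherwise non-negative values signals infinitely many optima, and all strictly positive values mean the method is done.

```python
import numpy as np

def there_is_another_iteration(z_minus_c):
    """Hypothetical sketch of the stopping rule:
    True  -> some value is negative, another iteration is needed;
    -1    -> no negatives but a zero, infinitely many solutions;
    False -> all values strictly positive, no more iterations."""
    vals = np.asarray(z_minus_c)
    if (vals < 0).any():
        return True
    if (vals == 0).any():
        return -1
    return False

print(there_is_another_iteration([3, -6]))  # True
print(there_is_another_iteration([0, 6]))   # -1
print(there_is_another_iteration([1, 6]))   # False
```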
Simplex.showNextIteration(True)
Simplex.showNextIteration(False)
Simplex.showNextIteration(-1)
# If it receives anything other than True, False or -1, it returns None
print(Simplex.showNextIteration(-2))
"""
Explanation: showNextIteration
This method prints an explanation of the result returned by the previous method. If it receives True, it shows the explanation for when the problem is not finished and there are more iterations; if it receives False, the explanation for when the problem has finished; and if it receives -1, the explanation for when there are infinitely many solutions. If it receives anything else, it returns None. Examples:
End of explanation
"""
variablesNoIteration=np.array([0,2,3])
inputRuleValues=np.array([3,-6,-3])
Simplex.calculateVarWhichEnter(variablesNoIteration,inputRuleValues)
# The method also works with rational elements
variablesNoIteration=np.array([0,2,3])
inputRuleValues=np.array([rational(3,9),rational(-6,2),rational(-3,2)])
Simplex.calculateVarWhichEnter(variablesNoIteration,inputRuleValues)
# If something that is not a numpy array is passed in either parameter, it returns None
print(Simplex.calculateVarWhichEnter(variablesNoIteration,5))
"""
Explanation: calculateVarWhichEnter
This method receives a numpy array with the variables that are not in the iteration, and another numpy array with the entry-rule values. If the input parameters are correct, it returns the variable that should enter the next iteration (the one with the minimum value). The method works with both rational and integer elements. If the parameters are invalid, it returns None. Examples:
End of explanation
"""
variableWhichEnter= 2
Simplex.showVarWhichEnter(variableWhichEnter)
# If what it receives is not a number, it returns None
print(Simplex.showVarWhichEnter("adsf"))
"""
Explanation: showVarWhichEnter
This method receives the entering variable and prints it, indicating that it is the variable that enters. If it does not receive a number, it returns None. Examples:
End of explanation
"""
inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),
rational(5,1),rational(6,1)]])
sol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])
Simplex.calculateExitValues(inputRuleValues,yValues,sol)
# If the number of entry-rule values differs from the number of y sets, it returns None
inputRuleValues=np.array([rational(2,1),rational(-3,1)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),
rational(5,1),rational(6,1)]])
sol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])
print(Simplex.calculateExitValues(inputRuleValues,yValues,sol))
# Likewise, if the number of y sets differs from the number of entry-rule values, it returns None
inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),]])
sol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])
print(Simplex.calculateExitValues(inputRuleValues,yValues,sol))
# If the length of the solution is smaller than the number of values in some y set, it returns None
inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])
yValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),
rational(5,1),rational(6,1)]])
sol=np.array([[rational(1,1)],[rational(0,1)]])
print(Simplex.calculateExitValues(inputRuleValues,yValues,sol))
# If any parameter is something other than a numpy array with rational elements, it returns None
print(Simplex.calculateExitValues(inputRuleValues,66,sol))
"""
Explanation: calculateExitValues
This method receives the entry-rule values in a numpy array, the y values in another numpy array, and the solution for that iteration in a numpy array, as a column. All array elements must be rational. If all parameters are passed correctly, it returns the exit-rule values. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
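The exit-rule ratios can also be sketched outside the Simplex class. In this hypothetical example, `x_B` and `y_enter` are made-up data, `Fraction` stands in for `rational`, and entries with a non-positive denominator are marked `None` rather than NaN:

```python
from fractions import Fraction

# Hypothetical data: current basic solution x_B and the entering column y_enter.
x_B = [Fraction(1), Fraction(0), Fraction(-2)]
y_enter = [Fraction(2), Fraction(4), Fraction(-3)]

# Exit rule: ratio x_B[i] / y_enter[i], defined only where y_enter[i] > 0;
# non-positive denominators are excluded (None here, NaN in the method above).
ratios = [x / y if y > 0 else None for x, y in zip(x_B, y_enter)]
print(ratios)  # [Fraction(1, 2), Fraction(0, 1), None]

# The leaving variable corresponds to the smallest defined ratio.
valid = [(r, i) for i, r in enumerate(ratios) if r is not None]
print(min(valid))  # (Fraction(0, 1), 1)
```

This mirrors the division of labor in the notebook: computing the ratios (calculateExitValues) and taking their minimum (calculateO) are separate steps.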
exitValues=np.array([rational(1,2),rational(-3,2),rational(0,1),rational(5,2)])
Simplex.showExitValues(exitValues)
# If it receives something that is not a numpy array with rational elements, it returns None
exitValues=np.array([1,-3,0,5])
print(Simplex.showExitValues(exitValues))
"""
Explanation: showExitValues
This method receives the exit-rule values in a numpy array with rational elements and prints them, together with their name ("O") and the selection criterion for the exit value (min). If it does not receive a numpy array, it returns None. Examples:
End of explanation
"""
exitValues=np.array([rational(1,3),rational(-3,2),rational(0,1),rational(5,4)])
print(Simplex.calculateO(exitValues))
# If all received values are NaN, they are skipped and it returns None
exitValues=np.array([np.nan,np.nan,np.nan,np.nan])
print(Simplex.calculateO(exitValues))
# If it receives something that is not a numpy array with rational or NaN elements, it returns None
exitValues=np.array([-1,-3,-3,-5])
print(Simplex.calculateO(exitValues))
"""
Explanation: calculateO
This method computes the value of O for a set of exit values received as a numpy array; this value is the minimum of the received values. Determining which values have a negative or zero denominator is done in the calculateExitValues method, so here the array will contain rational and NaN values. If all values are NaN, it returns None. If it does not receive a numpy array, it returns None. Examples:
End of explanation
"""
O = 3
Simplex.showOValue(O)
O = rational(3,4)
Simplex.showOValue(O)
# If what it receives is not a number, it returns None
print(Simplex.showOValue([4,3]))
"""
Explanation: showOValue
This method receives the value of O and simply prints it with its associated name ("O"). If it does not receive a number, it returns None. Examples:
End of explanation
"""
outputRuleValues=np.array([rational(1,2),rational(-3,-2),rational(0,1),rational(5,7)])
columnsOfIteration=np.array([0,2,3])
Simplex.calculateVarWhichExit(columnsOfIteration,outputRuleValues)
# If the exit-rule values are all negative or divided by 0, i.e. NaN is passed, it returns None
outputRuleValues=np.array([np.nan,np.nan,np.nan,np.nan])
print(Simplex.calculateVarWhichExit(columnsOfIteration,outputRuleValues))
# If either parameter is not a numpy array, it returns None
outputRuleValues=np.array([1,-3,0,5])
print(Simplex.calculateVarWhichExit(4,outputRuleValues))
"""
Explanation: calculateVarWhichExit
This method receives a numpy array with the variables or columns that belong to the iteration (they must appear in the order used in the problem), and another numpy array with the exit-rule values, which must be rational or NaN. If the parameters are correct, it returns the variable that will leave in this iteration, or None if all values are NaN. If it does not receive a numpy array, it returns None. Examples:
End of explanation
"""
varWhichExit=4
Simplex.showVarWhichExit(varWhichExit)
# If what it receives is not a number, it returns None.
print(Simplex.showVarWhichExit(np.array([3,4])))
"""
Explanation: showVarWhichExit
This method receives the leaving variable and prints it, together with an indication that it is the variable that will leave in this iteration. If it does not receive a number, it returns None. Examples:
End of explanation
"""
columnsOfIteration=np.array([3,4,5])
Simplex.showIterCol(columnsOfIteration)
# If it receives something that is not a numpy array, it returns None
print(Simplex.showIterCol(3))
"""
Explanation: showIterCol
This method receives a numpy array with the columns or variables of the iteration and simply prints them, together with an indication that these are the variables of the iteration. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([rational(2,1),rational(4,1),rational(1,1)])
columnsOfIteration=np.array([3,4,5])
Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration)
# If there is a different number of resources (b) than constraints, it returns None
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([[rational(2,1)],[rational(4,1)]])
columnsOfIteration=np.array([3,4,5])
print(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))
# If the function has a different number of variables than the constraints, it returns None
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1)])
b = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])
columnsOfIteration=np.array([3,4,5])
print(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))
# If the number of columns or variables of the iteration does not match the number of constraints, it returns None
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])
columnsOfIteration=np.array([3,4])
print(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))
# If the first parameter is not a numpy matrix with rational elements, or the remaining parameters are not numpy arrays
# with rational elements (except the iteration columns, which are integer values), it returns None.
totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),
rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)
,rational(9,1),rational(0,1), rational(1,1)]])
functionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])
b = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])
columnsOfIteration=np.array([3,4,5])
print(Simplex.solveIteration(4,b,functionVector,columnsOfIteration))
"""
Explanation: solveIteration
This method receives the complete constraint matrix of the problem (without signs or resources) as a numpy matrix, and then three numpy arrays containing the resource vector, the coefficient of every variable in the function, and the columns or variables of the current iteration. The elements of the matrix, the resources and the function vector must be rational. If all parameters are correct, it prints the development of the iteration and finally returns the iteration's solution, the value of the function for the iteration, which variable would enter, which variable would leave, and a value indicating whether there are more iterations (True), no more iterations (False), or infinitely many solutions (-1). If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(3,2),rational(0,1),rational(1,1)],[rational(3,5),rational(4,5),rational(0,1)],[rational(5,6),
rational(7,8),rational(0,1)]])
column=0
'''Column 0 of the identity matrix is searched for: [[1],
                                                     [0],
                                                     [0]]'''
Simplex.identityColumnIsInMatrix(matrix,column)
# If the identity-matrix column is not in the matrix, it returns None
column=2
print(Simplex.identityColumnIsInMatrix(matrix,column))
# If the requested column appears more than once, it returns the first occurrence
matrix=np.matrix([[rational(1,1),rational(0,1),rational(1,1)],[rational(0,1),rational(4,5),rational(0,1)],[rational(0,1),
rational(7,8),rational(0,1)]])
column=0
Simplex.identityColumnIsInMatrix(matrix,column)
# If a number greater than or equal to the matrix's number of columns is passed, it returns None
matrix=np.matrix([[rational(1,1),rational(0,1),rational(1,1)],[rational(0,1),rational(4,5),rational(0,1)],[rational(0,1),
rational(7,8),rational(0,1)]])
column=4
print(Simplex.identityColumnIsInMatrix(matrix,column))
# If the first parameter is not a numpy matrix with rational elements, or the second parameter is not a number,
# it returns None
print(Simplex.identityColumnIsInMatrix(matrix,"[2,3]"))
"""
Explanation: identityColumnIsInMatrix
This method receives a numpy matrix with rational elements and a number corresponding to the index of a column of the identity matrix. If all parameters are correct, it returns the index of the column of the given matrix where that identity-matrix column is found. If the indicated identity-matrix column is not in the matrix, it returns None. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
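The search described above amounts to comparing each column of the matrix against a unit vector. This is a hypothetical float-based sketch (the matrix `A` is made-up data, independent of the `rational` class):

```python
import numpy as np

def identity_column_index(matrix, k):
    """Return the index of the first column of `matrix` equal to column k of
    the identity matrix, or None if it does not appear (hypothetical sketch)."""
    m = matrix.shape[0]
    target = np.zeros(m)
    target[k] = 1
    for j in range(matrix.shape[1]):
        if np.array_equal(np.asarray(matrix)[:, j], target):
            return j
    return None

A = np.array([[1.5, 0.0, 1.0],
              [0.6, 0.8, 0.0],
              [5/6, 7/8, 0.0]])
print(identity_column_index(A, 0))  # 2: the third column is [1, 0, 0]
print(identity_column_index(A, 2))  # None: no column equals [0, 0, 1]
```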
totalMatrix=np.matrix([[rational(1,1),rational(2,1),rational(3,1),rational(4,1),rational(0,1)],[rational(0,1),rational(3,1),
rational(4,1),rational(7,1),rational(1,1)]])
Simplex.variablesFirstIteration(totalMatrix)
# If one of the identity-matrix columns is missing, it returns None
totalMatrix=np.matrix([[rational(1,1),rational(2,1),rational(3,1),rational(4,1),rational(0,1)],[rational(1,1),rational(3,1),
rational(4,1),rational(7,1),rational(1,1)]])
Simplex.variablesFirstIteration(totalMatrix)
# If an identity-matrix column appears more than once, only the first one is returned
totalMatrix=np.matrix([[rational(1,1),rational(1,1),rational(3,1),rational(4,1),rational(0,1)],[rational(0,1),rational(0,1),
rational(4,1),rational(7,1),rational(1,1)]])
Simplex.variablesFirstIteration(totalMatrix)
# If it receives something that is not a numpy matrix of rational elements, it returns None
print(Simplex.variablesFirstIteration(4))
"""
Explanation: variablesFirstIteration
This method receives a numpy matrix, which is the complete matrix of the problem and must contain rational elements. If all parameters are correct, it computes which variables belong to the first iteration of the problem (i.e. where the identity-matrix columns are located within the given matrix) and returns them in a numpy array. If one of the identity-matrix columns is missing, it returns None in its position. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
variableWhichEnters=4
variableWhichExits=3
previousVariables=np.array([1,3,5])
Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,previousVariables)
# If you try to remove a variable that is not present, nothing is removed
variableWhichEnters=4
variableWhichExits=6
previousVariables=np.array([1,3,5])
Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,previousVariables)
# If the third parameter is not a numpy array, or either of the first two is not a number, it returns
# None
print(Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,3))
"""
Explanation: calculateColumnsOfIteration
This method receives the variable that will enter the next iteration, the variable that will leave it, and, in a numpy array, the variables of the previous iteration. If the parameters are correct, it returns the variables of the current iteration in a numpy array. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
variablesOfLastIter=np.array([2,3,4])
numberOfVariables=6
iterationSolution=np.array([rational(4,1),rational(6,4),rational(7,3)])
print(Simplex.printMatrix(Simplex.completeSolution(variablesOfLastIter,numberOfVariables,iterationSolution)))
# If the number of variables of the last iteration differs from the length of the solution, it returns None
variablesOfLastIter=np.array([3,4])
numberOfVariables=6
iterationSolution=np.array([rational(4,1),rational(6,4),rational(7,3)])
print(Simplex.completeSolution(variablesOfLastIter,numberOfVariables,iterationSolution))
# If the first or third parameter is not a numpy array (the third must contain rational elements), or the second is not
# a number, it returns None
print(Simplex.completeSolution(variablesOfLastIter,[9,9],iterationSolution))
"""
Explanation: completeSolution
This method receives the variables of the iteration in a numpy array, the total number of variables of the problem, and the iteration's solution in a numpy array whose elements are all rational. If all parameters are passed correctly, it returns the complete solution, that is, the value of every variable for that iteration. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
matrixInitial=np.matrix([[rational(3,2),rational(4,3),rational(6,3)],[rational(6,9),rational(7,3),rational(8,5)],[rational(4,3),
rational(5,4),rational(7,5)]])
print(Simplex.printMatrix(Simplex.addIdentityColumns(matrixInitial)))
# If some identity-matrix columns are already present, it returns only the missing ones
matrixInitial=np.matrix([[rational(3,4),rational(1,1),rational(6,3)],[rational(6,4),rational(0,1),rational(8,9)],[rational(4,5),
rational(0,1),rational(7,6)]])
print(Simplex.printMatrix(Simplex.addIdentityColumns(matrixInitial)))
# If all identity-matrix columns are already present, it returns an empty array
matrixInitial=np.matrix([[rational(0,1),rational(1,1),rational(0,1)],[rational(1,1),rational(0,1),rational(0,1)],[rational(0,1),
rational(0,1),rational(1,1)]])
Simplex.addIdentityColumns(matrixInitial)
# If something that is not a numpy matrix with rational elements is passed, it returns None
print(Simplex.addIdentityColumns(4))
"""
Explanation: addIdentityColumns
This method receives a numpy matrix with rational elements and returns, as a numpy matrix, the identity-matrix columns it is missing. If it already has all the identity-matrix columns, it returns an empty array. If it receives something that is not a numpy matrix, it returns None. Examples:
End of explanation
"""
lis=["hola","adios","hasta luego"]
Simplex.isStringList(lis)
lis=["hola",4,"hasta luego"]
Simplex.isStringList(lis)
# If it receives something that is not a list, it returns None
print(Simplex.isStringList(4))
"""
Explanation: isStringList
This method receives a list and checks whether all of its elements are strings, in which case it returns True. If some element of the list is not a string, it returns False. It is mainly used to check that the input parameters of some methods are correct. If it does not receive a list, it returns None. Examples:
End of explanation
"""
array=np.array([2,3,4,5])
print(Simplex.calculateArtificialValueInFunction(array))
array=np.array([2,3,4,-5])
print(Simplex.calculateArtificialValueInFunction(array))
array=np.array([rational(2,5),rational(3,4),rational(4,9),rational(-5,7)])
print(Simplex.calculateArtificialValueInFunction(array))
# If it receives something that is not a numpy array, it returns None
print(Simplex.calculateArtificialValueInFunction(4))
"""
Explanation: calculateArtificialValueInFunction
This method computes and returns the coefficient of the artificial variable for the objective function. Although in theory this value is infinite and is added with a negative coefficient, it is enough for it to exceed the sum of the absolute values of the coefficients already in the function vector. The method works with both integer and rational values, but always returns a rational. If it receives something that is not a numpy array, it returns None. Examples:
End of explanation
"""
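The big-M bound above is straightforward to sketch: any value strictly larger than the sum of the absolute coefficients works. A hypothetical stand-alone version, using `Fraction` in place of `rational`:

```python
from fractions import Fraction

def artificial_coefficient(costs):
    """A value strictly larger than the sum of |c_j|, usable as a finite
    stand-in for the infinite big-M coefficient (hypothetical sketch)."""
    return sum(abs(Fraction(c)) for c in costs) + 1

print(artificial_coefficient([2, 3, 4, -5]))  # 15  (|2|+|3|+|4|+|-5| = 14, plus 1)
```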
vector=np.array([rational(3,1),rational(4,1),rational(5,1),rational(6,1)])
numOfArtificialVariables= 2
print(Simplex.printMatrix(np.asmatrix(Simplex.addArtificialVariablesToFunctionVector
(vector,numOfArtificialVariables))))
# If the first parameter is not a numpy array with rational elements, or the second is not a number,
# it returns None
print(Simplex.addArtificialVariablesToFunctionVector(vector,[2,3]))
"""
Explanation: addArtificialVariablesToFunctionVector
This method receives a numpy array with rational elements containing the objective-function coefficients (the function vector), and a number, which is the number of artificial variables to add. If the parameters are passed correctly, it returns a numpy array with the complete function vector, with the artificial-variable coefficients already added. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
vector=np.array([3,4,5,6,-20,-40])
numOfArtificialVariables= 2
Simplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)
# If the artificial variables have not been included, it assumes they are the last ones
vector=np.array([3,4,5,6])
numOfArtificialVariables= 2
Simplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)
vector=np.array([rational(3,2),rational(4,4),rational(5,6),rational(6,9),rational(-20,1),rational(-40,1)])
numOfArtificialVariables= 2
Simplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)
# If the first parameter is not a numpy array, or the second is not a number, it returns None
numOfArtificialVariables= 2
print(Simplex.calculateWhichAreArtificialVariables(2,numOfArtificialVariables))
"""
Explanation: calculateWhichAreArtificialVariables
This method receives a numpy array containing the objective-function coefficients with the artificial variables included (in order), and a number representing how many artificial variables there are. If the parameters are correct, it returns which variables are the artificial ones. The method works with both rational and integer elements. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
varArtificial=[4,5]
solution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(3,1)],[rational(9,1)]])
Simplex.checkValueOfArtificialVariables(varArtificial,solution)
varArtificial=[4,5]
solution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(-3,1)],[rational(-9,1)]])
Simplex.checkValueOfArtificialVariables(varArtificial,solution)
varArtificial=[4,5]
solution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(0,1)],[rational(9,1)]])
Simplex.checkValueOfArtificialVariables(varArtificial,solution)
# If the first parameter is not a list, or the second is not a numpy array of rational elements, it returns
# None
print(Simplex.checkValueOfArtificialVariables(5,solution))
"""
Explanation: checkValueOfArtificialVariables
This method receives a list containing the artificial variables of the problem, and the problem's solution in a numpy array with rational elements. If the parameters are passed correctly, the method checks whether any artificial variable takes a positive value and, in that case, returns them in a list (if this happens, the problem has no solution). This method is somewhat special, since it does not follow the behavior of the others: it receives the artificial variables counted from 0 (in the first example, then, 4 and 5 are the last two), but the variables it returns are counted from 1. If the parameters are invalid (see the examples), it returns None. Examples:
End of explanation
"""
listOfstrings=["//hola","2 3 4 <=4 //first","#hola","adios"]
Simplex.omitComments(listOfstrings)
# If it does not receive a list of strings, it returns None
print(Simplex.omitComments([5,3]))
"""
Explanation: omitComments
This method receives a list of strings and removes the entries that begin with the characters "//" or "#". Also, in entries where these characters appear anywhere in the string, it removes the substring from those characters onward. It returns the list with those entries removed. It is used to strip comments. If it receives something that is not a list, it returns None. Examples:
End of explanation
"""
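The comment-stripping behavior described above can be sketched in a few lines. This is a hypothetical re-implementation, not the notebook's own code:

```python
def omit_comments(lines):
    """Strip '//' and '#' comments: drop lines that start with them and cut
    trailing comment text elsewhere (hypothetical sketch)."""
    result = []
    for line in lines:
        for marker in ("//", "#"):
            idx = line.find(marker)
            if idx != -1:
                line = line[:idx]
        line = line.strip()
        if line:
            result.append(line)
    return result

print(omit_comments(["//hola", "2 3 4 <=4 //first", "#hola", "adios"]))
# ['2 3 4 <=4', 'adios']
```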
# Enter here the path of the file to open
file = open('../Files/file2.txt','r')
problem=Simplex.proccessFile(file)
print(Simplex.printMatrix(problem[0]))
print(Simplex.printMatrix(np.asmatrix(problem[1])))
print(problem[2])
print(problem[3])
# If it is passed something that is not a file, it returns None
print(Simplex.proccessFile(4))
"""
Explanation: proccessFile
This method receives a file as a parameter, which must contain a linear programming problem in the following format:
It returns, in this order, the constraint matrix as a numpy matrix, the resource vector as a numpy array, the constraint signs as a list of strings, and a string with the objective function to optimize. See the examples for how to open a file. If it is passed something that is not a file, it returns None. Examples:
End of explanation
"""
function="max 2 -3"
print(Simplex.printMatrix(np.asmatrix(Simplex.convertFunctionToMax(function))))
function="min 2 -3\n"
print(Simplex.printMatrix(np.asmatrix(Simplex.convertFunctionToMax(function))))
# If it receives something that is not a string, it returns None
function="min 2 -3\n"
print(Simplex.convertFunctionToMax(3))
"""
Explanation: convertFunctionToMax
This method receives a string containing the problem's objective function in the following format:
max/min 2 -3
The method returns a numpy array of rational elements with the function coefficients in maximization form, since that is how the standard form uses them; thus, if a minimization function is passed, it returns the coefficients with their signs flipped. If what it is passed is not a string, it returns None. Example:
End of explanation
"""
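The min-to-max conversion is just a sign flip on the coefficients. A hypothetical stand-alone parser for the same `"max/min c1 c2 ..."` string format, returning `Fraction` values in place of `rational`:

```python
from fractions import Fraction

def function_to_max(line):
    """Parse 'max/min c1 c2 ...' and return the coefficients as a
    maximization problem: 'min' coefficients are negated (hypothetical sketch)."""
    parts = line.split()
    coeffs = [Fraction(p) for p in parts[1:]]
    if parts[0] == "min":
        coeffs = [-c for c in coeffs]
    return coeffs

print(function_to_max("max 2 -3"))    # [Fraction(2, 1), Fraction(-3, 1)]
print(function_to_max("min 2 -3\n"))  # [Fraction(-2, 1), Fraction(3, 1)]
```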
previousSign="<"
Simplex.invertSign(previousSign)
previousSign=">"
Simplex.invertSign(previousSign)
previousSign="<="
Simplex.invertSign(previousSign)
previousSign=">="
Simplex.invertSign(previousSign)
previousSign="="
Simplex.invertSign(previousSign)
# If I pass something that is not a string, it returns None
previousSign=3
print(Simplex.invertSign(previousSign))
"""
Explanation: invertSign
This method receives a string containing a sign (it must be <, <=, >, >= or =) and returns its opposite sign in another string. If it does not receive a string, it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(-4,1),rational(5,2)])
sign=["<=","<",">"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.printMatrix(std[0]))
print(Simplex.printMatrix(np.asmatrix(std[1])))
print(std[2])
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(4,1),rational(5,2)])
sign=["<=","<",">"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.printMatrix(std[0]))
print(Simplex.printMatrix(np.asmatrix(std[1])))
print(std[2])
# If the length of the resource vector differs from the number of rows of the matrix, it returns None
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(-4,1)])
sign=["<=","<",">"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.negativeToPositiveResources(matrix,resources,sign))
# If the number of signs differs from the length of the resource vector or from the number of rows of the matrix,
# it returns None
matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],
[rational(3,1),rational(4,2),rational(6,4)]])
resources=np.array([rational(1,4),rational(-4,1),rational(5,2)])
sign=["<=","<"]
std=Simplex.negativeToPositiveResources(matrix,resources,sign)
print(Simplex.negativeToPositiveResources(matrix,resources,sign))
# If the first parameter is not a numpy matrix with rational elements, the second is not a numpy array with rational
# elements, or the third is not a list of strings, it returns None
resources=np.array([1,-4,5])
sign=["<=","<",">"]
print(Simplex.negativeToPositiveResources(matrix,resources,sign))
"""
Explanation: negativeToPositiveResources
This method turns negative resources positive, since negative resources are not allowed. It applies the necessary transformations and returns a numpy matrix of rational elements containing the constraints, a numpy array of rational elements containing the resources, and a list of strings with the sign of each constraint, with all changes already applied. The input parameters are the same as the outputs it produces, but before the transformations: a numpy matrix, a numpy array, and a list of strings. Resources that are already positive are left untouched, so for them the method simply returns what it receives. If the parameters are invalid (see examples), it returns None. Examples:
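The underlying transformation is simple: any row whose resource is negative is multiplied by -1 and its sign is flipped. A sketch with Python's `fractions.Fraction` standing in for `rational` (the `flip_negative_resources` name is hypothetical):

```python
from fractions import Fraction

def flip_negative_resources(rows, resources, signs):
    """Multiply each row whose resource is negative by -1 and flip its sign."""
    opposite = {"<": ">", "<=": ">=", ">": "<", ">=": "<=", "=": "="}
    new_rows, new_res, new_signs = [], [], []
    for row, b, s in zip(rows, resources, signs):
        if b < 0:
            new_rows.append([-a for a in row])
            new_res.append(-b)
            new_signs.append(opposite[s])
        else:
            new_rows.append(list(row))
            new_res.append(b)
            new_signs.append(s)
    return new_rows, new_res, new_signs

rows = [[Fraction(1, 2), Fraction(2, 3)], [Fraction(4, 3), Fraction(3)]]
print(flip_negative_resources(rows, [Fraction(1, 4), Fraction(-4)], ["<=", "<"]))
```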
End of explanation
"""
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
sign=["<=",">="]
function="min -2 -3 -4 "
std=Simplex.convertToStandardForm(matrix,resources,sign,function)
print(Simplex.printMatrix(std[0]))
print(Simplex.printMatrix(np.asmatrix(std[1])))
print(std[2])
print(Simplex.printMatrix(np.asmatrix(std[3])))
# If the length of the resource vector differs from the number of rows in the matrix, it returns None
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1),rational(52,1)])
sign=["<=",">="]
function="min -2 -3 -4 "
print(Simplex.convertToStandardForm(matrix,resources,sign,function))
# If the number of signs differs from the length of the resource vector or from the number of rows in the matrix,
# it returns None
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
sign=["<=",">=","="]
function="min -2 -3 -4 "
print(Simplex.convertToStandardForm(matrix,resources,sign,function))
# If the first parameter is not a numpy matrix of rational elements, the second is not a numpy array of rational
# elements, the third is not a list of strings, or the fourth is not a string, it returns None
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
function="min -2 -3 -4 "
print(Simplex.convertToStandardForm(matrix,resources,[4,0],function))
"""
Explanation: convertToStandardForm
This method receives a numpy matrix of rational elements containing the problem's constraints, a numpy array of rational elements containing the resource vector, a list of strings with the constraint signs, and a string with the objective function in the format "max/min 2 -3". If all the parameters are valid, the method returns the parameters it received, transformed into standard form (the function is returned as a numpy array of rational elements, in its maximization form). If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])
resources=np.array([rational(10,1),rational(15,1)])
function=np.array([rational(14,6),rational(25,2)])
Simplex.showStandarForm(matrix,resources,function)
# If the first parameter is not a numpy matrix of rational elements, or the second and third parameters are not
# numpy arrays of rational elements, it returns None
function=np.array([3,4])
print(Simplex.showStandarForm(matrix,resources,function))
"""
Explanation: showStandarForm
This method receives a numpy matrix of rational elements (the coefficient matrix), a numpy array of rational elements (the resource vector), and a numpy array of rational elements (the vector of the function to optimize). All parameters are given in standard form and are displayed in a more visual format. If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
# If False is passed, the dual solution is not returned
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">="]
function="max 2 1"
solutionOfDualProblem=False
sol=Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem)
print(Simplex.printMatrix(np.asmatrix(sol[0])))
print(Simplex.printMatrix(sol[1]))
print(sol[2])
# If True is passed, the dual solution is returned
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">="]
function="max 2 1"
solutionOfDualProblem=True
sol=Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem)
print(Simplex.printMatrix(np.asmatrix(sol[0])))
print(Simplex.printMatrix(sol[1]))
print(sol[2])
print(Simplex.printMatrix(np.asmatrix(sol[3])))
# If the length of the resource vector differs from the number of rows in the matrix, it returns None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1)])
sign=["<=","<=",">="]
function="max 2 1"
solutionOfDualProblem=True
print(Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem))
# If the number of signs differs from the length of the resource vector or from the number of rows in the matrix,
# it returns None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">=","="]
function="max 2 1"
solutionOfDualProblem=True
print(Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem))
# If the first parameter is not a numpy matrix of rational elements, the second is not a numpy array of rational
# elements, the third is not a list of strings, the fourth is not a string, or the fifth is not True or False,
# it returns None
matrix=np.matrix([[2,1],[1,-1],[5,2]])
resources=np.array([18,8,4])
sign=["<=","<=",">="]
function="max 2 1"
print(Simplex.solveProblem(matrix,resources,sign,function,True))
"""
Explanation: solveProblem
This method solves the linear programming problem passed as parameters. It receives a numpy matrix of rational elements containing the constraints (without signs or resources), a numpy array of rational elements containing the resources, a list of strings with the constraint signs, a string with the objective function in the format "max/min 2 -3", and a True/False value indicating whether the solution of the dual problem should also be returned. The method returns, in this order: the solution of the problem (the values of the variables), the value of the objective function at that solution, an explanation of the type of problem, and, if True was passed as the last parameter, the values of the variables of the dual problem's solution. The problem does not need to be given in standard form, since the method performs that transformation internally. If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">="]
function="max 2 1"
dual=Simplex.dualProblem(matrix,resources,sign,function)
print(Simplex.printMatrix(dual[0]))
print(Simplex.printMatrix(np.asmatrix(dual[1])))
print(dual[2])
print(dual[3])
# If the length of the resource vector differs from the number of rows in the matrix, it returns None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1)])
sign=["<=","<=",">="]
function="max 2 1"
print(Simplex.dualProblem(matrix,resources,sign,function))
# If the number of signs differs from the length of the resource vector or from the number of rows in the matrix,
# it returns None
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
sign=["<=","<=",">=","<="]
function="max 2 1"
print(Simplex.dualProblem(matrix,resources,sign,function))
# If the first parameter is not a numpy matrix of rational elements, the second is not a numpy array of rational
# elements, the third is not a list of strings, or the fourth is not a string, it returns None
matrix=np.matrix([[2,1,4],[6,-4,-7],[8,12,9]])
resources=np.array([[1],[8],[10]])
sign=["<=","<=",">="]
function="min 3 10 0"
print(Simplex.dualProblem(matrix,resources,sign,function))
"""
Explanation: dualProblem
This method receives a linear programming problem and returns its dual. It receives a numpy matrix of rational elements containing the constraints (without signs or resources), a numpy array of rational elements containing the resources, a list of strings with the constraint signs, and a string with the objective function in the format "max/min 2 -3". The method returns the dual problem, in this order: a numpy matrix containing the constraints (without signs or resources), a numpy array containing the resources, a list of strings with the constraint signs, and a string with the objective function in the format "max/min 2 -3". The problem does not need to be given in standard form (nor in symmetric maximization form), since the method performs those transformations internally. If the parameters are invalid (see examples), it returns None. Examples:
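For a problem already in symmetric maximization form (max c·x subject to Ax <= b, x >= 0), the dual is min b·y subject to Aᵀy >= c, y >= 0 — essentially a transpose. A sketch with `fractions.Fraction` in place of `rational` (`symmetric_dual` is a hypothetical name; the library also handles the conversion to symmetric form, which this sketch does not):

```python
from fractions import Fraction

def symmetric_dual(A, b, c):
    """Dual of: max c.x s.t. A x <= b, x >= 0  ->  min b.y s.t. A^T y >= c, y >= 0."""
    A_t = [list(col) for col in zip(*A)]       # transpose of the constraint matrix
    return A_t, list(c), [">="] * len(A_t), ("min", list(b))

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(-1)], [Fraction(5), Fraction(2)]]
b = [Fraction(18), Fraction(8), Fraction(0)]
c = [Fraction(2), Fraction(1)]
print(symmetric_dual(A, b, c))
```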
End of explanation
"""
colsOfIteration=np.array([3,4,1])
totalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],
[rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),
rational(7,1),rational(1,1),rational(0,1)]])
function=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1)])
print(Simplex.printMatrix(np.asmatrix(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,
totalMatrix))))
# If more columns (variables) are passed than exist in the matrix or in the function, it returns None
colsOfIteration=np.array([3,4,1,5,6,2])
totalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],
[rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),
rational(7,1),rational(1,1),rational(0,1)]])
function=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1)])
print(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))
# If the function has more columns (variables) than the matrix, it returns None
colsOfIteration=np.array([3,4,1])
totalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],
[rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),
rational(7,1),rational(1,1),rational(0,1)]])
function=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1),rational(7,1)])
print(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))
# If the first or second parameter is not a numpy array (the second must contain rational elements), or the third
# is not a numpy matrix of rational elements, it returns None
colsOfIteration=np.array([3,4,1])
totalMatrix = np.matrix([[2,3,4,0,1],[3,4,7,0,0],[2,6,7,1,0]])
function=np.array([3,6,-7,0,0,4])
print(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))
"""
Explanation: calculateSolutionOfDualProblem
This method receives the columns (variables) of the problem's final iteration in a numpy array, the function vector in its maximization form in a numpy array, and the initial matrix with the problem's constraints in a numpy matrix. Both the matrix and the function must be in standard form. If the parameters are valid, it returns the solution of the dual problem in a numpy array. If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
# If everything passed is valid, it returns a function and a string describing it
lineOfMatrix=np.array([rational(3,4),rational(2,1)])
sign="<="
resource=rational(4,1)
x = np.linspace(0, 10)
Simplex.convertToPlotFunction(lineOfMatrix, sign, resource, x)
# If the constraint's second component is 0, it returns a number
lineOfMatrix=np.array([rational(3,4),rational(0,1)])
sign="<="
resource=rational(4,1)
x = np.linspace(0, 10)
Simplex.convertToPlotFunction(lineOfMatrix, sign, resource, x)
# If the constraint does not have exactly 2 components, it returns None
lineOfMatrix=np.array([rational(3,4)])
print(Simplex.convertToPlotFunction(lineOfMatrix, sign,
resource, x))
# If the first parameter is not a numpy array of rational elements, the second is not a string, the third is not
# a rational, or the fourth is not a numpy array, it returns None
print(Simplex.convertToPlotFunction(lineOfMatrix, sign,
4, x))
"""
Explanation: Graphical solution
convertToPlotFunction
This method turns a constraint into a function so it can be plotted. It receives a numpy array containing the constraint (all coefficients must be rational) without sign or resource, a string containing the sign, a rational with the resource, and a variable that is the linspace used for plotting. Besides the function itself, it returns a string describing it. If the y coefficient of the constraint is 0, it returns a rational instead of a function. If the parameters are invalid (see examples), it returns None. Examples:
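The conversion amounts to solving a·x + b·y = r for y, i.e. y = (r - a·x)/b, falling back to the constant x = r/a when b is 0. A sketch with `fractions.Fraction` in place of `rational` (hypothetical helper name):

```python
from fractions import Fraction

def constraint_to_plot_function(coeffs, resource):
    """Rewrite a*x + b*y = r as y = (r - a*x) / b; if b == 0, return the constant r/a."""
    a, b = coeffs
    if b == 0:
        return resource / a                     # vertical line x = r/a
    return lambda x: (resource - a * x) / b     # y as a function of x

f = constraint_to_plot_function((Fraction(3, 4), Fraction(2)), Fraction(4))
print(f(0))   # y-intercept: 2
print(constraint_to_plot_function((Fraction(3, 4), Fraction(0)), Fraction(4)))  # 16/3
```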
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
function=lambda x: 3*x+1
x=np.linspace(0, 10)
label="3x+1 = 2"
Simplex.showFunction(function, x, label)
plt.show()
# A number can be passed when the function is of the form y=n
x=np.linspace(0, 10)
label="3x+1 = 2"
Simplex.showFunction(4,x, label)
plt.show()
# If the first parameter is not a function or a number, the second is not a numpy array, or the third is not a
# string, it returns None
print(Simplex.showFunction(np.array([3,4,5]),x, label))
"""
Explanation: showFunction
This method receives a function and plots it. It receives a function (or a number, if the function is of the form y=n), a variable that is the linspace used for plotting, and a string used as the function's label. After calling this method, plt.show() must be called. If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
# As we can see, in this case it removes a repeated point
seq=[(rational(2,1),rational(3,4)),(rational(6,1),rational(7,4)),(rational(2,1),rational(3,4)),(rational(5,2),rational(3,4)),]
Simplex.eliminateRepeatedPoints(seq)
# It works exactly with integers
seq=[(3,1),(4,5),(4,5),(2,1)]
Simplex.eliminateRepeatedPoints(seq)
# With floats it is not exact
seq=[(3.0,1.1),(4.0,5.0),(4.000001,5.0),(2.0,1.0)]
Simplex.eliminateRepeatedPoints(seq)
# If the input is not a list, it returns None
print(Simplex.eliminateRepeatedPoints(4))
"""
Explanation: eliminateRepeatedPoints
This method receives a list of points (as tuples) and returns the same list with repeated points removed. It is exact with integers and rational, but not with floats when numbers have many decimals, since it could treat, for example, 5.33333 and 5.33334 as two different numbers when they might be the same. If the input is not a list, it returns None. Examples:
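A sketch of order-preserving deduplication (exact for integers and rationals because tuple equality is exact; floats remain subject to rounding; `deduplicate_points` is a hypothetical name):

```python
def deduplicate_points(points):
    """Remove duplicate points while keeping first-occurrence order."""
    seen = set()
    unique = []
    for p in points:
        if p not in seen:
            seen.add(p)
            unique.append(p)
    return unique

print(deduplicate_points([(3, 1), (4, 5), (4, 5), (2, 1)]))  # [(3, 1), (4, 5), (2, 1)]
```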
End of explanation
"""
# It works exactly with integers
list1=[(3,1),(4,5),(6,7)]
list2=[(2,5),(4,5),(4,8)]
Simplex.eliminatePoints(list1, list2)
# It works exactly with rational
list1=[rational(5,1),rational(2,5),rational(6,1)]
list2=[rational(8,7),rational(2,5),rational(10,8)]
Simplex.eliminatePoints(list1, list2)
# With floats it is not exact
list1=[(3.0,1.0),(4.0,5.0),(6.0,7.0)]
list2=[(2.0,5.0),(4.000001,5.0),(4.0,8.0)]
Simplex.eliminatePoints(list1, list2)
# If it does not receive two lists, it returns None
print(Simplex.eliminatePoints(3, list2))
"""
Explanation: eliminatePoints
This method receives two lists and returns a list with the elements of the first list that are not in the second. It can be used to remove points (tuples) or any other elements. As with the previous method, it is not exact with floats. If it does not receive two lists, it returns None. Examples:
End of explanation
"""
functionVector=np.array([rational(2,1),rational(3,1)])
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
solution = rational(19,4)
Simplex.calculatePointOfSolution(functionVector, points, solution)
functionVector=np.array([rational(2,1),rational(3,1)])
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
solution = rational(18,3)
print(Simplex.calculatePointOfSolution(functionVector, points, solution))
# If the first parameter is not a numpy array, the second is not a list of rational points, or the third is not
# a rational, it returns None
print(Simplex.calculatePointOfSolution(functionVector, points, 3.0))
"""
Explanation: calculatePointOfSolution
This method receives a numpy array with the coefficients of the function to optimize (in maximization form), a list of points whose coordinates are rational, and a rational with the optimized value of the objective function. The method returns the point that attains the given value. If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
function="max 2 3"
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
sol=Simplex.calculateSolution(function, points)
print(sol[0])
print(sol[1])
function="min 2 3"
points=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]
sol=Simplex.calculateSolution(function, points)
print(sol[0])
print(sol[1])
# If the list is empty, it returns None
print(Simplex.calculateSolution(function,[]))
# If the first parameter is not a string or the second is not a list of rational points, it returns None
print(Simplex.calculateSolution(function, 4))
"""
Explanation: calculateSolution
This method receives a function to optimize as a string, in the format shown in the examples, and a set of points whose coordinates are rational. The method returns the optimized value of the function and which of the given points attains it. If the list has no points, it returns None. If the parameters are invalid (see examples), it returns None. Examples:
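The vertex-evaluation idea can be sketched with `fractions.Fraction` (hypothetical `best_vertex` helper): parse the objective string, evaluate it at every candidate point, and take the max or min.

```python
from fractions import Fraction

def best_vertex(objective, points):
    """objective: a 'max 2 3'-style string; points: candidate vertices.
    Returns (optimal value, optimizing point), or None for an empty list."""
    if not points:
        return None
    parts = objective.split()
    goal, coeffs = parts[0], [Fraction(t) for t in parts[1:]]
    value = lambda p: sum(c * x for c, x in zip(coeffs, p))
    pick = max if goal == "max" else min
    best = pick(points, key=value)
    return value(best), best

points = [(Fraction(2), Fraction(3, 4)), (Fraction(1, 4), Fraction(6))]
print(best_vertex("max 2 3", points))
```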
End of explanation
"""
line1=np.array([rational(2,1),rational(3,4)])
line2=np.array([rational(8,3),rational(7,9)])
resource1=rational(3,1)
resource2=rational(4,1)
point=Simplex.intersectionPoint(line1, line2, resource1, resource2)
print("("+str(point[0])+","+str(point[1])+")")
# If there is no intersection point, it returns None
line1=np.array([rational(2,1),rational(3,4)])
line2=np.array([rational(2,1),rational(3,4)])
resource1=rational(3,1)
resource2=rational(4,1)
print(Simplex.intersectionPoint(line1, line2, resource1, resource2))
# If the first two parameters are not rational arrays of length 2, or the last two are not rational, it returns
# None
print(Simplex.intersectionPoint(3, line2, resource1, resource2))
"""
Explanation: intersectionPoint
This method computes the intersection point of two constraints of type "=". It receives two numpy arrays whose components must be rational, containing the coefficients of the constraints, and the resource of each constraint as two rationals. If there is no intersection point between them, it returns None. If the parameters are invalid (see examples), it returns None. Examples:
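The intersection of a1·x + b1·y = r1 and a2·x + b2·y = r2 can be computed by Cramer's rule, where a zero determinant means the lines are parallel. A sketch with `fractions.Fraction` in place of `rational` (hypothetical helper name):

```python
from fractions import Fraction

def intersection(line1, r1, line2, r2):
    """Solve a1*x + b1*y = r1, a2*x + b2*y = r2 by Cramer's rule; None if parallel."""
    a1, b1 = line1
    a2, b2 = line2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                     # parallel (or identical) lines
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return (x, y)

p = intersection((Fraction(2), Fraction(3, 4)), Fraction(3),
                 (Fraction(8, 3), Fraction(7, 9)), Fraction(4))
print(p)  # (3/2, 0)
```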
End of explanation
"""
points=[(rational(4,2),rational(-3,4)),(rational(5,4),rational(6,-8)),(rational(1,4),rational(6,1))]
Simplex.eliminateNegativePoints(points)
# If the input is not a list of rational points, it returns None
points=[(4,2),(6,-8),(6,1)]
print(Simplex.eliminateNegativePoints(points))
"""
Explanation: eliminateNegativePoints
This method receives a list of points whose coordinates are rational and returns the list without the points that have negative coordinates. If the input is not a list of rational points, it returns None. Examples:
End of explanation
"""
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])
resources=np.array([rational(3,1),rational(2,1),rational(4,1)])
Simplex.calculateAllIntersectionPoints(matrix, resources)
# If the number of constraints differs from the number of resources, it returns None
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])
resources=np.array([rational(3,1),rational(2,1)])
print(Simplex.calculateAllIntersectionPoints(matrix, resources))
# If it receives something that is not a numpy array of rational elements, it returns None
print(Simplex.calculateAllIntersectionPoints(matrix, 4))
"""
Explanation: calculateAllIntersectionPoints
This method receives a numpy array of arrays with all the constraints (without signs or resources) and a numpy array with the resource of each constraint. The method returns, in a list, all the intersection points between the constraints, and between the constraints and the positive coordinate axes. It also adds the point (0,0). If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])
resources=np.array([rational(3,1),rational(2,1),rational(4,1)])
constX= rational(10,1)
constY= rational(8,1)
Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, constX, constY)
# If the number of rows of the matrix differs from the length of the resource vector, it returns None
matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)]])
resources=np.array([rational(3,1),rational(2,1),rational(4,1)])
constX= rational(10,1)
constY= rational(8,1)
print(Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, constX, constY))
# If the first two parameters are not numpy arrays of rational elements, or the last two are not rational, it
# returns None
print(Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, np.array([rational(4,5)]), constY))
"""
Explanation: calculateNotBoundedIntersectionPoints
This method receives a numpy array of arrays with all the constraints (without signs or resources), a numpy array with the resource of each constraint, and the maximum x and y values to be plotted, as two rationals. The method returns, in a list, the intersection points between the constraints and the imaginary axes placed at the maximum plotted values. For example, if constX=3 and constY=4 are passed, it returns the intersection points between the constraints and the lines x=3 and y=4. It also adds the intersection point of the two hypothetical axes (in the previous example, the point (3,4)). If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
# If the inequality holds
inecuation=np.array([3,4])
solution=(1,1)
sign=">="
resource=6
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# It also works with rational
inecuation=np.array([rational(3,2),rational(4,3)])
solution=(rational(2,1),rational(1,1))
sign="<="
resource=rational(5,1)
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# If the inequality does not hold
inecuation=np.array([3,4])
solution=(1,1)
sign="="
resource=6
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# It is not exact with floats
inecuation=np.array([3.0,4.0])
solution=(1.0,1.0)
sign="="
resource=7.00001
Simplex.checkIfIsSolution(inecuation, solution, sign, resource)
# If the first parameter is not a numpy array of length 2, the second is not a tuple, the third is not a string,
# or the last is not a number, it returns None
print(Simplex.checkIfIsSolution(inecuation, solution, sign,np.array([3,4])))
"""
Explanation: checkIfIsSolution
This method receives a constraint (its coefficients in a numpy array), the candidate solution as a tuple, the sign as a string, and the resource as a number. The method returns True if the solution satisfies the constraint, and False otherwise. It works exactly with integers and rational, but not with floats. If the parameters are invalid (see examples), it returns None. Examples:
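A sketch of the check in plain Python (hypothetical `satisfies` helper): evaluate the dot product of coefficients and point, then compare it against the resource with the requested operator.

```python
import operator

def satisfies(coeffs, point, sign, resource):
    """True if the point satisfies coeffs . point (sign) resource."""
    ops = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
           ">=": operator.ge, "=": operator.eq}
    lhs = sum(c * x for c, x in zip(coeffs, point))
    return ops[sign](lhs, resource)

print(satisfies((3, 4), (1, 1), ">=", 6))  # True: 3 + 4 = 7 >= 6
print(satisfies((3, 4), (1, 1), "=", 6))   # False: 7 != 6
```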
End of explanation
"""
# The method works with rational values, removing the points that do not belong to the feasible region
points=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),
(rational(-30,1),rational(1,2))]
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
Simplex.calculateFeasibleRegion(points, inecuations, resources, sign)
# The method works with integer values, removing the points that do not belong to the feasible region
points=[(0,5),(5,0),(10,12),(-30,1)]
inecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])
resources=np.array([50,32,40])
sign=["<=","<=","<="]
Simplex.calculateFeasibleRegion(points, inecuations, resources, sign)
# The number of constraints must equal the number of signs and resources
points=[(0,5),(5,0),(10,12),(-30,1)]
inecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])
resources=np.array([50,32])
sign=["<=","<=","<="]
print(Simplex.calculateFeasibleRegion(points, inecuations, resources, sign))
# If the first parameter is not a list, the second and third are not numpy arrays, or the fourth is not a list
# of strings, it returns None
inecuations=np.matrix([np.array([2,1]),np.array([1,-1]),np.array([5,2])])
print(Simplex.calculateFeasibleRegion(points, inecuations, resources, sign))
"""
Explanation: calculateFeasibleRegion
This method receives a set of points in a list, a set of constraints in a numpy array (without signs or resources), a numpy array with the resources, and a list of strings with the signs. The method returns the points from the input list that satisfy all the constraints, i.e. belong to the feasible region. It works with both rational and integers, but is not as exact with floats. If no point belongs to the feasible region, it returns an empty list. If the parameters are invalid (see examples), it returns None. Examples:
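A sketch of the filtering idea (hypothetical `feasible_points` helper): a point is kept only if it satisfies every constraint.

```python
import operator

OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "=": operator.eq}

def feasible_points(points, rows, resources, signs):
    """Keep only the points that satisfy every constraint."""
    def ok(p):
        return all(OPS[s](sum(a * x for a, x in zip(row, p)), b)
                   for row, b, s in zip(rows, resources, signs))
    return [p for p in points if ok(p)]

points = [(0, 5), (5, 0), (10, 12), (-30, 1)]
rows = [(-7, 10), (2, 1), (8, -7)]
resources = [50, 32, 40]
signs = ["<=", "<=", "<="]
print(feasible_points(points, rows, resources, signs))  # (-30, 1) is removed
```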
End of explanation
"""
points=[(4,3),(5,6),(1,-2)]
Simplex.calculateMaxScale(points)
points=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),
(rational(-30,1),rational(1,2))]
Simplex.calculateMaxScale(points)
points=[(4.6,3.7),(5.0,6.5),(1.2,-2.5)]
Simplex.calculateMaxScale(points)
# If the input is not a list, it returns None
print(Simplex.calculateMaxScale(3))
"""
Explanation: calculateMaxScale
This method receives a list of points and returns the maximum value of the x coordinate and of the y coordinate. It is used to determine the maximum point that must be plotted. If the input is not a list, it returns None. Examples:
End of explanation
"""
points=[(4,3),(5,6),(1,-2)]
Simplex.calculateMinScale(points)
points=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),
(rational(-30,1),rational(1,2))]
Simplex.calculateMinScale(points)
points=[(4.6,3.7),(5.0,6.5),(1.2,-2.5)]
Simplex.calculateMinScale(points)
# If the input is not a list, it returns None
print(Simplex.calculateMinScale(3))
"""
Explanation: calculateMinScale
This method receives a list of points and returns the minimum value of the x coordinate and of the y coordinate. It is used to determine the minimum point that must be plotted. If the input is not a list, it returns None. Examples:
End of explanation
"""
point=(rational(0,1),rational(5,1))
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)
point=(rational(-30,1),rational(1,2))
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)
# The method also works with integer values
point=(0,5)
inecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])
resources=np.array([50,32,40])
sign=["<=","<=","<="]
Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)
# The number of constraints must equal the number of signs and resources
point=(0,5)
inecuations=np.array([np.array([-7,10]),np.array([2,1])])
resources=np.array([50,32,40])
sign=["<=","<=","<="]
print(Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign))
# If the first parameter is not a tuple, the second and third are not numpy arrays, or the fourth is not a list
# of strings, it returns None
print(Simplex.checkIfPointInFeasibleRegion(4, inecuations, resources, sign))
"""
Explanation: checkIfPointInFeasibleRegion
This method receives a point as a tuple, a set of constraints in a numpy array (without signs or resources), a numpy array with the resources, and a list of strings with the signs. The method returns True if the point satisfies all the constraints, i.e. belongs to the feasible region, and False otherwise. It works with both rational and integers, but is not as exact with floats. If the parameters are invalid (see examples), it returns None. Examples:
End of explanation
"""
# Points computed with rational values
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<=","<="]
scale1=(rational(0,1),rational(0,1))
scale=(rational(10,1),rational(10,1))
Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1,scale)
# The number of constraints must equal the number of signs and the number of resources
inecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),
np.array([rational(8,1),rational(-7,1)])])
resources=np.array([rational(50,1),rational(32,1),rational(40,1)])
sign=["<=","<="]
scale=(rational(10,1),rational(10,1))
print(Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1, scale))
# If the first and second parameters are not numpy arrays of rational, the third is not a list of strings,
# or the last parameter is not a tuple, the method returns None
print(Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1, 4))
"""
Explanation: calculateIntegerPoints
This method receives a set of constraints (without signs or resources) as a numpy array, a numpy array with the resources, a list of strings with the signs, and two tuples with the minimum and maximum points to consider. It returns a list of all integer points that belong to the feasible region and are not greater than the maximum point. All elements of the constraints, resources and tuples must be rational. If the parameters passed in are not valid (see the examples), it returns None. Examples:
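A brute-force version of the same idea can be sketched with the standard library: enumerate every integer point in the bounding box and keep the ones that satisfy all constraints (illustrative names, not the Simplex API):

```python
from fractions import Fraction
from itertools import product

def satisfies(point, constraints, resources, signs):
    for row, b, sign in zip(constraints, resources, signs):
        lhs = sum(Fraction(c) * x for c, x in zip(row, point))
        if (sign == "<=" and lhs > b) or (sign == ">=" and lhs < b):
            return False
    return True

def integer_points(constraints, resources, signs, low, high):
    """All integer points inside the box [low, high] that satisfy the constraints."""
    ranges = [range(int(lo), int(hi) + 1) for lo, hi in zip(low, high)]
    return [p for p in product(*ranges)
            if satisfies(p, constraints, resources, signs)]

pts = integer_points([(-7, 10), (2, 1), (8, -7)], [50, 32, 40],
                     ["<=", "<=", "<="], (0, 0), (10, 10))
print(len(pts))
```

The box enumeration grows quickly with the scale, so this is only practical for small ranges like the (10, 10) example above.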
End of explanation
"""
points=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2),rational(4,5)),
(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
point=Simplex.centre(points)
print("("+str(point[0])+","+str(point[1])+")")
# If it receives anything other than a list of rational points, it returns None
points=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]
print(Simplex.centre(points))
"""
Explanation: centre
This method receives a list of points and returns the point at the centre of the polygon they form. The coordinates of the points must be rational. If anything other than a list of rational points is passed in, it returns None. Examples:
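The centre of a set of points is just the arithmetic mean of their coordinates, which can be computed exactly with Fraction (a sketch; the real method works on the tutorial's rational type):

```python
from fractions import Fraction

def centre(points):
    """Exact arithmetic mean of the points' coordinates."""
    n = Fraction(len(points))
    x = sum(Fraction(p[0]) for p in points) / n
    y = sum(Fraction(p[1]) for p in points) / n
    return (x, y)

print(centre([(0, 0), (2, 0), (2, 2), (0, 2)]))  # the centre of this square is (1, 1)
```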
End of explanation
"""
listPoints=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2)
,rational(4,5)),(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
M = (1.811574074074074,1.1288359788359787)
value = 2.7299657524245156
point=Simplex.isThePoint(listPoints, value, M)
print("("+str(point[0])+","+str(point[1])+")")
# If the first parameter is not a list of rational points, the second is not a number, or the third is not a tuple,
# it returns None (check whether floats are accepted for the centre)
print(Simplex.isThePoint(listPoints, value, 4))
"""
Explanation: isThePoint
This method receives a list of points whose coordinates are rational, a value (the computed distance to the centre), and the centre of the points in the list. It returns the point in the list whose distance to the centre equals the given value. If no point matches that distance, it returns None. If the parameters passed in are not valid (see the examples), it also returns None. Examples:
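Since the distance to the centre involves a square root, a float comparison with a small tolerance is the natural way to sketch this matching step (illustrative, not the Simplex API):

```python
import math

def is_the_point(points, value, centre, tol=1e-9):
    """Return the first point whose distance to the centre equals value, else None."""
    for p in points:
        if math.isclose(math.hypot(p[0] - centre[0], p[1] - centre[1]), value, abs_tol=tol):
            return p
    return None

print(is_the_point([(1, 1), (3, 4)], 5.0, (0, 0)))  # (3, 4) lies at distance 5 from the origin
```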
End of explanation
"""
listPoints=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2),
rational(4,5)), (rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]
Simplex.calculateOrder(listPoints)
# If it receives anything other than a list of points with rational coordinates, it returns None
listPoints=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]
print(Simplex.calculateOrder(listPoints))
"""
Explanation: calculateOrder
This method receives a list of points whose coordinates are rational and returns the same list of points sorted in clockwise order. If anything other than a list of rational points is passed in, it returns None. Examples:
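Clockwise ordering is usually done by sorting the points on their angle around the centroid; the shoelace formula gives a handy check, since a clockwise polygon has negative signed area (an illustrative sketch, not the Simplex implementation):

```python
import math

def clockwise_order(points):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # atan2 increases counter-clockwise, so sort on the negated angle
    return sorted(points, key=lambda p: -math.atan2(p[1] - cy, p[0] - cx))

def signed_area(poly):
    # Shoelace formula: negative for a clockwise vertex order
    return sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
               - poly[(i + 1) % len(poly)][0] * poly[i][1]
               for i in range(len(poly))) / 2

ordered = clockwise_order([(0, 0), (2, 0), (0, 2), (2, 2)])
print(signed_area(ordered))  # negative, so the order is clockwise
```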
End of explanation
"""
# If the point lies on the line, it returns True
point = (3,4)
line = np.array([3,2])
resource = 17
Simplex.pointIsInALine(point, line, resource)
# The method also works with rational values
point = (rational(3,1),rational(4,2))
line = np.array([rational(3,3),rational(2,1)])
resource = rational(7,1)
Simplex.pointIsInALine(point, line, resource)
# If the point is not on the line, it returns False
point = (3,4)
line = np.array([3,2])
resource = 10
Simplex.pointIsInALine(point, line, resource)
# The method is not exact with floats
point = (3.0,4.0)
line = np.array([3.0,2.0])
resource = 17.00001
Simplex.pointIsInALine(point, line, resource)
# If the first parameter is not a tuple, the second is not a numpy array, or the third is not a number, the
# method returns None
print(Simplex.pointIsInALine(point, 3, resource))
"""
Explanation: pointIsInALine
This method receives a point as a tuple, a constraint (without sign or resource) as a numpy array, and the resource as a number. It returns True if the point lies on the line that the constraint defines in the plane, and False otherwise. If the parameters passed in are not valid (see the examples), it returns None. Examples:
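Membership on the line reduces to an exact dot-product equality a·x == b, which is why the test is reliable for integers and rationals but not for floats; a small sketch with Fraction (names are illustrative):

```python
from fractions import Fraction

def point_on_line(point, coefficients, resource):
    """True when the constraint holds with equality at the point."""
    lhs = sum(Fraction(c) * Fraction(x) for c, x in zip(coefficients, point))
    return lhs == resource

print(point_on_line((3, 4), (3, 2), 17))  # 3*3 + 2*4 == 17
print(point_on_line((3, 4), (3, 2), 10))
```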
End of explanation
"""
# Removes the last point, which lies on one of the lines
listPoints=[(rational(3,1),rational(5,7)),(rational(5,8),rational(6,2)),(rational(4,6),rational(8,9)),(rational(8,1),
rational(2,1))]
matrix=np.array([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
Simplex.deleteLinePointsOfList(listPoints, matrix, resources)
# If the first parameter is not a list of points with rational coordinates, or the second and third are not
# numpy arrays with rational elements, the method returns None
print(Simplex.deleteLinePointsOfList(listPoints, 4, resources))
"""
Explanation: deleteLinePointsOfList
This method receives a set of points as a list, a numpy array with a set of constraints (without signs or resources), and a numpy array with the resources of those constraints. It returns the list of points minus any point lying on the line defined by one of the given constraints. If the parameters passed in are not valid (see the examples), it returns None. Examples:
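Filtering the boundary points can be sketched as a list comprehension that drops any point at which some constraint holds with equality (illustrative names, not the Simplex API):

```python
from fractions import Fraction

def on_any_line(point, rows, resources):
    return any(sum(Fraction(c) * Fraction(x) for c, x in zip(row, point)) == b
               for row, b in zip(rows, resources))

def drop_line_points(points, rows, resources):
    """Keep only the points that sit strictly off every constraint line."""
    return [p for p in points if not on_any_line(p, rows, resources)]

# (8, 2) lies on 2x + y = 18 and (9, 1) on x - y = 8, so only (0, 0) survives
print(drop_line_points([(8, 2), (0, 0), (9, 1)], [(2, 1), (1, -1)], [18, 8]))
```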
End of explanation
"""
%matplotlib inline
matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])
resources=np.array([rational(18,1),rational(8,1),rational(0,1)])
signs=["<=","<=",">="]
function="max 2 1"
save= False
Simplex.showProblemSolution(matrix, resources, signs, function, save)
# If the number of signs differs from the length of the resources vector or from the number of rows of the matrix,
# it returns None
matrix=np.matrix([[2,1],[1,-1],[5,2]])
resources=np.array([[18],[8]])
signs=["<=","<=",">="]
function="max 2 1"
save=False
print(Simplex.showProblemSolution(matrix, resources, signs, function, save))
# If the first parameter is not a numpy matrix with rational elements, the second is not a numpy array with
# rational elements, the third is not a list of strings, the fourth is not a string, or the fifth is not False
# or a string, the method returns None
matrix=np.matrix([[2,1],[1,-1],[5,2]])
resources=np.array([[18],[8],[4]])
signs=["<=","<=",">="]
function="max 2 1"
print(Simplex.showProblemSolution(matrix, resources, signs, function, False))
"""
Explanation: showProblemSolution
This method solves the linear programming problem passed to it graphically. It receives a numpy matrix with the constraints (without signs or resources), a numpy array with the resources, a list of strings with the constraint signs, a string with the objective function in the format "max/min 2 -3", and either False or a file name that determines whether the figure should be saved under that name. The method shows the graphical solution as long as the problem has only 2 variables; otherwise it returns None. The problem does not need to be given in standard form. If the parameters passed in are not valid (see the examples), it returns None. Examples:
End of explanation
"""
|
poppy-project/pypot | samples/notebooks/QuickStart playing with a PoppyErgo.ipynb | gpl-3.0 | from pypot.creatures import PoppyErgo
ergo = PoppyErgo()
"""
Explanation: QuickStart: Playing with a Poppy Ergo (or a PoppyErgoJr)
This notebook is still a work in progress! Feedback is welcome!
In this tutorial we will show how to get started with your PoppyErgo creature. You can use a PoppyErgoJr instead.
<img src="https://raw.githubusercontent.com/poppy-project/poppy-ergo-jr/master/doc/img/poppy-ergo-jr.jpg" alt="Poppy Ergo Jr" style="height: 500px;"/>
To run the code in this notebook, you will need:
* a poppy ergo creature (or a Jr)
* the pypot library
* the poppy-ergo library (or use the poppy-ergo-jr library instead)
You can install those libraries with the pip tool (see here if you don't know how to run this):
```bash
pip install pypot poppy-ergo
```
Connect to your robot
For a PoppyErgo:
End of explanation
"""
from pypot.creatures import PoppyErgoJr
ergo = PoppyErgoJr()
"""
Explanation: For a PoppyErgoJr:
End of explanation
"""
ergo
ergo.m2
ergo.m2.present_position
ergo.m2.present_temperature
for m in ergo.motors:
    print('Motor "{}" current position = {}'.format(m.name, m.present_position))
"""
Explanation: Get robot current status
End of explanation
"""
ergo.m3.compliant
ergo.m6.compliant = False
"""
Explanation: Turn on/off the compliancy of a motor
End of explanation
"""
ergo.m6.goal_position = 0.
for m in ergo.motors:
    m.compliant = False
    # Goes to the position 0 in 2s
    m.goto_position(0, 2)
# You can also change the maximum speed of the motors
# Warning! Goto position also changes the maximum speed.
for m in ergo.motors:
    m.moving_speed = 50
"""
Explanation: Go to the zero position
End of explanation
"""
import time
ergo.m4.goal_position = 30
time.sleep(1.)
ergo.m4.goal_position = -30
"""
Explanation: Make a simple dance movement
On a single motor:
End of explanation
"""
ergo.m4.goal_position = 30
ergo.m5.goal_position = 20
ergo.m6.goal_position = -20
time.sleep(1.)
ergo.m4.goal_position = -30
ergo.m5.goal_position = -20
ergo.m6.goal_position = 20
"""
Explanation: On multiple motors:
End of explanation
"""
def dance():
    ergo.m4.goal_position = 30
    ergo.m5.goal_position = 20
    ergo.m6.goal_position = -20
    time.sleep(1.)
    ergo.m4.goal_position = -30
    ergo.m5.goal_position = -20
    ergo.m6.goal_position = 20
    time.sleep(1.)
dance()
for _ in range(4):
    dance()
"""
Explanation: Wrap it inside a function for convenience:
End of explanation
"""
def dance2():
    ergo.goto_position({'m4': 30, 'm5': 20, 'm6': -20}, 1., wait=True)
    ergo.goto_position({'m4': -30, 'm5': -20, 'm6': 20}, 1., wait=True)
for _ in range(4):
    dance2()
"""
Explanation: Using goto position instead:
End of explanation
"""
|
LeeBergstrand/pygenprop | docs/source/_static/tutorial/tutorial.ipynb | apache-2.0 | import requests
from io import StringIO
from pygenprop.results import GenomePropertiesResults, GenomePropertiesResultsWithMatches, \
load_assignment_caches_from_database, load_assignment_caches_from_database_with_matches
from pygenprop.database_file_parser import parse_genome_properties_flat_file
from pygenprop.assignment_file_parser import parse_interproscan_file, \
parse_interproscan_file_and_fasta_file
from sqlalchemy import create_engine
# Genome Properties is a flat-file database that can be found on GitHub.
# The latest release of the database can be found at the following URL:
genome_properties_database_url = 'https://raw.githubusercontent.com/ebi-pf-team/genome-properties/master/flatfiles/genomeProperties.txt'
# For this tutorial, we will stream the file directly into the Jupyter notebook. Alternatively,
# one could download the file with UNIX wget or curl commands and open it from the file system.
with requests.Session() as current_download:
    response = current_download.get(genome_properties_database_url, stream=True)
    tree = parse_genome_properties_flat_file(StringIO(response.text))
# There are 1286 properties in the Genome Properties tree.
len(tree)
# Find all properties of type "GUILD".
for genome_property in tree:
    if genome_property.type == 'GUILD':
        print(genome_property.name)
# Get property by identifier
virulence = tree['GenProp0074']
virulence
# Iterate to get the identifiers of child properties of virulence
types_of_vir = [genprop.id for genprop in virulence.children]
# Iterate the names of steps of Type III Secretion
steps_of_type_3_secretion = [step.name for step in virulence.children[0].steps]
steps_of_type_3_secretion
"""
Explanation: Introduction to Pygenprop
A Python library for interactive programmatic usage of Genome Properties
InterProScan files used in this tutorial can be found at:
- https://raw.githubusercontent.com/Micromeda/pygenprop/master/docs/source/_static/tutorial/E_coli_K12.tsv
- https://raw.githubusercontent.com/Micromeda/pygenprop/master/docs/source/_static/tutorial/E_coli_K12.faa
- https://raw.githubusercontent.com/Micromeda/pygenprop/master/docs/source/_static/tutorial/E_coli_O157_H7.tsv
- https://raw.githubusercontent.com/Micromeda/pygenprop/master/docs/source/_static/tutorial/E_coli_O157_H7.faa
Creation and use of GenomePropertyTree objects
GenomePropertyTree objects allow for the programmatic exploration of the Genome properties database.
End of explanation
"""
# Parse InterProScan files
with open('E_coli_K12.tsv') as ipr5_file_one:
    assignment_cache_1 = parse_interproscan_file(ipr5_file_one)
with open('E_coli_O157_H7.tsv') as ipr5_file_two:
    assignment_cache_2 = parse_interproscan_file(ipr5_file_two)
# Create results comparison object
results = GenomePropertiesResults(assignment_cache_1, assignment_cache_2, properties_tree=tree)
results.sample_names
# The property results property is used to compare two property assignments across samples.
results.property_results
# The step results property is used to compare two step assignments across samples.
results.step_results
# Get properties with differing assignments
results.differing_property_results
# Get property assignments for virulence properties
results.get_results(*types_of_vir, steps=False)
# Get step assignments for virulence properties
results.get_results(*types_of_vir, steps=True)
# Get counts of virulence properties assigned YES, NO, and PARTIAL per organism
results.get_results_summary(*types_of_vir, steps=False, normalize=False)
# Get counts of virulence steps assigned YES, NO, and PARTIAL per organism
results.get_results_summary(*types_of_vir, steps=True, normalize=False)
# Get percentages of virulence steps assigned YES, NO, and PARTIAL per organism
results.get_results_summary(*types_of_vir, steps=True, normalize=True)
"""
Explanation: Creation and use of GenomePropertiesResults objects
GenomePropertiesResults are used to compare property and step assignments across organisms programmatically.
End of explanation
"""
# Parse InterProScan files and FASTA files
with open('./E_coli_K12.tsv') as ipr5_file_one:
    with open('./E_coli_K12.faa') as fasta_file_one:
        extended_cache_one = parse_interproscan_file_and_fasta_file(ipr5_file_one, fasta_file_one)
# Parse InterProScan files and FASTA files
with open('./E_coli_O157_H7.tsv') as ipr5_file_two:
    with open('./E_coli_O157_H7.faa') as fasta_file_two:
        extended_cache_two = parse_interproscan_file_and_fasta_file(ipr5_file_two, fasta_file_two)
# Create a GenomePropertiesResultsWithMatches from the assignment caches.
extended_results = GenomePropertiesResultsWithMatches(extended_cache_one,
                                                      extended_cache_two,
                                                      properties_tree=tree)
# GenomePropertiesResultsWithMatches objects possess the same
# assignment comparison methods as GenomePropertiesResults objects
extended_results.property_results
extended_results.step_results
# Get the matches and protein sequences that support properties and steps assigned YES across both organisms.
extended_results.step_matches
# Get all matches for E_coli_K12
extended_results.get_sample_matches('E_coli_K12', top=False)
type_three_secretion_property_id = types_of_vir[0] # From section above.
# Get all matches for each Type III Secretion step across both organisms.
extended_results.get_property_matches(type_three_secretion_property_id)
# Get the lowest E-value matches for each Type III Secretion step for E_coli_O157_H7.
extended_results.get_property_matches(type_three_secretion_property_id, sample='E_coli_O157_H7', top=True)
# Get the lowest E-value matches for step 22 of Type III Secretion property across both organisms.
extended_results.get_step_matches(type_three_secretion_property_id, 22, top=True)
# Get all matches for step 22 of the Type III Secretion property for E. coli K12.
extended_results.get_step_matches(type_three_secretion_property_id, 22, top=False, sample='E_coli_K12')
# Get skbio protein objects for a particular step.
extended_results.get_supporting_proteins_for_step(type_three_secretion_property_id, 22, top=True)
"""
Explanation: Creation and use of GenomePropertiesResultsWithMatches objects
The GenomePropertiesResultsWithMatches class is an extension of GenomePropertiesResults class whose objects provide additional methods for comparing the InterProScan match information and protein sequences that support the existence of property steps.
End of explanation
"""
# Write FASTA file containing the sequences of the lowest E-value matches for
# Type III Secretion System component 22 across both organisms.
with open('type_3_step_22_top.faa', 'w') as out_put_fasta_file:
    extended_results.write_supporting_proteins_for_step_fasta(out_put_fasta_file,
                                                              type_three_secretion_property_id,
                                                              22, top=True)
# Write FASTA file containing the sequences of all matches for
# Type III Secretion System component 22 across both organisms.
with open('type_3_step_22_all.faa', 'w') as out_put_fasta_file:
    extended_results.write_supporting_proteins_for_step_fasta(out_put_fasta_file,
                                                              type_three_secretion_property_id,
                                                              22, top=False)
"""
Explanation: A note on Scikit-Bio
Scikit-Bio is a numpy-based bioinformatics library that is a competitor to BioPython. Because it is numpy-based, it is quite fast and can be used to perform operations such as generating sequence alignments or producing phylogenetic trees. Pygenprop integrates Scikit-Bio for reading and writing FASTA files and the get_supporting_proteins_for_step() function of GenomePropertiesResultsWithMatches objects returns a list of Scikit-Bio Sequence objects. These Sequence objects can be aligned using Scikit-Bio and later incorporated into phylogenetic trees that compare the proteins that support a pathway step. Alignment and tree construction can be performed inside a Jupyter Notebook.
See the following documentation and tutorials for more information:
http://scikit-bio.org/docs/0.5.5/alignment.html
http://scikit-bio.org/docs/0.5.5/tree.html
https://nbviewer.jupyter.org/github/biocore/scikit-bio-cookbook/blob/master/Progressive%20multiple%20sequence%20alignment.ipynb
https://nbviewer.jupyter.org/github/biocore/scikit-bio-cookbook/blob/master/Alignments%20and%20phylogenetic%20reconstruction.ipynb
End of explanation
"""
# Create a SQLAlchemy engine object for a SQLite3 Micromeda file.
engine_no_proteins = create_engine('sqlite:///ecoli_compare_no_proteins.micro')
# Write the results to the file.
results.to_assignment_database(engine_no_proteins)
# Create a SQLAlchemy engine object for a SQLite3 Micromeda file.
engine_proteins = create_engine('sqlite:///ecoli_compare.micro')
# Write the results to the file.
extended_results.to_assignment_database(engine_proteins)
"""
Explanation: Reading and writing Micromeda files
Micromeda files are a new SQLite3-based pathway annotation storage format that allows for the simultaneous transfer of multiple organisms' Genome Properties assignments and supporting information such as InterProScan annotations and protein sequences. These files allow for the transfer of complete Genome Properties datasets between researchers and software applications.
End of explanation
"""
# Load results from a Micromeda file.
assignment_caches = load_assignment_caches_from_database(engine_no_proteins)
results_reconstituted = GenomePropertiesResults(*assignment_caches, properties_tree=tree)
# Load results from a Micromeda file with proteins sequences.
assignment_caches_with_proteins = load_assignment_caches_from_database_with_matches(engine_proteins)
results_reconstituted_with_proteins = GenomePropertiesResultsWithMatches(*assignment_caches_with_proteins,
properties_tree=tree)
"""
Explanation: A note on SQLAlchemy
Because Pygenprop uses SQLAlchemy to write Micromeda files (SQlite3), it can also write assignment results and supporting information to a variety of relational databases.
For example:
``` python
create_engine('postgresql://scott:tiger@localhost/mydatabase')
```
See the following documentation for more information:
https://docs.sqlalchemy.org/en/13/core/engines.html
End of explanation
"""
|
wuafeing/Python3-Tutorial | 01 data structures and algorithms/01.03 keep last n items.ipynb | gpl-3.0 | from collections import deque
q = deque(maxlen = 3)
q.append(1)
q.append(2)
q.append(3)
q
q.append(4)
q
q.append(5)
q
"""
Explanation: Previous
1.3 Keeping the Last N Items
Problem
During iteration or some other kind of processing, how do you keep a history of only the last few items seen?
Solution
Keeping a limited history is a perfect use for collections.deque. For example, the following code performs a simple text match over a sequence of lines and yields each matching line together with the previous N lines of context:
``` python
from collections import deque

def search(lines, pattern, history=5):
    previous_lines = deque(maxlen=history)
    for li in lines:
        if pattern in li:
            yield li, previous_lines
        previous_lines.append(li)

# Example use on a file
if __name__ == "__main__":
    with open(r"../../cookbook/somefile.txt", "r") as f:
        for line, prevlines in search(f, "python", 5):
            for pline in prevlines:
                print(pline, end="")
            print(line, end="")
            print("-" * 20)
```
Discussion
When writing code that searches for items, it is common to use a generator function involving yield, as in the example above. This decouples the search process from the code that uses the results. If you are not familiar with generators, see Section 4.3.
Using the deque(maxlen=N) constructor creates a fixed-size queue. When the queue is full and a new item is added, the oldest item is automatically removed.
Code example:
End of explanation
"""
q = deque()
q.append(1)
q.append(2)
q.append(3)
q
q.appendleft(4)
q
q.pop()
q
q.popleft()
"""
Explanation: Although you could manually implement such operations on a list (adding, deleting, and so on), the queue solution here is more elegant and runs a lot faster.
More generally, the deque class can be used whenever you need a simple queue structure. If you don't give it a maximum size, you get an unbounded queue that supports adding and popping items at both ends.
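Beyond append and pop at either end, deques also support rotation and extendleft (which reverses the order of its argument) — a quick illustrative sketch:

```python
from collections import deque

d = deque([1, 2, 3, 4, 5])
d.rotate(2)             # rotate right: the last two items wrap around to the front
print(list(d))          # [4, 5, 1, 2, 3]
d.rotate(-2)            # rotate back to the original order
d.extendleft([0, -1])   # extendleft appends each item to the left in turn
print(list(d))          # [-1, 0, 1, 2, 3, 4, 5]
```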
Code example:
End of explanation
"""
|
ueapy/ueapy.github.io | content/notebooks/2019-02-14-matplotlib-subplots-callum.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Today we'll be using matplotlib's pyplot to make clearer, prettier figures.
First we import packages and generate some data to plot.
End of explanation
"""
plt.rcParams.update({"font.size": 20})
"""
Explanation: Pump up the font sizes on plots. Better to do it now than when you need 10 plots for a powerpoint!
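If you only need the larger fonts for a single figure, plt.rc_context applies settings temporarily and restores the previous values on exit — a small sketch (the Agg backend line just makes it runnable without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt

before = plt.rcParams["font.size"]
with plt.rc_context({"font.size": 20}):
    inside = plt.rcParams["font.size"]   # 20 inside the context
after = plt.rcParams["font.size"]        # restored once the block exits
print(before == after, inside)
```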
End of explanation
"""
SMALL_SIZE = 16
MED_SIZE = 20
LARGE_SIZE = 26
plt.rc('font', size=MED_SIZE)
plt.rc('xtick', labelsize=SMALL_SIZE)
plt.rc('ytick', labelsize=SMALL_SIZE)
plt.rc('axes', titlesize=MED_SIZE)
plt.rc('legend', fontsize=MED_SIZE)
"""
Explanation: More fine-grained control can be exerted with rc from matplotlib
End of explanation
"""
time = np.linspace(0, np.pi, 50)
no_of_lines = 9
x1 = np.empty((no_of_lines, len(time)))
for row in range(no_of_lines):
    x1[row, :] = np.sin(time + 2 * np.pi * np.random.rand()) + 0.2 * np.random.rand(len(time))
plt.plot(time, x1[0, :])
"""
Explanation: Make 9 sine curves with random offsets and noise.
End of explanation
"""
for row in range(no_of_lines):
    plt.plot(time, x1[row, :])
"""
Explanation: Now plot all of them
End of explanation
"""
# Using subplots to make a 3 X 3 grid of axes for plotting
fig, axs = plt.subplots(3, 3, figsize=(8, 8))
# ravel makes the axs object into an array of 9 objects we can access
# with indices, very handy for plotting within a loop
axs = axs.ravel()
for row in range(no_of_lines):
    axs[row].plot(time, x1[row, :])
"""
Explanation: What a mess! Better off with some subplots
End of explanation
"""
fig, axs = plt.subplots(3, 3, figsize=(8, 8), sharex="col", sharey="row")
axs = axs.ravel()
fig.subplots_adjust(hspace=0.05, wspace=0.05)
for row in range(no_of_lines):
    axs[row].plot(time, x1[row, :]);
"""
Explanation: We can improve this by sharing axis labels and reducing the space between plots
End of explanation
"""
# Making a grid of temperature data with meshgrid
delta1 = 0.025
x1 = np.sin(np.arange(np.pi / 4, 3 * np.pi / 4, delta1))
y1 = np.arange(10, 20, delta1)
X1, Y1 = np.meshgrid(x1, y1)
temp = X1 * Y1
distance = np.linspace(0, 90, len(x1))
depth = 20 - y1
fig, axs = plt.subplots(
2,
2,
figsize=(8, 8),
sharex="col",
sharey="row",
gridspec_kw={"height_ratios": [1, 4], "width_ratios": [4, 1]},
)
fig.subplots_adjust(hspace=0, wspace=0)
axs = axs.ravel()
# Plotting the surface temperature and mean temp depth profile alongside the contour plot of temp
axs[0].plot(distance, temp[-1, :], label="surface temp")
axs[2].contourf(distance, depth, temp)
axs[3].plot(np.nanmean(temp, 1), depth)
# Setting limits to keep things tight
axs[2].set(xlim=[distance[0], distance[-1]], ylim=[depth[0], depth[-1]])
axs[3].set(xlim=[np.min(temp), np.max(temp)]);
"""
Explanation: Subplots don't have to be identical. If we want, we can change the relative sizes with gridspec. This is especially useful if you are mixing line and contour plots into one superfigure.
End of explanation
"""
fig, axs = plt.subplots(2, 2, figsize=(8, 8), sharex="col", sharey="row",
gridspec_kw={"height_ratios": [1, 4], "width_ratios": [4, 1]})
fig.subplots_adjust(hspace=0, wspace=0)
axs = axs.ravel()
axs[0].plot(distance, temp[-1, :], label="surface temp")
p1 = axs[2].contourf(distance, depth, temp)
axs[3].plot(np.nanmean(temp, 1), depth)
# Add a colorbar to the right of the bottom right plot
fig.colorbar(ax=axs[3], mappable=p1, label=r"Temperature $\mathrm{^{\circ}C}$")
# Add a label to the first subplot
axs[0].legend()
# Remove the lines and ticks from the unused top right axis
axs[1].axis("off")
# Addding labels and limits
axs[2].set(
xlim=[distance[0], distance[-1]],
ylim=[depth[0], depth[-1]],
ylabel="Depth (m)",
xlabel="Distance (km)",
)
axs[3].set(xlim=[np.min(temp), np.max(temp)], xlabel="mean temp");
"""
Explanation: Some more tricks to make the plot look better
End of explanation
"""
fig,axs = plt.subplots(2,1,figsize=(8,8),sharex = 'col',
gridspec_kw = {'height_ratios':[1, 2]})
fig.subplots_adjust(hspace = 0, wspace=0)
axs = axs.ravel()
split_level = 0.1
# Plot the same temperature section on both axes, splitting the depth range at split_level
axs[0].contourf(distance,depth,temp)
axs[0].set(ylim=[split_level,depth[-1]])
axs[1].contourf(distance,depth,temp)
axs[1].set(ylim=[depth[0],split_level])
# Setting limits to keep things tight
axs[1].set(xlim=[distance[0],distance[-1]])
"""
Explanation: We can use this method to make split level plots, if for instance you want to capture near surface variability in greater detail
End of explanation
"""
HTML(html)
"""
Explanation: A few more examples can be found in this post: More on subplots in matplotlib.
End of explanation
"""
|
csc-training/python-introduction | notebooks/examples/5.2 Strings.ipynb | mit | ananasakäämä = "höhö 电脑"
print(ananasakäämä)
"""
Explanation: Strings
The most major difference between Python versions 2 and 3 is in string handling.
In Python 3 all strings are by default Unicode strings. The Python interpreter expects Python source files to be UTF-8 encoded Unicode strings.
What Unicode is beyond the scope of this course, but you can check
* the Python documentation
* the Wikipedia article on Unicode
* the Unicode consortium home pages
* your resident programmer (warning: you may be in for a long monologue)

If you don't know what Unicode or encodings are, do not despair. You will find out if you need to, and people have lived their entire lives happily without knowing what encodings are.
Suffice it to say that it is safe to use Unicode characters in strings and in variable names in Python 3.
End of explanation
"""
print("\N{GREEK CAPITAL LETTER DELTA}") # using the character name
print("\u0394") # using a 16-bit hex value
print("\U00000394") # using a 32-bit hex value
"""
Explanation: Extra material
If you want to represent a character that for some reason you can't enter in your system or if you want to keep your source code ASCII-only (not necessarily a bad idea) you can enter non-ASCII characters
End of explanation
"""
permissible = "la'l'a'a"
print(permissible)
permissible = 'la"l"a"a'
print(permissible)
permissible = "\"i am a quote \\ \""
print(permissible)
"""
Explanation: If you have a bytes-object you can call the decode() method on it and give an encoding as an argument. Conversely you can encode() a string.
Creating Strings
Both single '' and double "" quotes denote a string. They are equally valid and it is a question of preference which to use. It is recommended to be consistent within the same file, though.
It is permissible to use single quotes inside a double-quoted string or double quotes inside a single quoted string.
If you want to have the same kind of quotes inside a string, you must escape the quotes in the string with a backslash \. As it is the escape character, any backslashes must be entered as double \\ to create a single literal backslash in a string.
End of explanation
"""
permissible = """
i am a multi
line
string
"""
print(permissible)
permissible = ("i"
" am" #note the whitespace before the word inside the string
' a'
" multiline"
' string')
print(permissible)
"""
Explanation: There are several ways to create multiline strings
* multiline string notation using triple quotes
* having multiple consecutive string literals inside parentheses, they will be interpreted as one
End of explanation
"""
example = "The quick brown fox jumps over the lazy dog "
## the split function splits at whitespace by default
example.split()
"""
Explanation: String wrangling
First it is essential to remember that strings are immutable: whatever you do with a string, it will not change. Most methods on strings will return a new, modified string or some other object.
If you have any programming experience many of the following examples will seem familiar to you.
A complete list can, as always, be found in the documentation.
End of explanation
"""
example.split("e")[0]
"""
Explanation: It can be given any parameter. Te return value is a list so it can be indexed with [].
End of explanation
"""
example[5:10]
"""
Explanation: Strings can be indexed and sliced using the same notation as lists
End of explanation
"""
example.strip()
"""
Explanation: The strip() method removes leading and trailing characters from the string, defaulting to whitespace. This is needed surprisingly often.
End of explanation
"""
example.upper()
"""
Explanation: Strings can be coerced to lower() or upper() case.
End of explanation
"""
example.find("ick")
"""
Explanation: Seeking a substring is also implemented with the find() method. It returns an index.
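Note that find() returns -1 when the substring is absent, whereas index() raises ValueError — worth remembering when checking the result:

```python
text = "The quick brown fox"
print(text.find("quick"))  # 4
print(text.find("zebra"))  # -1: not found
try:
    text.index("zebra")
except ValueError:
    print("index() raises instead of returning -1")
```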
End of explanation
"""
"124".isdigit()
"""
Explanation: Sometimes it's important to know if a string is a digit or numeric.
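Strings actually offer three related checks — isdecimal(), isdigit() and isnumeric() — which accept progressively larger sets of characters; a quick comparison:

```python
for s in ["124", "²", "½", "12.4"]:
    print(s, s.isdecimal(), s.isdigit(), s.isnumeric())
# "124" passes all three; "²" is a digit but not decimal;
# "½" is only numeric; "12.4" fails all three because of the dot
```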
End of explanation
"""
|
sueiras/training | tensorflow/00-Intro_to_tensorflow.ipynb | gpl-3.0 | # Header
# Basic libraries & options
from __future__ import print_function
#Basic libraries
import numpy as np
import tensorflow as tf
print('Tensorflow version: ', tf.__version__)
import time
# Select GPU
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
#Show images
import matplotlib.pyplot as plt
%matplotlib inline
# plt configuration
plt.rcParams['figure.figsize'] = (10, 10) # size of images
plt.rcParams['image.interpolation'] = 'nearest' # show exact image
plt.rcParams['image.cmap'] = 'gray' # use grayscale
"""
Explanation: Intro to TensorFlow
Basic models over the MNIST dataset
Linear model
NN with one hidden layer
Convolutional model
TensorBoard example
Save & load models
End of explanation
"""
from tensorflow.contrib.keras import datasets, layers, models, optimizers
# Data
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data(path='mnist.npz')
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
# Base Keras Model
images = layers.Input(batch_shape=(None, 28, 28), dtype='float32', name='Images')
flat = layers.Flatten(name='Flat_image')(images)
dense = layers.Dense(500, activation='relu', name='Dense_layer')(flat)
output = layers.Dense(10, activation='softmax', name='Dense_output')(dense)
model_linear = models.Model(inputs=images, outputs=output)
# Train
sgd_optimizer = optimizers.SGD(lr=0.01)
model_linear.compile(loss='sparse_categorical_crossentropy',
optimizer=sgd_optimizer, metrics=['accuracy'])
history_linear = model_linear.fit(X_train, y_train, batch_size=128, epochs=50,
verbose=1, validation_data=(X_test, y_test))
"""
Explanation: Base Keras model
End of explanation
"""
# Start interactive session
sess = tf.InteractiveSession()
# Define input placeholders
x = tf.placeholder(tf.float32, shape=[None, 28, 28])
y = tf.placeholder(tf.int64, shape=[None, ])
# Define model using Keras layers
flat = layers.Flatten(name='Flat_image')(x)
dense = layers.Dense(500, activation='relu', name='Dense_layer')(flat)
output = layers.Dense(10, activation='linear', name='Dense_output')(dense)  # logits; the loss applies softmax internally
# Define the Tensorflow Loss
cross_entropy = tf.losses.sparse_softmax_cross_entropy(y, output)
# Define the Tensorflow Train function
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Create an Accuracy metric to evaluate in test
y_pred = tf.nn.softmax(output)
correct_prediction = tf.equal(y, tf.argmax(y_pred,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Define a batch generator over the data. The same code works for an HDF5 source
def batch_generator(X, y, batch_size=64):
data_size = X.shape[0]
# Randomize batches in each epoch
batch_randomized = np.random.permutation(range(0, data_size-batch_size, batch_size))
# Iterate over each batch
for batch in batch_randomized:
x_batch = X[batch : batch+batch_size]
y_batch = y[batch : batch+batch_size]
yield x_batch, y_batch
# Initialize vars
sess.run(tf.global_variables_initializer())
# Basic iterator over the epochs and the batches to execute the trainer.
batch_size = 128
num_epoch = 50
for epoch in range(num_epoch):
for x_batch, y_batch in batch_generator(X_train, y_train, batch_size=batch_size):
train_step.run(feed_dict={x: x_batch, y: y_batch})
print(epoch, accuracy.eval(feed_dict={x: X_test, y: y_test}))
# Initialize vars
sess.run(tf.global_variables_initializer())
# Early stopping code. Stop if the current accuracy is lower than the min of the recent epochs.
batch_size = 128
acc_test=[]
epoch=0
stop=True
while stop:
#Train
epoch += 1
for x_batch, y_batch in batch_generator(X_train, y_train, batch_size=batch_size):
train_step.run(feed_dict={x: x_batch, y: y_batch})
# Test
acc_test += [accuracy.eval(feed_dict={x: X_test, y: y_test})]
if epoch%10==0:
print('Epoch: ', epoch, 'Accuracy: ', acc_test[-1])
# Stopping criteria
if epoch>10 and acc_test[-1] < min(acc_test[-10:-1]):
stop=False
print('STOP. Accuracy: ', acc_test[-1])
# When finished, close the interactive session
sess.close()
# Reset the graph for the next experiments
tf.reset_default_graph()
"""
Explanation: TensorFlow: controlling the training process
- Use a batch generator
- Implement a basic training loop
- Create placeholders
- Create a loss function
- Create a trainer
- Define an accuracy metric
End of explanation
"""
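As a quick sanity check (an added sketch; the generator is re-declared here so the snippet is self-contained), the batch generator can be exercised on toy NumPy arrays to confirm the batch shapes:

```python
import numpy as np

def batch_generator(X, y, batch_size=64):
    # mirrors the generator above: shuffled, fixed-size batches per epoch
    data_size = X.shape[0]
    batch_randomized = np.random.permutation(range(0, data_size - batch_size, batch_size))
    for batch in batch_randomized:
        yield X[batch:batch + batch_size], y[batch:batch + batch_size]

X_toy = np.arange(40, dtype=np.float32).reshape(20, 2)
y_toy = np.arange(20)
for xb, yb in batch_generator(X_toy, y_toy, batch_size=4):
    assert xb.shape == (4, 2) and yb.shape == (4,)
```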
# Start interactive session
sess = tf.InteractiveSession()
# Convolutional model
x = tf.placeholder(tf.float32, shape=[None, 28, 28])
y = tf.placeholder(tf.int64, shape=[None, ])
image = tf.reshape(x,[-1,28,28,1])
conv1 = layers.Conv2D(20, (5,5))(image)
pool1 = layers.MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = layers.Conv2D(20, (5,5))(pool1)
pool2 = layers.MaxPooling2D(pool_size=(2, 2))(conv2)
flat = layers.Flatten(name='Flat_image')(pool2)
print('Flat layer: ', flat)
# Tensorflow Dense layer
W_dense = tf.Variable(tf.truncated_normal([320, 500], stddev=0.1), name='W_dense')
b_dense = tf.Variable(tf.constant(0.1, shape=[500]), name='b_dense')
dense1 = tf.nn.relu(tf.matmul(flat, W_dense) + b_dense)
#dense1 = layers.Dense(500, activation='relu', name='Dense_1')(flat)
output = layers.Dense(10, activation='linear', name='Dense_output')(dense1)
# Define the Tensorflow Loss
cross_entropy = tf.losses.sparse_softmax_cross_entropy(y, output)
# Define the Tensorflow Train function
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Create an Accuracy metric to evaluate in test
y_pred = tf.nn.softmax(output)
correct_prediction = tf.equal(y, tf.argmax(y_pred,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Initialize vars
sess.run(tf.global_variables_initializer())
# Early stopping code. Stop if the current accuracy is lower than the min of the recent epochs.
batch_size = 128
acc_test=[]
epoch=0
stop=True
while stop:
#Train
epoch += 1
for x_batch, y_batch in batch_generator(X_train, y_train, batch_size=batch_size):
train_step.run(feed_dict={x: x_batch, y: y_batch})
# Test
acc_test += [accuracy.eval(feed_dict={x: X_test, y: y_test})]
if epoch%1==0:
print('Epoch: ', epoch, 'Accuracy: ', acc_test[-1])
# Stopping criteria
if epoch>10 and acc_test[-1] < min(acc_test[-10:-1]):
stop=False
print('STOP. Accuracy: ', acc_test[-1])
# When finished, close the interactive session
sess.close()
# Reset the graph for the next experiments
tf.reset_default_graph()
"""
Explanation: TensorFlow layers and TensorBoard
- Combine Keras layers with TensorFlow layers
- Monitor layers in TensorBoard
End of explanation
"""
def variable_summaries(var, name):
"""Attach a lot of summaries to a Tensor."""
with tf.name_scope('summaries'):
mean = tf.reduce_mean(var)
tf.summary.scalar('mean/' + name, mean)
tf.summary.scalar('stddev/' + name, tf.sqrt(tf.reduce_mean(tf.square(var - mean))))
tf.summary.scalar('max/' + name, tf.reduce_max(var))
tf.summary.scalar('min/' + name, tf.reduce_min(var))
tf.summary.histogram(name, var)
# Start interactive session
sess = tf.InteractiveSession()
# Convolutional model
x = tf.placeholder(tf.float32, shape=[None, 28, 28])
y = tf.placeholder(tf.int64, shape=[None, ])
image = tf.reshape(x,[-1,28,28,1])
with tf.name_scope('Conv1') as scope:
conv1 = layers.Conv2D(20, (5,5))(image)
pool1 = layers.MaxPooling2D(pool_size=(2, 2))(conv1)
variable_summaries(pool1, "pool1_summary")
with tf.name_scope('Conv2') as scope:
conv2 = layers.Conv2D(20, (5,5))(pool1)
pool2 = layers.MaxPooling2D(pool_size=(2, 2))(conv2)
variable_summaries(pool2, "pool2_summary")
with tf.name_scope('Dense') as scope:
flat = layers.Flatten(name='Flat_image')(pool2)
# Tensorflow Dense layer
W_dense = tf.Variable(tf.truncated_normal([320, 500], stddev=0.1), name='W_dense')
variable_summaries(W_dense, "W_dense_summary") # Summaries of the layer weights
b_dense = tf.Variable(tf.constant(0.1, shape=[500]), name='b_dense')
variable_summaries(b_dense, "b_dense_summary") # Summaries of the layer biases
dense1 = tf.nn.relu(tf.matmul(flat, W_dense) + b_dense)
variable_summaries(dense1, "dense1_summary") # Summaries of the layer output
output = layers.Dense(10, activation='linear', name='Dense_output')(dense1)
# Define the Tensorflow Loss
#with tf.name_scope('loss') as scope:
cross_entropy = tf.losses.sparse_softmax_cross_entropy(y, output)
loss_summary = tf.summary.scalar("loss", cross_entropy)
# Define the Tensorflow Train function
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Create an Accuracy metric to evaluate in test
#with tf.name_scope('acc') as scope:
y_pred = tf.nn.softmax(output)
correct_prediction = tf.equal(y, tf.argmax(y_pred,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
accuracy_summary = tf.summary.scalar("acc", accuracy)
# Add summaries to the graph
summaries_dir = '/tmp/tensorboard/tf'
with tf.name_scope('summaries') as scope:
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(summaries_dir + '/train', sess.graph)
test_writer = tf.summary.FileWriter(summaries_dir + '/test')
# Initialize vars
sess.run(tf.global_variables_initializer())
# Early stopping code. Stop if the current accuracy is lower than the min of the recent epochs.
batch_size = 128
acc_test=[]
epoch=0
stop=True
while stop:
#Train
epoch += 1
counter = 0
for x_batch, y_batch in batch_generator(X_train, y_train, batch_size=batch_size):
train_step.run(feed_dict={x: x_batch, y: y_batch})
counter += 1
if counter%10 == 0:
summary_str = merged.eval(feed_dict={x: x_batch, y: y_batch}) #TENSORBOARD
train_writer.add_summary(summary_str, epoch) #TENSORBOARD
# Test
acc_test += [accuracy.eval(feed_dict={x: X_test, y: y_test})]
summary_str = merged.eval(feed_dict={x: X_test, y: y_test}) #TENSORBOARD
test_writer.add_summary(summary_str, epoch) #TENSORBOARD
if epoch%1==0:
print('Epoch: ', epoch, 'Accuracy: ', acc_test[-1])
# Stopping criteria
if epoch>10 and acc_test[-1] < min(acc_test[-10:-1]):
stop=False
print('STOP. Accuracy: ', acc_test[-1])
sess.close()
tf.reset_default_graph()
"""
Explanation: Use tensorboard to show the net & the training process.
- The same previous convolutional model with the commands that need tensorboard
Based on https://www.tensorflow.org/how_tos/summaries_and_tensorboard/index.html
End of explanation
"""
|
sujitpal/polydlot | src/pytorch/11-shape-generation.ipynb | apache-2.0 | from __future__ import division, print_function
from sklearn.metrics import accuracy_score, confusion_matrix
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib as mat
import matplotlib.pyplot as plt
import os
import shutil
%matplotlib inline
DATA_DIR = "../../data"
MODEL_FILE = os.path.join(DATA_DIR, "torch-11-shape-gen.model")
TRAIN_SIZE = 25000
# model expects 1 timestep input with (x, y) as features
# each batch is sequence of 3 input => 3 output coordinates
# we train for TRAIN_SIZE epochs
SEQ_LENGTH = 1
EMBED_SIZE = 2
BATCH_SIZE = 3
LEARNING_RATE = 1e-3
"""
Explanation: Shape Generation Problem
This is the sixth (and final) toy example from Jason Brownlee's Long Short Term Memory Networks with Python. We will train a network to generate rectangles. Per section 11.2 of the book:
We can frame the problem of generating random shapes as a sequence generation problem. We
can take drawing a rectangle as a sequence of points in the clockwise direction with 4 points in two-dimensional space. So to generate a rectangle given by the points [BL, BR, TR, TL], the input sequence is [BL, BR, TR] and the corresponding output sequence at these timesteps is [BR', TR', TL'].
Each point can be taken as one time step, with each of the x and y axes representing
separate features. Starting from 0,0, the task is to draw the remaining 3 points of the rectangle with consistent widths and heights. We will frame this problem as a one-coordinate generation problem, i.e. a one-to-one sequence prediction problem. Given a coordinate, predict the next coordinate. Then, given the coordinate predicted at the last time step, predict the next coordinate, and so on.
End of explanation
"""
def draw_rectangles(points_seq):
fig, ax = plt.subplots()
polygons = []
for points in points_seq:
polygons.append(mat.patches.Polygon(points, closed=True, fill=True))
patches = mat.collections.PatchCollection(polygons, cmap=mat.cm.jet, alpha=0.4)
colors = 100 * np.random.rand(len(polygons))
patches.set_array(np.array(colors))
ax.add_collection(patches)
plt.show()
def random_rectangle():
bl = (0, 0)
br = (np.random.random(1), bl[1])
tr = (br[0], np.random.random(1))
tl = (0, tr[1])
return [bl, br, tr, tl]
rectangles = []
for i in range(5):
rectangles.append(random_rectangle())
draw_rectangles(rectangles)
def generate_data(num_recs):
xseq, yseq = [], []
for i in range(num_recs):
rect = random_rectangle()
xseq.append(np.array(rect[0:-1]))
yseq.append(np.array(rect[1:]))
X = np.expand_dims(np.array(xseq, dtype=np.float32), 2)
Y = np.expand_dims(np.array(yseq, dtype=np.float32), 2)
return X, Y
Xtrain, Ytrain = generate_data(TRAIN_SIZE)
print(Xtrain.shape, Ytrain.shape)
"""
Explanation: Prepare Data
Coordinates corresponding to random rectangles are generated using rules for training the network. Data is generated in (num_epoch, num_batches, sequence_length, num_features) format. This is because we are training the network to generate one coordinate at a time, so each batch corresponds to a single rectangle (3 moves).
End of explanation
"""
class ShapeGenerator(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(ShapeGenerator, self).__init__()
self.hidden_dim = hidden_dim
# network layers
self.lstm = nn.LSTM(input_dim, hidden_dim, 1, batch_first=True)
self.fcn = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
if torch.cuda.is_available():
h = (Variable(torch.randn(1, x.size(0), self.hidden_dim).cuda()),
Variable(torch.randn(1, x.size(0), self.hidden_dim).cuda()))
else:
h = (Variable(torch.randn(1, x.size(0), self.hidden_dim)),
Variable(torch.randn(1, x.size(0), self.hidden_dim)))
x, h = self.lstm(x, h)
x = self.fcn(x)
return x
model = ShapeGenerator(EMBED_SIZE, 10, EMBED_SIZE)
if torch.cuda.is_available():
model.cuda()
print(model)
# size debugging
print("--- size debugging ---")
inp = Variable(torch.randn(BATCH_SIZE, SEQ_LENGTH, EMBED_SIZE))
outp = model(inp)
print(outp.size())
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
"""
Explanation: Define Network
End of explanation
"""
history = []
total_loss, prev_loss = 0., None
for epoch in range(TRAIN_SIZE):
Xbatch_data = Xtrain[epoch]
Ybatch_data = Ytrain[epoch]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).float())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
# initialize gradients
optimizer.zero_grad()
# forward
Ybatch_ = model(Xbatch)
loss = loss_fn(Ybatch_, Ybatch)
loss_data = loss.data[0]
total_loss += loss_data
# backward
loss.backward()
# update model parameters
optimizer.step()
if epoch > 0 and (epoch % (TRAIN_SIZE // 10) == 0 or epoch == TRAIN_SIZE - 1):
cum_avg_loss = total_loss / epoch
history.append((epoch, cum_avg_loss))
if prev_loss is not None and prev_loss > cum_avg_loss:
torch.save(model.state_dict(), MODEL_FILE)
prev_loss = cum_avg_loss
print("Iteration {:5d}, cum. avg. loss: {:.3f}".format(epoch, cum_avg_loss))
plt.title("Loss")
epochs = [x[0] for x in history]
losses = [x[1] for x in history]
plt.plot(epochs, losses, color="r", label="train")
plt.legend(loc="best")
plt.show()
"""
Explanation: Train Network
End of explanation
"""
saved_model = ShapeGenerator(EMBED_SIZE, 10, EMBED_SIZE)
saved_model.load_state_dict(torch.load(MODEL_FILE))
if torch.cuda.is_available():
saved_model.cuda()
coord_seqs = []
for rect in range(5):
Xgen = None
coord_seq = [(0., 0.)]
for seq in range(3):
if Xgen is None:
Xgen_data = np.zeros((1, 1, 2), dtype=np.float32)
Xgen = Variable(torch.from_numpy(Xgen_data).float())
else:
Xgen = Ygen
Ygen = saved_model(Xgen)
if torch.cuda.is_available():
Ygen_data = Ygen.cpu().data.numpy()
else:
Ygen_data = Ygen.data.numpy()
coord_seq.append((Ygen_data[0, 0, 0], Ygen_data[0, 0, 1]))
coord_seqs.append(coord_seq)
draw_rectangles(coord_seqs)
os.remove(MODEL_FILE)
"""
Explanation: Generate Shapes
The trained network is now able to generate approximate rectangles as shown below. We generate 5 "rectangles" starting from (0, 0) in each case and generate the other 3 corners using the model.
End of explanation
"""
|
rasbt/algorithms_in_ipython_notebooks | ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb | gpl-3.0 | def linear_search(lst, item):
for i in range(len(lst)):
if lst[i] == item:
return i
return -1
lst = [1, 5, 8, 12, 13]
for k in [8, 1, 23, 11]:
print(linear_search(lst=lst, item=k))
"""
Explanation: Introduction to Divide-and-Conquer Algorithms
The subfamily of Divide-and-Conquer algorithms is one of the main paradigms of algorithmic problem solving next to Dynamic Programming and Greedy Algorithms. The main goal behind greedy algorithms is to implement an efficient procedure for often computationally more complex, often infeasible brute-force methods such as exhaustive search algorithms by splitting a task into subtasks that can be solved indpendently and in parallel; later, the solutions are combined to yield the final result.
Example 1 -- Binary Search
Let's say we want to implement an algorithm that returns the index position of an item that we are looking for in an array. Here, we assume that the array is already sorted. The simplest (and computationally most expensive) approach would be to check each element in the array iteratively until we find the desired match, returning -1 otherwise:
End of explanation
"""
def binary_search(lst, item):
first = 0
last = len(lst) - 1
found = False
while first <= last and not found:
midpoint = (first + last) // 2
if lst[midpoint] == item:
found = True
else:
if item < lst[midpoint]:
last = midpoint - 1
else:
first = midpoint + 1
if found:
return midpoint
else:
return -1
for k in [8, 1, 23, 11]:
print(binary_search(lst=lst, item=k))
"""
Explanation: The runtime of linear search is obviously $O(n)$ since we are checking each element in the array -- remember that big-Oh is our upper bound. Now, a cleverer way of implementing a search algorithm would be binary search, which is a simple, yet nice example of a divide-and-conquer algorithm.
The idea behind divide-and-conquer algorithms is to break a problem down into non-overlapping subproblems, which we can then solve recursively. Once we have processed these recursive subproblems, we combine the solutions into the end result.
Using a divide-and-conquer approach, we can implement an $O(\log n)$ search algorithm called binary search.
The idea behind binary search is quite simple:
We take the midpoint of the array and compare the element there to the search key
If the search key is equal to the midpoint element, we are done, else
search key < midpoint element?
Yes: repeat search (back to step 1) with the subarray that ends at index position midpoint - 1
No: repeat search (back to step 1) with the subarray that starts at index position midpoint + 1
Assuming that we are looking for the search key k=5, the individual steps of binary search can be illustrated as follows:
And below follows our Python implementation of this idea:
End of explanation
"""
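The same halving idea can also be written recursively, which makes the divide-and-conquer structure explicit (an added sketch, equivalent to the iterative version above):

```python
def binary_search_rec(lst, item, first=0, last=None):
    # each recursive call halves the search interval
    if last is None:
        last = len(lst) - 1
    if first > last:
        return -1
    midpoint = (first + last) // 2
    if lst[midpoint] == item:
        return midpoint
    elif item < lst[midpoint]:
        return binary_search_rec(lst, item, first, midpoint - 1)
    else:
        return binary_search_rec(lst, item, midpoint + 1, last)

lst = [1, 5, 8, 12, 13]
print([binary_search_rec(lst, k) for k in [8, 1, 23, 11]])  # [2, 0, -1, -1]
```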
def majority_ele_lin(lst):
cnt = {}
for ele in lst:
if ele not in cnt:
cnt[ele] = 1
else:
cnt[ele] += 1
for ele, c in cnt.items():
if c > (len(lst) // 2):
return (ele, c, cnt)
return (-1, -1, cnt)
###################################################
lst0 = []
print(lst0, '->', majority_ele_lin(lst=lst0)[0])
lst1 = [1, 2, 3, 4, 4, 5]
print(lst1, '->', majority_ele_lin(lst=lst1)[0])
lst2 = [1, 2, 4, 4, 4, 5]
print(lst2, '->', majority_ele_lin(lst=lst2)[0])
lst3 = [4, 2, 4, 4, 4, 5]
print(lst3, '->', majority_ele_lin(lst=lst3)[0])
print(lst3[::-1], '->', majority_ele_lin(lst=lst3[::-1])[0])
lst4 = [2, 3, 9, 2, 2]
print(lst4, '->',majority_ele_lin(lst=lst4)[0])
print(lst4[::-1], '->', majority_ele_lin(lst=lst4[::-1])[0])
lst5 = [0, 0, 2, 2, 2]
print(lst5, '->',majority_ele_lin(lst=lst5)[0])
print(lst5[::-1], '->', majority_ele_lin(lst=lst5[::-1])[0])
"""
Explanation: Example 2 -- Finding the Majority Element
"Finding the Majority Element" is a problem where we want to find an element in an array positive integers with length n that occurs more than n/2 in that array. For example, if we have an array $a = [1, 2, 3, 3, 3]$, $3$ would be the majority element. In another array, b = [1, 2, 3, 3] there exists no majority element, since $2$ (where $2$ is the the count of element $3$) is not greater than $n / 2$.
Let's start with a simple implementation where we count how often each unique element occurs in the array. Then, we return the element that meets the criterion "$\text{occurences } > n / 2$", and if such an element does not exist, we return -1. Note that we return a tuple of three items: (element, number_occurences, count_dictionary), which we will use later ...
End of explanation
"""
def majority_ele_dac(lst):
n = len(lst)
left = lst[:n // 2]
right = lst[n // 2:]
l_maj = majority_ele_lin(left)
r_maj = majority_ele_lin(right)
# case 3A
if l_maj[0] == -1 and r_maj[0] == -1:
return -1
# case 3B
elif l_maj[0] == -1 and r_maj[0] > -1:
cnt = r_maj[1]
if r_maj[0] in l_maj[2]:
cnt += l_maj[2][r_maj[0]]
if cnt > n // 2:
return r_maj[0]
# case 3C
elif r_maj[0] == -1 and l_maj[0] > -1:
cnt = l_maj[1]
if l_maj[0] in r_maj[2]:
cnt += r_maj[2][l_maj[0]]
if cnt > n // 2:
return l_maj[0]
# case 3D
else:
c1, c2 = l_maj[1], r_maj[1]
if l_maj[0] in r_maj[2]:
c1 = l_maj[1] + r_maj[2][l_maj[0]]
if r_maj[0] in l_maj[2]:
c2 = r_maj[1] + l_maj[2][r_maj[0]]
m = max(c1, c2)
if m > n // 2:
return m
return -1
###################################################
lst0 = []
print(lst0, '->', majority_ele_dac(lst=lst0))
lst1 = [1, 2, 3, 4, 4, 5]
print(lst1, '->', majority_ele_dac(lst=lst1))
lst2 = [1, 2, 4, 4, 4, 5]
print(lst2, '->', majority_ele_dac(lst=lst2))
lst3 = [4, 2, 4, 4, 4, 5]
print(lst3, '->', majority_ele_dac(lst=lst3))
print(lst3[::-1], '->', majority_ele_dac(lst=lst3[::-1]))
lst4 = [2, 3, 9, 2, 2]
print(lst4, '->',majority_ele_dac(lst=lst4))
print(lst4[::-1], '->', majority_ele_dac(lst=lst4[::-1]))
lst5 = [0, 0, 2, 2, 2]
print(lst5, '->',majority_ele_dac(lst=lst5))
print(lst5[::-1], '->', majority_ele_dac(lst=lst5[::-1]))
"""
Explanation: Now, "finding the majority element" is a nice task for a Divide and Conquer algorithm. Here, we use the fact that if a list has a majority element it is also the majority element of one of its two sublists, if we split it into 2 halves.
More concretely, what we do is:
Split the array into 2 halves
Run the majority element search on each of the two halves
Combine the 2 subresults
Neither of the 2 sub-arrays has a majority element; thus, the combined list can't have a majority element so that we return -1
The right sub-array has a majority element, whereas the left sub-array hasn't. Now, we need to take the count of this "right" majority element, add the number of times it occurs in the left sub-array, and check if the combined count satisfies the "$\text{occurences} > \frac{n}{2}$" criterion.
Same as above but with "left" and "right" sub-array swapped in the comparison.
Both sub-arrays have an majority element. Compute the combined count of each of the elements as before and check whether one of these elements satisfies the "$\text{occurences} > \frac{n}{2}$" criterion.
End of explanation
"""
import multiprocessing as mp
def majority_ele_dac_mp(lst):
n = len(lst)
left = lst[:n // 2]
right = lst[n // 2:]
pool = mp.Pool(processes=2)  # create a worker pool for the two sub-searches
results = (pool.apply_async(majority_ele_lin, args=(x,))
for x in (left, right))
l_maj, r_maj = [p.get() for p in results]
pool.close()
if l_maj[0] == -1 and r_maj[0] == -1:
return -1
elif l_maj[0] == -1 and r_maj[0] > -1:
cnt = r_maj[1]
if r_maj[0] in l_maj[2]:
cnt += l_maj[2][r_maj[0]]
if cnt > n // 2:
return r_maj[0]
elif r_maj[0] == -1 and l_maj[0] > -1:
cnt = l_maj[1]
if l_maj[0] in r_maj[2]:
cnt += r_maj[2][l_maj[0]]
if cnt > n // 2:
return l_maj[0]
else:
c1, c2 = l_maj[1], r_maj[1]
if l_maj[0] in r_maj[2]:
c1 = l_maj[1] + r_maj[2][l_maj[0]]
if r_maj[0] in l_maj[2]:
c2 = r_maj[1] + l_maj[2][r_maj[0]]
m = max(c1, c2)
if m > n // 2:
return m
return -1
###################################################
lst0 = []
print(lst0, '->', majority_ele_dac(lst=lst0))
lst1 = [1, 2, 3, 4, 4, 5]
print(lst1, '->', majority_ele_dac(lst=lst1))
lst2 = [1, 2, 4, 4, 4, 5]
print(lst2, '->', majority_ele_dac(lst=lst2))
lst3 = [4, 2, 4, 4, 4, 5]
print(lst3, '->', majority_ele_dac(lst=lst3))
print(lst3[::-1], '->', majority_ele_dac(lst=lst3[::-1]))
lst4 = [2, 3, 9, 2, 2]
print(lst4, '->',majority_ele_dac(lst=lst4))
print(lst4[::-1], '->', majority_ele_dac(lst=lst4[::-1]))
lst5 = [0, 0, 2, 2, 2]
print(lst5, '->',majority_ele_dac(lst=lst5))
print(lst5[::-1], '->', majority_ele_dac(lst=lst5[::-1]))
"""
Explanation: In algorithms such as binary search that we saw at the beginning of this notebook, we recursively break down our problem into smaller subproblems. Thus, we have a recurrence problem with time complexity
$T(n) = T(\frac{n}{2}) + O(1) \rightarrow T(n) = O(\log n).$
In this example, finding the majority element, we break our problem down into only 2 subproblems. Thus, the complexity of our algorithm is
$T(n) = 2T(\frac{n}{2}) + O(n) \rightarrow T(n) = O(n \log n).$
Adding multiprocessing
Our Divide and Conquer approach above is actually a good candidate for multiprocessing, since we can parallelize the majority element search in the two sub-lists. So, let's make a simple modification and use Python's multiprocessing module for that. Here, we use the non-blocking apply_async method from the Pool class (in contrast to the blocking apply method), which lets both searches run in parallel; calling get() on each result in submission order keeps the assignment l_maj, r_maj = [p.get() for p in results] consistent with the left and right sublists. Even if the two were swapped, it would not matter for this implementation, since the combine step treats both halves symmetrically.
End of explanation
"""
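The apply_async/get pattern used above can be sketched in isolation; here a ThreadPool stand-in is used (it exposes the same apply_async API as mp.Pool) so the snippet stays portable, and square is a hypothetical helper:

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

# submit both jobs without blocking, then collect the results in submission order
with ThreadPool(processes=2) as pool:
    async_results = [pool.apply_async(square, args=(x,)) for x in (3, 4)]
    left, right = [r.get() for r in async_results]
print(left, right)  # 9 16
```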
|
dennisobrien/bokeh | examples/howto/Categorical Data.ipynb | bsd-3-clause | from bokeh.io import show, output_notebook
from bokeh.models import CategoricalColorMapper, ColumnDataSource, FactorRange
from bokeh.plotting import figure
output_notebook()
"""
Explanation: Handling Categorical Data with Bokeh
End of explanation
"""
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
counts = [5, 3, 4, 2, 4, 6]
p = figure(x_range=fruits, plot_height=250, toolbar_location=None, title="Fruit Counts")
p.vbar(x=fruits, top=counts, width=0.9)
p.xgrid.grid_line_color = None
p.y_range.start = 0
show(p)
"""
Explanation: Basic Bar Plot
To create a basic bar plot, typically all that is needed is to call vbar with x and top values, or hbar with y and right values. A custom width or height may also be supplied if something different from the default value of 1 is desired.
The example below plots vertical bars representing counts for different types of fruit on a categorical range:
x_range = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
End of explanation
"""
sorted_fruits = sorted(fruits, key=lambda x: counts[fruits.index(x)])
p = figure(x_range=sorted_fruits, plot_height=250, toolbar_location=None, title="Fruit Counts")
p.vbar(x=fruits, top=counts, width=0.9)
p.xgrid.grid_line_color = None
p.y_range.start = 0
show(p)
"""
Explanation: Sorting Bars
Bokeh displays the bars in the order the factors are given for the range. So, "sorting" bars in a bar plot is identical to sorting the factors for the range.
In the example below, the fruit factors are sorted in increasing order according to their corresponding counts, causing the bars to be sorted.
End of explanation
"""
from bokeh.palettes import Spectral6
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
counts = [5, 3, 4, 2, 4, 6]
source = ColumnDataSource(data=dict(fruits=fruits, counts=counts, color=Spectral6))
p = figure(x_range=fruits, plot_height=250, toolbar_location=None, title="Fruit Counts")
p.vbar(x='fruits', top='counts', width=0.9, color='color', legend="fruits", source=source)
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.y_range.end = 9
p.legend.orientation = "horizontal"
p.legend.location = "top_center"
show(p)
"""
Explanation: Bar Plot with Explicit Colors
To set the color of each bar, you can pass explicit color values to the color option (which is shorthand for setting both the fill_color and line_color).
In the example below add shading to the previous plot, but now all the data (including the explicit colors) is put inside a ColumnDataSource which is passed to vbar as the source argument.
End of explanation
"""
from bokeh.transform import factor_cmap
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
counts = [5, 3, 4, 2, 4, 6]
source = ColumnDataSource(data=dict(fruits=fruits, counts=counts))
p = figure(x_range=fruits, plot_height=250, toolbar_location=None, title="Fruit Counts")
p.vbar(x='fruits', top='counts', width=0.9, source=source, legend="fruits",
line_color='white', fill_color=factor_cmap('fruits', palette="Spectral6", factors=fruits))
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.y_range.end = 9
p.legend.orientation = "horizontal"
p.legend.location = "top_center"
show(p)
"""
Explanation: Bar Plot with Color Mapper
Another way to shade bars in different colors is to provide a color mapper. The factor_cmap transform can be applied to map a categorical value to a color. Other transforms include linear_cmap and log_cmap, which can be used to map continuous numerical values to colors.
The example below reproduces the previous example, using a factor_cmap to convert fruit types into colors.
End of explanation
"""
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
years = ['2015', '2016', '2017']
data = {'fruits' : fruits,
'2015' : [2, 1, 4, 3, 2, 4],
'2016' : [5, 3, 3, 2, 4, 6],
'2017' : [3, 2, 4, 4, 5, 3]}
# this creates [ ("Apples", "2015"), ("Apples", "2016"), ("Apples", "2017"), ("Pears", "2015), ... ]
x = [ (fruit, year) for fruit in fruits for year in years ]
counts = sum(zip(data['2015'], data['2016'], data['2017']), ()) # like an hstack
source = ColumnDataSource(data=dict(x=x, counts=counts))
p = figure(x_range=FactorRange(*x), plot_height=250,
toolbar_location=None, title="Fruit Counts by Year")
p.vbar(x='x', top='counts', width=0.9, source=source)
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.xaxis.major_label_orientation = 1
show(p)
"""
Explanation: Grouped Bars (Hierarchical Categories)
Often categorical data is arranged into hierarchies; for instance, we might have fruit counts per year. To represent this kind of hierarchy, our range becomes a list of tuples:
x_range = [ ("Apples", "2015"), ("Apples", "2016"), ("Apples", "2017"), ... ]
The coordinates for the bars should be these same tuple values. When we create a hierarchical range in this way, Bokeh will automatically create a visually grouped axis.
The plot below displays fruit counts per year.
End of explanation
"""
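The nested factor list can be built with a plain comprehension; as a small standalone sketch (no Bokeh required) with two fruits and two years:

```python
fruits = ['Apples', 'Pears']
years = ['2015', '2016']
# one (fruit, year) tuple per bar; Bokeh groups the axis by the first element
x = [(fruit, year) for fruit in fruits for year in years]
print(x)  # [('Apples', '2015'), ('Apples', '2016'), ('Pears', '2015'), ('Pears', '2016')]
```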
from bokeh.transform import factor_cmap
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
years = ['2015', '2016', '2017']
data = {'fruits' : fruits,
'2015' : [2, 1, 4, 3, 2, 4],
'2016' : [5, 3, 3, 2, 4, 6],
'2017' : [3, 2, 4, 4, 5, 3]}
# this creates [ ("Apples", "2015"), ("Apples", "2016"), ("Apples", "2017"), ("Pears", "2015), ... ]
x = [ (fruit, year) for fruit in fruits for year in years ]
counts = sum(zip(data['2015'], data['2016'], data['2017']), ()) # like an hstack
source = ColumnDataSource(data=dict(x=x, counts=counts))
p = figure(x_range=FactorRange(*x), plot_height=250, toolbar_location=None, title="Fruit Counts by Year")
p.vbar(x='x', top='counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=["#c9d9d3", "#718dbf", "#e84d60"], factors=years, start=1, end=2))
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.xaxis.major_label_orientation = 1
show(p)
"""
Explanation: Grouped Bars with Color Mapper
We can combine a color mapper with hierarchical ranges, and in fact we can choose to apply a color mapping based on only "part" of a categorical factor.
In the example below, the arguments start=1, end=2 are passed to factor_cmap. This means that for each factor value (which is a tuple), the value factor[1:2] is what should be used for colormapping. In this specific case, that translates to shading each bar according to the "year" portion.
End of explanation
"""
from bokeh.core.properties import value
from bokeh.transform import dodge, factor_cmap
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
years = ['2015', '2016', '2017']
data = {'fruits' : fruits,
'2015' : [2, 1, 4, 3, 2, 4],
'2016' : [5, 3, 3, 2, 4, 6],
'2017' : [3, 2, 4, 4, 5, 3]}
source = ColumnDataSource(data=data)
p = figure(x_range=fruits, plot_height=250, toolbar_location=None, title="Fruit Counts by Year")
p.vbar(x=dodge('fruits', -0.25, range=p.x_range), top='2015', width=0.2, source=source,
color="#c9d9d3", legend=value("2015"))
p.vbar(x=dodge('fruits', 0.0, range=p.x_range), top='2016', width=0.2, source=source,
color="#718dbf", legend=value("2016"))
p.vbar(x=dodge('fruits', 0.25, range=p.x_range), top='2017', width=0.2, source=source,
color="#e84d60", legend=value("2017"))
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.y_range.end = 10
p.legend.location = "top_left"
p.legend.orientation = "horizontal"
show(p)
"""
Explanation: Grouped Bars with Position Dodge
Sometimes we may wish to have "grouped" bars without a visually grouped axis. For instance, we may wish to indicate groups by colormapping or other means. This can be accomplished in Bokeh by providing "flat" (i.e. non-tuple) factors, and using the dodge transform to shift the bars by an arbitrary amount.
The example below also shows fruit counts per year, grouping the bars with dodge on the flat categorical range from the original example above.
End of explanation
"""
from bokeh.core.properties import value
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
years = ["2015", "2016", "2017"]
colors = ["#c9d9d3", "#718dbf", "#e84d60"]
data = {'fruits' : fruits,
'2015' : [2, 1, 4, 3, 2, 4],
'2016' : [5, 3, 4, 2, 4, 6],
'2017' : [3, 2, 4, 4, 5, 3]}
source = ColumnDataSource(data=data)
p = figure(x_range=fruits, plot_height=250,
toolbar_location=None, title="Fruit Counts by Year")
p.vbar_stack(years, x='fruits', width=0.9, color=colors, source=source, legend=[value(x) for x in years])
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.legend.location = "top_left"
p.legend.orientation = "horizontal"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
show(p)
"""
Explanation: Vertically Stacked Bars
We may also wish to stack bars, instead of grouping them. Bokeh provides vbar_stack and hbar_stack to help with this. To use these functions we pass a list of "stackers" which is a sequence of column names for columns in our data source. Each column represents one "layer" across all of our stacked bars, and each column is added to the previous columns to position the next layer.
The example below shows our fruit counts per year, this time stacked by year instead of grouped.
End of explanation
"""
from bokeh.models import ColumnDataSource
from bokeh.palettes import GnBu3, OrRd3
from bokeh.plotting import figure
fruits = ['Apples', 'Pears', 'Nectarines', 'Plums', 'Grapes', 'Strawberries']
years = ["2015", "2016", "2017"]
exports = {'fruits' : fruits,
'2015' : [2, 1, 4, 3, 2, 4],
'2016' : [5, 3, 4, 2, 4, 6],
'2017' : [3, 2, 4, 4, 5, 3]}
imports = {'fruits' : fruits,
'2015' : [-1, 0, -1, -3, -2, -1],
'2016' : [-2, -1, -3, -1, -2, -2],
'2017' : [-1, -2, -1, 0, -2, -2]}
p = figure(y_range=fruits, plot_height=250, x_range=(-16, 16), title="Fruit import/export, by year",
toolbar_location=None)
p.hbar_stack(years, y='fruits', height=0.9, color=GnBu3, source=ColumnDataSource(exports),
legend=["%s exports" % x for x in years])
p.hbar_stack(years, y='fruits', height=0.9, color=OrRd3, source=ColumnDataSource(imports),
legend=["%s imports" % x for x in years])
p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "top_left"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
show(p)
"""
Explanation: Horizontally Stacked Bars
The example below uses hbar_stack to display exports for each fruit, stacked by year. It also demonstrates that negative stack values are acceptable.
End of explanation
"""
factors = [
("Q1", "jan"), ("Q1", "feb"), ("Q1", "mar"),
("Q2", "apr"), ("Q2", "may"), ("Q2", "jun"),
("Q3", "jul"), ("Q3", "aug"), ("Q3", "sep"),
("Q4", "oct"), ("Q4", "nov"), ("Q4", "dec"),
]
p = figure(x_range=FactorRange(*factors), plot_height=250,
toolbar_location=None, tools="")
x = [ 10, 12, 16, 9, 10, 8, 12, 13, 14, 14, 12, 16 ]
p.vbar(x=factors, top=x, width=0.9, alpha=0.5)
p.line(x=["Q1", "Q2", "Q3", "Q4"], y=[12, 9, 13, 14], color="red", line_width=2)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
show(p)
"""
Explanation: Grouped Bars with Line (Mixed Category Levels)
Whenever we use hierarchical categories, it is possible to use coordinates that refer to only the first portions of a factor. In this case, coordinates are centered inside the group appropriately.
The example below uses bars to show sales values for every month, grouped by quarter. Each bar has coordinates such as ("Q1", "jan"), etc. Additionally a line displays the quarterly average trends, by using coordinates such as "Q1".
End of explanation
"""
p = figure(x_range=FactorRange(*factors), plot_height=250,
toolbar_location=None, tools="")
regions = ['east', 'west']
source = ColumnDataSource(data=dict(
x=factors,
east=[ 5, 5, 6, 5, 5, 4, 5, 6, 7, 8, 6, 9 ],
west=[ 5, 7, 9, 4, 5, 4, 7, 7, 7, 6, 6, 7 ],
))
p.vbar_stack(regions, x='x', width=0.9, alpha=0.5, color=["blue", "red"], source=source,
legend=[value(x) for x in regions])
p.y_range.start = 0
p.y_range.end = 18
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.legend.location = "top_center"
p.legend.orientation = "horizontal"
show(p)
"""
Explanation: Stacked and Grouped Bars
The above techniques for stacking and grouping may also be used together to create a stacked, grouped bar plot.
Continuing the example above, we might stack each individual bar by region.
End of explanation
"""
from bokeh.sampledata.sprint import sprint
sprint.Year = sprint.Year.astype(str)
group = sprint.groupby('Year')
source = ColumnDataSource(group)
p = figure(y_range=group, x_range=(9.5,12.7), plot_width=400, plot_height=550, toolbar_location=None,
title="Time Spreads for Sprint Medalists (by Year)")
p.ygrid.grid_line_color = None
p.xaxis.axis_label = "Time (seconds)"
p.outline_line_color = None
p.hbar(y="Year", left='Time_min', right='Time_max', height=0.4, source=source)
show(p)
"""
Explanation: Interval Plot
So far we have used bar glyphs to create bar charts starting from a common baseline, but bars are also useful for displaying arbitrary intervals.
The example below shows the low/high time spread for sprint medalists in each year of the olympics.
End of explanation
"""
from bokeh.sampledata.autompg import autompg_clean as df
df.cyl = df.cyl.astype(str)
df.yr = df.yr.astype(str)
from bokeh.palettes import Spectral5
from bokeh.transform import factor_cmap
group = df.groupby(('cyl'))
source = ColumnDataSource(group)
cyl_cmap = factor_cmap('cyl', palette=Spectral5, factors=sorted(df.cyl.unique()))
p = figure(plot_height=350, x_range=group, toolbar_location=None)
p.vbar(x='cyl', top='mpg_mean', width=1, line_color="white",
fill_color=cyl_cmap, source=source)
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.xaxis.axis_label = "# Cylinders"
p.xaxis.major_label_orientation = 1.2
p.outline_line_color = None
show(p)
"""
Explanation: Pandas to Simple Bars
Although Pandas is not required to use Bokeh, using Pandas can make many things simpler. For instance, Pandas GroupBy objects can be passed as the source argument to a glyph (or used to initialize a ColumnDataSource). When this is done, summary statistics for each group are automatically available in the data source.
In the example below we pass autompg.groupby(('cyl')) as our source. Since the "autompg" DataFrame has an mpg column, our grouped data source automatically has an mpg_mean column we can use to drive glyphs.
End of explanation
"""
from bokeh.models import HoverTool
from bokeh.palettes import Spectral5
from bokeh.transform import factor_cmap
group = df.groupby(by=['cyl', 'mfr'])
source = ColumnDataSource(group)
index_cmap = factor_cmap('cyl_mfr', palette=Spectral5, factors=sorted(df.cyl.unique()), end=1)
p = figure(plot_width=900, plot_height=400, x_range=group, toolbar_location=None,
title="Mean MPG by # Cylinders and Manufacturer")
p.vbar(x='cyl_mfr', top='mpg_mean', width=1, line_color="white",
fill_color=index_cmap, source=source)
p.x_range.range_padding = 0.05
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.xaxis.axis_label = "Manufacturer grouped by # Cylinders"
p.xaxis.major_label_orientation = 1.2
p.outline_line_color = None
p.add_tools(HoverTool(tooltips=[("MPG", "@mpg_mean"), ("Cyl, Mfr", "@cyl_mfr")]))
show(p)
"""
Explanation: Pandas to Grouped Bars
We can also pass Pandas GroupBy objects as plot ranges. When this happens, Bokeh automatically creates a hierarchical nested axis.
The example below creates a doubly nested range.
End of explanation
"""
import pandas as pd
from bokeh.transform import jitter
from bokeh.sampledata.commits import data
DAYS = ['Sun', 'Sat', 'Fri', 'Thu', 'Wed', 'Tue', 'Mon']
source = ColumnDataSource(data)
p = figure(plot_width=800, plot_height=300, y_range=DAYS, x_axis_type='datetime',
title="Commits by Time of Day (US/Central) 2012—2016")
p.circle(x='time', y=jitter('day', width=0.6, range=p.y_range), source=source, alpha=0.3)
p.xaxis[0].formatter.days = ['%Hh']
p.x_range.range_padding = 0
p.ygrid.grid_line_color = None
show(p)
"""
Explanation: Categorical Scatter with Jitter
So far we have mostly plotted bars on categorical ranges, but other glyphs work as well. For instance we could plot a scatter plot of circles against a categorical range. Often times, such plots are improved by jittering the data along the categorical range. Bokeh provides a jitter transform that can accomplish that.
The example below shows an individual GitHub commit history grouped by day of the week, and jittered to improve readability.
End of explanation
"""
group = data.groupby('day')
source = ColumnDataSource(group)
p = figure(plot_width=800, plot_height=300, y_range=DAYS, x_range=(0, 1010),
title="Commits by Day of the Week, 2012—2016", toolbar_location=None)
p.hbar(y='day', right='time_count', height=0.9, source=source)
p.ygrid.grid_line_color = None
p.outline_line_color = None
show(p)
"""
Explanation: Alternatively we might show the same data using bars, only giving a count per day.
End of explanation
"""
import pandas as pd
from bokeh.io import show
from bokeh.models import BasicTicker, ColorBar, ColumnDataSource, LinearColorMapper, PrintfTickFormatter
from bokeh.plotting import figure
from bokeh.sampledata.unemployment1948 import data
from bokeh.transform import transform
data.Year = data.Year.astype(str)
data = data.set_index('Year')
data.drop('Annual', axis=1, inplace=True)
data.columns.name = 'Month'
# reshape to 1D array of rates with a month and year for each row.
df = pd.DataFrame(data.stack(), columns=['rate']).reset_index()
source = ColumnDataSource(df)
# this is the colormap from the original NYTimes plot
colors = ["#75968f", "#a5bab7", "#c9d9d3", "#e2e2e2", "#dfccce", "#ddb7b1", "#cc7878", "#933b41", "#550b1d"]
mapper = LinearColorMapper(palette=colors, low=df.rate.min(), high=df.rate.max())
p = figure(title="US Unemployment 1948—2016", toolbar_location=None, tools="",
x_range=list(data.index), y_range=list(reversed(data.columns)),
x_axis_location="above", plot_width=900, plot_height=400)
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_text_font_size = "5pt"
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = 1.0
p.rect(x="Year", y="Month", width=1, height=1, source=source,
line_color=None, fill_color=transform('rate', mapper))
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=len(colors)),
formatter=PrintfTickFormatter(format="%d%%"))
p.add_layout(color_bar, 'right')
show(p)
"""
Explanation: Categorical Heatmaps
Another kind of common categorical plot is the Categorical Heatmap, which has categorical ranges on both axes. Typically colormapped or shaded rectangles are displayed for each (x, y) categorical combination.
The example below demonstrates a categorical heatmap using unemployment data.
End of explanation
"""
from bokeh.io import output_file, show
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure
from bokeh.sampledata.periodic_table import elements
from bokeh.transform import dodge, factor_cmap
periods = ["I", "II", "III", "IV", "V", "VI", "VII"]
groups = [str(x) for x in range(1, 19)]
df = elements.copy()
df["atomic mass"] = df["atomic mass"].astype(str)
df["group"] = df["group"].astype(str)
df["period"] = [periods[x-1] for x in df.period]
df = df[df.group != "-"]
df = df[df.symbol != "Lr"]
df = df[df.symbol != "Lu"]
cmap = {
"alkali metal" : "#a6cee3",
"alkaline earth metal" : "#1f78b4",
"metal" : "#d93b43",
"halogen" : "#999d9a",
"metalloid" : "#e08d79",
"noble gas" : "#eaeaea",
"nonmetal" : "#f1d4Af",
"transition metal" : "#599d7A",
}
source = ColumnDataSource(df)
p = figure(title="Periodic Table (omitting LA and AC Series)", plot_width=900, plot_height=500,
tools="", toolbar_location=None,
x_range=groups, y_range=list(reversed(periods)))
p.rect("group", "period", 0.95, 0.95, source=source, fill_alpha=0.6, legend="metal",
color=factor_cmap('metal', palette=list(cmap.values()), factors=list(cmap.keys())))
text_props = {"source": source, "text_align": "left", "text_baseline": "middle"}
x = dodge("group", -0.4, range=p.x_range)
r = p.text(x=x, y="period", text="symbol", **text_props)
r.glyph.text_font_style="bold"
r = p.text(x=x, y=dodge("period", 0.3, range=p.y_range), text="atomic number", **text_props)
r.glyph.text_font_size="8pt"
r = p.text(x=x, y=dodge("period", -0.35, range=p.y_range), text="name", **text_props)
r.glyph.text_font_size="5pt"
r = p.text(x=x, y=dodge("period", -0.2, range=p.y_range), text="atomic mass", **text_props)
r.glyph.text_font_size="5pt"
p.text(x=["3", "3"], y=["VI", "VII"], text=["LA", "AC"], text_align="center", text_baseline="middle")
p.add_tools(HoverTool(tooltips = [
("Name", "@name"),
("Atomic number", "@{atomic number}"),
("Atomic mass", "@{atomic mass}"),
("Type", "@metal"),
("CPK color", "$color[hex, swatch]:CPK"),
("Electronic configuration", "@{electronic configuration}"),
]))
p.outline_line_color = None
p.grid.grid_line_color = None
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_standoff = 0
p.legend.orientation = "horizontal"
p.legend.location ="top_center"
show(p)
"""
Explanation: In addition to heatmaps that use colormapping to shade each rectangle, a similar technique can be used to create various kinds of illustrations, for instance the example below uses Bokeh to make an interactive periodic table.
End of explanation
"""
import colorcet as cc
from numpy import linspace
from scipy.stats.kde import gaussian_kde
from bokeh.sampledata.perceptions import probly
from bokeh.models import FixedTicker, PrintfTickFormatter
probly.head()
def ridge(category, data, scale=20):
''' For a given category and timeseries for that category, return categorical
    coordinates with offsets scaled by the timeseries.
'''
return list(zip([category]*len(data), scale*data))
cats = list(reversed(probly.keys()))
palette = [cc.rainbow[i*15] for i in range(17)]
x = linspace(-20,110, 500)
source = ColumnDataSource(data=dict(x=x))
p = figure(y_range=cats, plot_width=900, x_range=(-5, 105), toolbar_location=None)
for i, cat in enumerate(reversed(cats)):
pdf = gaussian_kde(probly[cat])
y = ridge(cat, pdf(x))
source.add(y, cat)
p.patch('x', cat, color=palette[i], alpha=0.6, line_color="black", source=source)
p.outline_line_color = None
p.background_fill_color = "#efefef"
p.xaxis.ticker = FixedTicker(ticks=list(range(0, 101, 10)))
p.xaxis.formatter = PrintfTickFormatter(format="%d%%")
p.ygrid.grid_line_color = None
p.xgrid.grid_line_color = "#dddddd"
p.xgrid.ticker = p.xaxis[0].ticker
p.axis.minor_tick_line_color = None
p.axis.major_tick_line_color = None
p.axis.axis_line_color = None
p.y_range.range_padding = 0.12
show(p)
"""
Explanation: Ridge Plot (Categorical Offsets)
We have seen above how the dodge transform can be used to shift an entire column of categorical values. But it is possible to offset individual coordinates by putting a numeric offset at the end of a tuple with a factor. For instance, if we have categories "foo" and "bar" then
("foo", 0.1), ("foo", 0.2), ("bar", -0.3)
are all examples of individual coordinates shifted on a per-coordinate basis.
This technique can be used to create "Ridge Plots" which show lines (or filled areas) for different categories.
End of explanation
"""
|
kwant-project/kwant-tutorial-2016 | 3.3.MagnetoResistance.ipynb | bsd-2-clause | from types import SimpleNamespace
from math import cos, sin, pi
%run matplotlib_setup.ipy
from matplotlib import pyplot
import numpy as np
import scipy.stats as reg
import kwant
lat = kwant.lattice.square()
s_0 = np.identity(2)
s_z = np.array([[1, 0], [0, -1]])
s_x = np.array([[0, 1], [1, 0]])
s_y = np.array([[0, -1j], [1j, 0]])
def onsite(site, p):
x = site.pos[0]
if x > W and x < 2*W:
return 4*s_0 + p.Exc*s_z
if x > 3*W and x < 4*W:
return 4*s_0 + p.Exc*cos(p.angle)*s_z + p.Exc*sin(p.angle)*s_x
return 4*s_0
W = 10
H = kwant.Builder()
H[(lat(x,y) for x in range(5*W) for y in range(W))] = onsite
H[lat.neighbors()] = s_0
sym = kwant.TranslationalSymmetry(lat.vec((1,0)))
Hlead =kwant.Builder(sym)
Hlead[(lat(0,y) for y in range(W))] = 4*s_0
Hlead[lat.neighbors()] = s_0
H.attach_lead(Hlead)
H.attach_lead(Hlead.reversed())
kwant.plot(H);
"""
Explanation: Giant Magneto Resistance
<img src='images/GMR-cartoon.png'/>
In this example, we will learn how to play with the spin degree of freedom in a model. We will implement a crude model for a Ferromagnetic-Normal spacer-Ferromagnetic (FNF) spin valve and compute the conductance as a function of the angle $\theta$ between the two magnetizations. The ferromagnets are modeled by simply adding an s-d exchange term of the form
$$-J \, m \cdot \sigma$$
in the Hamiltonian, where $m$ is the direction of the magnetization, $\sigma$ a vector of Pauli matrices and $J$ the exchange constant.
End of explanation
"""
ps = SimpleNamespace(Exc=2., E=1.2, angle=pi)
def V(site):
Hd = onsite(site,ps)
return (Hd[0,0] - Hd[1,1]).real
kwant.plotter.map(H, V);
"""
Explanation: In order to visualize the potential, it can be useful to have color maps of it.
End of explanation
"""
Hf = H.finalized()
data = []
angles = np.linspace(0,2*pi,100)
params = SimpleNamespace(Exc=0.2, E=2.3)
for params.angle in angles:
smatrix = kwant.smatrix(Hf, params.E, args=[params])
data.append(smatrix.transmission(1, 0))
pyplot.plot(angles, data);
pyplot.xlabel('angle')
pyplot.ylabel('Conductance in unit of $(e^2/h)$');
"""
Explanation: Now let us compute the angular magneto-resistance.
Try playing with the parameters. What do you observe? Do you understand why?
Is there anything wrong with our model?
End of explanation
"""
def HedgeHog(site,ps):
x,y = site.pos
r = ( x**2 + y**2 )**0.5
theta = (np.pi/2)*(np.tanh((ps.r0 - r)/ps.delta) + 1)
if r != 0:
Ex = (x/r)*np.sin(theta)*s_x + (y/r)*np.sin(theta)*s_y + np.cos(theta)*s_z
else:
Ex = s_z
return 4*s_0 + ps.Ex * Ex
def Lead_Pot(site,ps):
return 4*s_0 + ps.Ex * s_z
def MakeSystem(ps, show = False):
H = kwant.Builder()
def shape_2DEG(pos):
x,y = pos
return ( (abs(x) < ps.L) and (abs(y) < ps.W) ) or (
(abs(x) < ps.W) and (abs(y) < ps.L))
H[lat.shape(shape_2DEG,(0,0))] = HedgeHog
H[lat.neighbors()] = -s_0
# ITS LEADS
sym_x = kwant.TranslationalSymmetry((-1,0))
H_lead_x = kwant.Builder(sym_x)
shape_x = lambda pos: abs(pos[1])<ps.W and pos[0]==0
H_lead_x[lat.shape(shape_x,(0,0))] = Lead_Pot
H_lead_x[lat.neighbors()] = -s_0
sym_y = kwant.TranslationalSymmetry((0,-1))
H_lead_y = kwant.Builder(sym_y)
shape_y = lambda pos: abs(pos[0])<ps.W and pos[1]==0
H_lead_y[lat.shape(shape_y,(0,0))] = Lead_Pot
H_lead_y[lat.neighbors()] = -s_0
H.attach_lead(H_lead_x)
H.attach_lead(H_lead_y)
H.attach_lead(H_lead_y.reversed())
H.attach_lead(H_lead_x.reversed())
if show:
kwant.plot(H)
return H
def Transport(Hf,EE,ps):
smatrix = kwant.smatrix(Hf, energy=EE, args=[ps])
G=np.zeros((4,4))
for i in range(4):
a=0
for j in range(4):
G[i,j] = smatrix.transmission(i, j)
if i != j:
a += G[i,j]
G[i,i] = -a
V = np.linalg.solve(G[:3,:3], [1.,0,0])
Hall = V[2] - V[1]
return G, Hall
ps = SimpleNamespace(L=45, W=40, delta=10, r0=20, Ex=1.)
H = MakeSystem(ps, show=True)
Hf = H.finalized()
def Vz(site):
Hd = HedgeHog(site,ps)
return (Hd[0,0] - Hd[1,1]).real
def Vy(site):
Hd = HedgeHog(site, ps)
return Hd[0,1].imag
kwant.plotter.map(H, Vz);
kwant.plotter.map(H, Vy);
# HALL RESISTANCE
ps = SimpleNamespace(L=20, W=15, delta=3, r0=6, Ex=1.)
H = MakeSystem(ps, show=False)
Es = np.linspace(0.1,3.,50)
Hf = H.finalized()
dataG , dataHall = [],[]
for EE in Es:
ps.delta = EE
energy = 2.
G,Hall = Transport(Hf, energy, ps)
dataHall.append(Hall)
pyplot.plot(Es, dataHall, 'o-', label="Skyrmion")
pyplot.xlabel('Domain width $\delta$')
pyplot.ylabel('Hall Resistance')
pyplot.title('Topological Hall Resistance?')
pyplot.legend();
"""
Explanation: Magnetic texture : the example of a skyrmion
Last, we can start playing with the magnetic texture, for instance a skyrmion as in the example below.
$$H = - t \sum_{<ij>} |i><j| + \sum_i V_i \ \ |i><i|$$
$$V_i = J \ m ( r) \cdot \sigma $$
$$ m ( r) = \left(x/r \sin\theta \ , \ y/r \sin\theta \ ,\ \cos\theta \right) $$
$$\theta (r) = \frac{\pi}{2}\left[\tanh\left(\frac{r_0 - r}{\delta}\right) + 1\right]$$
Another difference is that we will have 4 terminals and calculate the Hall resistance instead of the 2 terminal conductance.
This amounts to imposing the current and measuring the voltage, i.e. solving a small linear problem which is readily done with numpy.
Can you calculate the longitudinal resistance?
End of explanation
"""
|
chainsawriot/pycon2016hk_sklearn | Super_textbook.ipynb | mit | from sklearn.datasets import load_breast_cancer
ourdata = load_breast_cancer()
print(ourdata.DESCR)
print(ourdata.data.shape)
ourdata.data
ourdata.target
ourdata.target.shape
ourdata.target_names
"""
Explanation: Textbook examples
A fairy-tale kind of situation:
The data has been processed, ready for analysis
The learning objective is clearly defined
Something for which kNN works
Just something that feels good to work with
Classification example
Breast Cancer Wisconsin (Diagnostic) Data Set
https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(ourdata.data, ourdata.target, test_size=0.3)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# scipy.stats.itemfreq has been removed from recent SciPy releases;
# np.unique gives the same class counts.
import numpy as np
np.unique(y_train, return_counts=True)
"""
Explanation: Data preparation
Split the data into training set and test set
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
hx_knn = KNeighborsClassifier()
hx_knn.fit(X_train, y_train)
hx_knn.predict(X_train)
"""
Explanation: Let's try an unrealistic algorithm: kNN
End of explanation
"""
from sklearn.metrics import confusion_matrix, f1_score
print(confusion_matrix(y_train, hx_knn.predict(X_train)))
print(f1_score(y_train, hx_knn.predict(X_train)))
"""
Explanation: Training set performance evaluation
Confusion matrix
https://en.wikipedia.org/wiki/Confusion_matrix
End of explanation
"""
print(confusion_matrix(y_test, hx_knn.predict(X_test)))
print(f1_score(y_test, hx_knn.predict(X_test)))
"""
Explanation: Moment of truth: test set performance evaluation
End of explanation
"""
from sklearn.linear_model import LogisticRegression
hx_log = LogisticRegression()
hx_log.fit(X_train, y_train)
hx_log.predict(X_train)
# Training set evaluation
print(confusion_matrix(y_train, hx_log.predict(X_train)))
print(f1_score(y_train, hx_log.predict(X_train)))
# test set evaluation
print(confusion_matrix(y_test, hx_log.predict(X_test)))
print(f1_score(y_test, hx_log.predict(X_test)))
"""
Explanation: Classical analysis: Logistic Regression Classifier (Mint)
End of explanation
"""
from sklearn.datasets import load_boston
bostondata = load_boston()
print(bostondata.target)
print(bostondata.data.shape)
### Learn more about the dataset
print(bostondata.DESCR)
# how the first row of data looks like
bostondata.data[1,]
BX_train, BX_test, By_train, By_test = train_test_split(bostondata.data, bostondata.target, test_size=0.3)
"""
Explanation: Your turn: Task 1
Suppose there is a learning algorithm called Naive Bayes and the API is located in
http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB
Create a hx_nb, fit the data, and evaluate the training set and test set performance.
Regression example
Boston house price dataset
https://archive.ics.uci.edu/ml/datasets/Housing
End of explanation
"""
from sklearn.linear_model import LinearRegression
hx_lin = LinearRegression()
hx_lin.fit(BX_train, By_train)
hx_lin.predict(BX_train)
"""
Explanation: Classical algo: Linear Regression
End of explanation
"""
import matplotlib.pyplot as plt
plt.scatter(By_train, hx_lin.predict(BX_train))
plt.ylabel("predicted value")
plt.xlabel("actual value")
plt.show()
### performance evaluation: training set
from sklearn.metrics import mean_squared_error
mean_squared_error(By_train, hx_lin.predict(BX_train))
### performance evaluation: test set
mean_squared_error(By_test, hx_lin.predict(BX_test))
"""
Explanation: Plot a scatter plot of predicted and actual value
End of explanation
"""
|
sbussmann/buda-rank | notebooks/Insight Bayesian Workshop/Insight Bayesian Workshop - Artificial Data.ipynb | mit | import pandas as pd
import os
import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
true_rating = {
'All Stars': 2.0,
'Average': 0.0,
'Just Having Fun': -1.2,
}
true_index = {
0: 'All Stars',
1: 'Average',
2: 'Just Having Fun',
}
n_teams = len(true_rating)
team_numbers = range(n_teams)
team_names = [true_index[i] for i in team_numbers]
true_rating
team_names
"""
Explanation: Summary
We will use PyMC3 to estimate the posterior PDF for the true rating of a set of artificial teams using data from a simulated season. The idea is to test our model on a small set of artificial data where we know the answer to begin with, so we can learn about MCMC and make sure our model is sensible.
End of explanation
"""
season_length = [5, 20, 100]
traces = []
simulatedSeasons = []
for n_games in season_length:
games = range(n_games)
database = []
for game in games:
game_row = {}
matchup = np.random.choice(team_numbers, size=2, replace=False)
team0 = true_index[matchup[0]]
team1 = true_index[matchup[1]]
game_row['Team A'] = team0
game_row['Team B'] = team1
game_row['Index A'] = matchup[0]
game_row['Index B'] = matchup[1]
deltaRating = true_rating[team0] - true_rating[team1]
p = 1 / (1 + np.exp(-deltaRating))
randomNumber = np.random.random()
outcome_A = p > randomNumber
game_row['Team A Wins'] = outcome_A
database.append(game_row)
simulatedSeason = pd.DataFrame(database)
simulatedSeasons.append(simulatedSeason)
with pm.Model() as model:
rating = pm.Normal('rating', mu=0, sd=1, shape=n_teams)
deltaRating = rating[simulatedSeason['Index A'].values] - rating[simulatedSeason['Index B'].values]
p = 1 / (1 + np.exp(-deltaRating))
win = pm.Bernoulli('win', p, observed=simulatedSeason['Team A Wins'].values)
trace = pm.sample(1000)
traces.append(trace)
simulatedSeasons[1].groupby('Team A').sum()
1 / (1 + np.exp(-2))
sns.set_context('poster')
f, axes = plt.subplots(nrows=3, ncols=1, figsize=(10, 15))
# plt.figure(figsize=(10, 5))
for ax_index, n_games in enumerate(season_length):
ax = axes[ax_index]
for team_number in team_numbers:
rating_posterior = traces[ax_index]['rating'][:, team_number]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label=team_name, ax=ax)
ax.legend()
ax.set_xlabel('Rating')
ax.set_ylabel('Density')
ax.set_title("Season length: {} games".format(n_games))
plt.tight_layout()
simulatedSeason = pd.DataFrame(database)
simulatedSeason
project_dir = '/Users/rbussman/Projects/BUDA/buda-ratings'
scores_dir = os.path.join(project_dir, 'data', 'raw', 'game_scores')
simulatedSeason.to_csv(os.path.join(scores_dir, 'artificial_scores_big.csv'))
"""
Explanation: We have three teams with quite different true ratings. Let's simulate a season of games between them.
End of explanation
"""
simulatedSeason.shape
with pm.Model() as model:
rating = pm.Normal('rating', mu=0, sd=1, shape=n_teams)
deltaRating = rating[simulatedSeason['Index A'].values] - rating[simulatedSeason['Index B'].values]
p = 1 / (1 + np.exp(-deltaRating))
win = pm.Bernoulli('win', p, observed=simulatedSeason['Team A Wins'].values)
with model:
trace = pm.sample(1000)
sns.set_context('poster')
plt.figure(figsize=(10, 5))
for team_number in team_numbers:
rating_posterior = trace['rating'][:, team_number]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label=team_name)
plt.legend()
plt.xlabel('Rating')
plt.ylabel('Density')
"""
Explanation: Prior on each team is a normal distribution with mean of 0 and standard deviation of 1.
End of explanation
"""
sns.set_context('poster')
plt.figure(figsize=(10, 5))
for team_number in team_numbers[:-1]:
rating_posterior = trace['rating'][:, team_number] - trace['rating'][:, -1]
team_name = true_index[team_number]
sns.distplot(rating_posterior, label="{} - {}".format(team_name, true_index[team_numbers[-1]]))
plt.legend()
plt.xlabel('Rating')
plt.ylabel('Density')
gt0 = rating_posterior > 0
print("Percentage of samples where 'All Stars' have a higher rating than 'Just Having Fun': {:.2f}%".format(
100. * rating_posterior[gt0].size / rating_posterior.size))
"""
Explanation: Hmm, something looks odd here. The posterior pdfs for these teams have significant overlap. Does this mean that our model is not sure about which team is better?
End of explanation
"""
rating_posterior
.75 ** 14
estimatedratings = trace['rating'].mean(axis=0)
estimatedratings
for i in team_numbers:
    name = true_index[i]
    print("{}: True: {:.2f}; Estimated: {:.2f}".format(name, true_rating[name], estimatedratings[i]))
"""
Explanation: Ah, so the posterior pdf is actually quite clear: There is a 99.22% chance that "All Stars" are better than "Just Having Fun".
How does our confidence change as a function of the number of games in the season?
End of explanation
"""
|
AllenDowney/ModSimPy | soln/filter_soln.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
ohm = UNITS.ohm
farad = UNITS.farad
volt = UNITS.volt
Hz = UNITS.Hz
second = UNITS.second
params = Params(
R1 = 1e6 * ohm,
C1 = 1e-9 * farad,
A = 5 * volt,
f = 1000 * Hz)
"""
Explanation: Low pass filter
The following circuit diagram (from Wikipedia) shows a low-pass filter built with one resistor and one capacitor.
A "filter" is a circuit takes a signal, $V_{in}$, as input and produces a signal, $V_{out}$, as output. In this context, a "signal" is a voltage that changes over time.
A filter is "low-pass" if it allows low-frequency signals to pass from $V_{in}$ to $V_{out}$ unchanged, but it reduces the amplitude of high-frequency signals.
By applying the laws of circuit analysis, we can derive a differential equation that describes the behavior of this system. By solving the differential equation, we can predict the effect of this circuit on any input signal.
Suppose we are given $V_{in}$ and $V_{out}$ at a particular instant in time. By Ohm's law, which is a simple model of the behavior of resistors, the instantaneous current through the resistor is:
$ I_R = (V_{in} - V_{out}) / R $
where $R$ is resistance in ohms.
Assuming that no current flows through the output of the circuit, Kirchhoff's current law implies that the current through the capacitor is:
$ I_C = I_R $
According to a simple model of the behavior of capacitors, current through the capacitor causes a change in the voltage across the capacitor:
$ I_C = C \frac{d V_{out}}{dt} $
where $C$ is capacitance in farads (F).
Combining these equations yields a differential equation for $V_{out}$:
$ \frac{d }{dt} V_{out} = \frac{V_{in} - V_{out}}{R C} $
Follow the instructions below to simulate the low-pass filter for input signals like this:
$ V_{in}(t) = A \cos (2 \pi f t) $
where $A$ is the amplitude of the input signal, say 5 V, and $f$ is the frequency of the signal in Hz.
Params and System objects
Here's a Params object to contain the quantities we need. I've chosen values for R1 and C1 that might be typical for a circuit that works with audio signals.
End of explanation
"""
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
f, R1, C1 = params.f, params.R1, params.C1
init = State(V_out = 0)
omega = 2 * np.pi * f
tau = R1 * C1
cutoff = 1 / R1 / C1 / 2 / np.pi
t_end = 4 / f
dt = t_end / 4000
return System(params, init=init, t_end=t_end, dt=dt,
omega=omega, tau=tau, cutoff=cutoff.to(Hz))
"""
Explanation: Now we can pass the Params object make_system which computes some additional parameters and defines init.
omega is the frequency of the input signal in radians/second.
tau is the time constant for this circuit, which characterizes how long the initial transient lasts before the output settles into its steady-state response.
cutoff is the cutoff frequency for this circuit (in Hz), which marks the transition from low frequency signals, which pass through the filter unchanged, to high frequency signals, which are attenuated.
t_end is chosen so we run the simulation for 4 cycles of the input signal.
End of explanation
"""
system = make_system(params)
"""
Explanation: Let's make a System object.
End of explanation
"""
# Solution
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: V_out
t: time
system: System object with A, omega, R1 and C1
returns: dV_out/dt
"""
[V_out] = state
R1, C1 = system.R1, system.C1
A, omega = system.A, system.omega
V_in = A * np.cos(omega * t)
V_R1 = V_in - V_out
I_R1 = V_R1 / R1
I_C1 = I_R1
dV_out_dt = I_C1 / C1
return [dV_out_dt]
"""
Explanation: Exercise: Write a slope function that takes as an input a State object that contains V_out, and returns the derivative of V_out.
Note: The ODE solver requires the return value from slope_func to be a sequence, even if there is only one element. The simplest way to do that is to return a list with a single element:
return [dV_out_dt]
End of explanation
"""
slope_func(system.init, 0*UNITS.s, system)
"""
Explanation: Test the slope function with the initial conditions.
End of explanation
"""
results, details = run_ode_solver(system, slope_func)
details
results.head()
"""
Explanation: And then run the simulation. I suggest using t_eval=ts to make sure we have enough data points to plot and analyze the results.
End of explanation
"""
def plot_results(results):
xs = results.V_out.index
ys = results.V_out.values
t_end = get_last_label(results)
if t_end < 10:
xs *= 1000
xlabel = 'Time (ms)'
else:
xlabel = 'Time (s)'
plot(xs, ys)
decorate(xlabel=xlabel,
ylabel='$V_{out}$ (volt)',
legend=False)
plot_results(results)
"""
Explanation: Here's a function you can use to plot V_out as a function of time.
End of explanation
"""
fs = [1, 10, 100, 1000, 10000, 100000] * Hz
for i, f in enumerate(fs):
system = make_system(Params(params, f=f))
results, details = run_ode_solver(system, slope_func)
subplot(3, 2, i+1)
plot_results(results)
"""
Explanation: If things have gone according to plan, the amplitude of the output signal should be about 0.8 V.
Also, you might notice that it takes a few cycles for the signal to get to the full amplitude.
Sweeping frequency
Let's plot what V_out looks like for a range of frequencies:
End of explanation
"""
system = make_system(Params(params, f=1000*Hz))
results, details = run_ode_solver(system, slope_func)
V_out = results.V_out
plot_results(results)
"""
Explanation: At low frequencies, notice that there is an initial "transient" before the output gets to a steady-state sinusoidal output. The duration of this transient is a small multiple of the time constant, tau, which is 1 ms.
Estimating the output ratio
Let's compare the amplitudes of the input and output signals. Below the cutoff frequency, we expect them to be about the same. Above the cutoff, we expect the amplitude of the output signal to be smaller.
We'll start with a signal at f=1000 Hz, which is above the cutoff frequency for these parameter values.
End of explanation
"""
def compute_vin(results, system):
"""Computes V_in as a TimeSeries.
results: TimeFrame with simulation results
system: System object with A and omega
returns: TimeSeries
"""
A, omega = system.A, system.omega
ts = results.index.values * UNITS.second
V_in = A * np.cos(omega * ts)
return TimeSeries(V_in, results.index, name='V_in')
"""
Explanation: The following function computes V_in as a TimeSeries:
End of explanation
"""
V_in = compute_vin(results, system)
plot(V_out)
plot(V_in)
decorate(xlabel='Time (s)',
ylabel='V (volt)')
"""
Explanation: Here's what the input and output look like. Notice that the output is not just smaller; it is also "out of phase"; that is, the peaks of the output are shifted to the right, relative to the peaks of the input.
End of explanation
"""
def estimate_A(series):
"""Estimate amplitude.
series: TimeSeries
returns: amplitude in volts
"""
return (series.max() - series.min()) / 2
"""
Explanation: The following function estimates the amplitude of a signal by computing half the distance between the min and max.
End of explanation
"""
A_in = estimate_A(V_in)
"""
Explanation: The amplitude of V_in should be near 5 (but not exact because we evaluated it at a finite number of points).
End of explanation
"""
A_out = estimate_A(V_out)
"""
Explanation: The amplitude of V_out should be lower.
End of explanation
"""
ratio = A_out / A_in
ratio.to_base_units()
"""
Explanation: And here's the ratio between them.
End of explanation
"""
# Solution
def estimate_ratio(V1, V2):
"""Estimate the ratio of amplitudes.
V1: TimeSeries
V2: TimeSeries
returns: amplitude ratio
"""
a1 = estimate_A(V1)
a2 = estimate_A(V2)
return a1 / a2
"""
Explanation: Exercise: Encapsulate the code we have so far in a function that takes two TimeSeries objects and returns the ratio between their amplitudes.
End of explanation
"""
estimate_ratio(V_out, V_in)
"""
Explanation: And test your function.
End of explanation
"""
corr = correlate(V_out, V_in, mode='same')
corr = TimeSeries(corr, V_in.index)
plot(corr, color='C4')
decorate(xlabel='Lag (s)',
ylabel='Correlation')
"""
Explanation: Estimating phase offset
The delay between the peak of the input and the peak of the output is called a "phase shift" or "phase offset", usually measured in fractions of a cycle, degrees, or radians.
To estimate the phase offset between two signals, we can use cross-correlation. Here's what the cross-correlation looks like between V_out and V_in:
End of explanation
"""
peak_time = corr.idxmax() * UNITS.second
"""
Explanation: The location of the peak in the cross correlation is the estimated shift between the two signals, in seconds.
End of explanation
"""
period = 1 / system.f
(peak_time / period).to_reduced_units()
"""
Explanation: We can express the phase offset as a multiple of the period of the input signal:
End of explanation
"""
frac, whole = np.modf(peak_time / period)
frac = frac.to_reduced_units()
"""
Explanation: We don't care about whole period offsets, only the fractional part, which we can get using modf:
End of explanation
"""
frac * 360 * UNITS.degree
"""
Explanation: Finally, we can convert from a fraction of a cycle to degrees:
End of explanation
"""
# Solution
def estimate_offset(V1, V2, system):
"""Estimate phase offset.
V1: TimeSeries
V2: TimeSeries
system: System object with f
returns: amplitude ratio
"""
corr = correlate(V1, V2, mode='same')
corr = TimeSeries(corr, V1.index)
peak_time = corr.idxmax() * UNITS.second
period = 1 / system.f
frac, whole = np.modf(peak_time / period)
frac = frac.to_reduced_units()
return -frac * 360 * UNITS.degree
"""
Explanation: Exercise: Encapsulate this code in a function that takes two TimeSeries objects and a System object, and returns the phase offset in degrees.
Note: by convention, if the output is shifted to the right, the phase offset is negative.
End of explanation
"""
estimate_offset(V_out, V_in, system)
"""
Explanation: Test your function.
End of explanation
"""
# Solution
def sweep_frequency(fs, params):
ratios = SweepSeries()
offsets = SweepSeries()
for i, f in enumerate(fs):
system = make_system(Params(params, f=f))
results, details = run_ode_solver(system, slope_func)
V_out = results.V_out
V_in = compute_vin(results, system)
f = magnitude(f)
ratios[f] = estimate_ratio(V_out, V_in)
offsets[f] = estimate_offset(V_out, V_in, system)
return ratios, offsets
"""
Explanation: Sweeping frequency again
Exercise: Write a function that takes as parameters an array of input frequencies and a Params object.
For each input frequency it should run a simulation and use the results to estimate the output ratio (dimensionless) and phase offset (in degrees).
It should return two SweepSeries objects, one for the ratios and one for the offsets.
End of explanation
"""
fs = 10 ** linspace(0, 4, 9) * Hz
ratios, offsets = sweep_frequency(fs, params)
"""
Explanation: Run your function with these frequencies.
End of explanation
"""
plot(ratios, color='C2', label='output ratio')
decorate(xlabel='Frequency (Hz)',
ylabel='$V_{out} / V_{in}$')
"""
Explanation: We can plot output ratios like this:
End of explanation
"""
def plot_ratios(ratios, system):
"""Plot output ratios.
"""
# axvline can't handle a Quantity with units
cutoff = magnitude(system.cutoff)
plt.axvline(cutoff, color='gray', alpha=0.4)
plot(ratios, color='C2', label='output ratio')
decorate(xlabel='Frequency (Hz)',
ylabel='$V_{out} / V_{in}$',
xscale='log', yscale='log')
plot_ratios(ratios, system)
"""
Explanation: But it is useful and conventional to plot ratios on a log-log scale. The vertical gray line shows the cutoff frequency.
End of explanation
"""
def plot_offsets(offsets, system):
"""Plot phase offsets.
"""
# axvline can't handle a Quantity with units
cutoff = magnitude(system.cutoff)
plt.axvline(cutoff, color='gray', alpha=0.4)
plot(offsets, color='C9')
decorate(xlabel='Frequency (Hz)',
ylabel='Phase offset (degree)',
xscale='log')
plot_offsets(offsets, system)
"""
Explanation: This plot shows the cutoff behavior more clearly. Below the cutoff, the output ratio is close to 1. Above the cutoff, it drops off linearly, on a log scale, which indicates that output ratios for high frequencies are practically 0.
Here's the plot for phase offset, on a log-x scale:
End of explanation
"""
# Solution
def output_ratios(fs, system):
R1, C1, omega = system.R1, system.C1, system.omega
omegas = 2 * np.pi * fs
rco = R1 * C1 * omegas
A = 1 / np.sqrt(1 + rco**2)
return SweepSeries(A, magnitude(fs))
"""
Explanation: For low frequencies, the phase offset is near 0. For high frequencies, it approaches 90 degrees.
Analysis
By analysis we can show that the output ratio for this signal is
$A = \frac{1}{\sqrt{1 + (R C \omega)^2}}$
where $\omega = 2 \pi f$, and the phase offset is
$ \phi = \arctan (- R C \omega)$
Exercise: Write functions that take an array of input frequencies and returns $A(f)$ and $\phi(f)$ as SweepSeries objects. Plot these objects and compare them with the results from the previous section.
End of explanation
"""
A = output_ratios(fs, system)
# Solution
def phase_offsets(fs, system):
R1, C1, omega = system.R1, system.C1, system.omega
omegas = 2 * np.pi * fs
rco = R1 * C1 * omegas
phi = np.arctan(-rco).to(UNITS.degree)
return SweepSeries(phi, magnitude(fs))
"""
Explanation: Test your function:
End of explanation
"""
phi = phase_offsets(fs, system)
"""
Explanation: Test your function:
End of explanation
"""
plot(A, ':', color='gray')
plot_ratios(ratios, system)
plot(phi, ':', color='gray')
plot_offsets(offsets, system)
"""
Explanation: Plot the theoretical results along with the simulation results and see if they agree.
End of explanation
"""
|
geo-fluid-dynamics/phaseflow-fenics | tutorials/FEniCS/00-StefanProblem.ipynb | mit | import fenics
"""
Explanation: Solving the Stefan problem with finite elements
This Jupyter notebook shows how to solve a Stefan problem with finite elements using FEniCS.
Python packages
Import the Python packages for use in this notebook.
We use the finite element method library FEniCS.
End of explanation
"""
import matplotlib
"""
Explanation: |Note|
|----|
| This Jupyter notebook server is using FEniCS 2017.2.0 from ppa:fenics-packages/fenics, installed via apt on Ubuntu 16.04.|
FEniCS has convenient plotting features that don't require us to import matplotlib; but using matplotlib directly will allow us to annotate the plots.
End of explanation
"""
%matplotlib inline
"""
Explanation: Tell this notebook to embed graphical outputs from matplotlib, including those made by fenics.plot.
End of explanation
"""
import numpy
"""
Explanation: We will also use numpy.
End of explanation
"""
def semi_phase_field(T, T_r, r):
return 0.5*(1. + numpy.tanh((T_r - T)/r))
regularization_central_temperature = 0.
temperatures = numpy.linspace(
regularization_central_temperature - 0.5,
regularization_central_temperature + 0.5,
1000)
legend_strings = []
for regularization_smoothing_parameter in (0.1, 0.05, 0.025):
    matplotlib.pyplot.plot(
        temperatures,
        semi_phase_field(
            T = temperatures,
            T_r = regularization_central_temperature,
            r = regularization_smoothing_parameter))
    legend_strings.append(
        "$r = " + str(regularization_smoothing_parameter) + "$")
matplotlib.pyplot.xlabel("$T$")
matplotlib.pyplot.ylabel("$\phi$")
matplotlib.pyplot.legend(legend_strings)
"""
Explanation: Nomenclature
|||
|-|-|
|$\mathbf{x}$| point in the spatial domain|
|$t$| time |
|$T = T(\mathbf{x},t)$| temperature field |
|$\phi$ | solid volume fraction |
|$()_t = \frac{\partial}{\partial t}()$| time derivative |
|$T_r$| central temperature of the regularization |
|$r$| smoothing parameter of the regularization |
|$\mathrm{Ste}$| Stefan number|
|$\Omega$| spatial domain |
|$\mathbf{V}$| finite element function space |
|$\psi$| test function |
|$T_h$| hot boundary temperature |
|$T_c$| cold boundary temperature |
|$\Delta t$| time step size |
Governing equations
To model the Stefan problem with a single domain, consider the enthalpy balance from [2] with zero velocity and unit Prandtl number.
\begin{align}
T_t - \nabla \cdot (\nabla T) - \frac{1}{\mathrm{Ste}}\phi_t &= 0
\end{align}
where the regularized semi-phase-field (representing the solid volume fraction) is
\begin{align}
\phi(T) = \frac{1}{2}\left(1 + \tanh{\frac{T_r - T}{r}} \right)
\end{align}
This is essentially a smoothed heaviside function, which approaches the exact heaviside function as $r$ approaches zero. Let's visualize this.
End of explanation
"""
N = 1000
mesh = fenics.UnitIntervalMesh(N)
"""
Explanation: Mesh
Define a fine mesh to capture the rapid variation in $\phi(T)$.
End of explanation
"""
P1 = fenics.FiniteElement('P', mesh.ufl_cell(), 1)
"""
Explanation: Finite element function space, test function, and solution function
Lets use piece-wise linear elements.
End of explanation
"""
V = fenics.FunctionSpace(mesh, P1)
"""
Explanation: |Note|
|----|
|fenics.FiniteElement requires the mesh.ufl_cell() argument to determine some aspects of the domain (e.g. that the spatial domain here is one-dimensional).|
Make the finite element function space $V$, which enumerates the finite element basis functions on each cell of the mesh.
End of explanation
"""
psi = fenics.TestFunction(V)
"""
Explanation: Make the test function $\psi \in \mathbf{V}$.
End of explanation
"""
T = fenics.Function(V)
"""
Explanation: Make the solution function $T \in \mathbf{V}$.
End of explanation
"""
stefan_number = 0.045
Ste = fenics.Constant(stefan_number)
"""
Explanation: Benchmark parameters
Set the Stefan number. We define a fenics.Constant for use in the variational form so that FEniCS can more efficiently compile the finite element code.
End of explanation
"""
regularization_central_temperature = 0.
T_r = fenics.Constant(regularization_central_temperature)
regularization_smoothing_parameter = 0.005
r = fenics.Constant(regularization_smoothing_parameter)
tanh = fenics.tanh
def phi(T):
return 0.5*(1. + fenics.tanh((T_r - T)/r))
"""
Explanation: Define the regularized semi-phase-field for use with FEniCS.
End of explanation
"""
hot_wall_temperature = 1.
T_h = fenics.Constant(hot_wall_temperature)
cold_wall_temperature = -0.01
T_c = fenics.Constant(cold_wall_temperature)
"""
Explanation: Furthermore the benchmark problem involves hot and cold walls with constant temperatures $T_h$ and $T_c$, respectively.
End of explanation
"""
initial_melt_thickness = 10./float(N)
T_n = fenics.interpolate(
fenics.Expression(
"(T_h - T_c)*(x[0] < x_m0) + T_c",
T_h = hot_wall_temperature,
T_c = cold_wall_temperature,
x_m0 = initial_melt_thickness,
element = P1),
V)
"""
Explanation: Time discretization
To solve the initial value problem, we will prescribe the initial values, and then take discrete steps forward in time which solve the governing equations.
We set the initial values such that a small layer of melt already exists touching the hot wall.
\begin{align}
T^0 =
\begin{cases}
T_h, && x_0 < x_{m,0} \
T_c, && \mathrm{otherwise}
\end{cases}
\end{align}
Interpolate these values to create the initial solution function.
End of explanation
"""
fenics.plot(T_n)
matplotlib.pyplot.title(r"$T^0$")
matplotlib.pyplot.xlabel("$x$")
matplotlib.pyplot.show()
fenics.plot(phi(T_n))
matplotlib.pyplot.title(r"$\phi(T^0)$")
matplotlib.pyplot.xlabel("$x$")
matplotlib.pyplot.show()
"""
Explanation: Let's look at the initial values now.
End of explanation
"""
timestep_size = 1.e-2
Delta_t = fenics.Constant(timestep_size)
T_t = (T - T_n)/Delta_t
phi_t = (phi(T) - phi(T_n))/Delta_t
"""
Explanation: |Note|
|----|
|$\phi$ undershoots and overshoots the expected minimum and maximum values near the rapid change. This is a common feature of interior layers in finite element solutions. Here, fenics.plot projected $\phi(T^0)$ onto a piece-wise linear basis for plotting. This could suggest we will encounter numerical issues. We'll see what happens.|
For the time derivative terms, we apply the first-order implicit Euler finite difference time discretization, i.e.
\begin{align}
T_t = \frac{T^{n+1} - T^n}{\Delta t} \
\phi_t = \frac{\phi\left(T^{n+1}\right) - \phi\left(T^n\right)}{\Delta t}
\end{align}
|Note|
|----|
|We will use the shorthand $T = T^{n+1}$, since we will always be solving for the latest discrete time.|
Choose a time step size and set the discrete time derivatives.
End of explanation
"""
dot, grad = fenics.dot, fenics.grad
F = (psi*(T_t - 1./Ste*phi_t) + dot(grad(psi), grad(T)))*fenics.dx
"""
Explanation: Variational form
To obtain the finite element weak form, we follow the standard Ritz-Galerkin method. Therefore, we multiply the strong form from the left by the test function $\psi$ from the finite element function space $V$ and integrate over the spatial domain $\Omega$. This gives us the variational problem: Find $T \in V$ such that
\begin{align}
(\psi,T_t - \frac{1}{\mathrm{Ste}}\phi_t) + (\nabla \psi, \nabla T) = 0 \quad \forall \psi \in V
\end{align}
|Note|
|----|
|We denote integrating inner products over the domain as $(v,u) = \int_\Omega v u d \mathbf{x}$.|
Define the nonlinear variational form for FEniCS.
|Note|
|----|
|The term $\phi(T)$ is nonlinear.|
End of explanation
"""
JF = fenics.derivative(F, T, fenics.TrialFunction(V))
"""
Explanation: Linearization
Notice that $\mathcal{F}$ is a nonlinear variational form. FEniCS will solve the nonlinear problem using Newton's method. This requires computing the Jacobian (formally the Gâteaux derivative) of the nonlinear variational form, yielding a sequence of linearized problems whose solutions converge toward the nonlinear solution.
We could manually define the Jacobian; but thankfully FEniCS can do this for us.
|Note|
|----|
|When solving linear variational problems in FEniCS, one defines the linear variational form using fenics.TrialFunction instead of fenics.Function (while both approaches will need fenics.TestFunction). When solving nonlinear variational problems with FEniCS, we only need fenics.TrialFunction to define the linearized problem, since it is the linearized problem which will be assembled into a linear system and solved.|
End of explanation
"""
hot_wall = "near(x[0], 0.)"
cold_wall = "near(x[0], 1.)"
"""
Explanation: Boundary conditions
We need boundary conditions before we can define a variational problem (i.e. in this case a boundary value problem).
We consider a constant hot temperature on the left wall, a constant cold temperature on the right wall. Because the problem's geometry is simple, we can identify the boundaries with the following piece-wise function.
\begin{align}
T(\mathbf{x}) &=
\begin{cases}
T_h , && x_0 = 0 \
T_c , && x_0 = 1
\end{cases}
\end{align}
End of explanation
"""
boundary_conditions = [
fenics.DirichletBC(V, hot_wall_temperature, hot_wall),
fenics.DirichletBC(V, cold_wall_temperature, cold_wall)]
"""
Explanation: Define the boundary conditions for FEniCS.
End of explanation
"""
problem = fenics.NonlinearVariationalProblem(F, T, boundary_conditions, JF)
"""
Explanation: The variational problem
Now we have everything we need to define the variational problem for FEniCS.
End of explanation
"""
solver = fenics.NonlinearVariationalSolver(problem)
"""
Explanation: The benchmark solution
Finally we instantiate the solver.
End of explanation
"""
solver.solve()
"""
Explanation: and solve the problem.
End of explanation
"""
def plot(T):
fenics.plot(T)
matplotlib.pyplot.title("Temperature")
matplotlib.pyplot.xlabel("$x$")
matplotlib.pyplot.ylabel("$T$")
matplotlib.pyplot.show()
fenics.plot(phi(T))
matplotlib.pyplot.title("Solid volume fraction")
matplotlib.pyplot.xlabel("$x$")
matplotlib.pyplot.ylabel("$\phi$")
matplotlib.pyplot.show()
plot(T)
"""
Explanation: |Note|
|----|
|solver.solve modifies the solution function T in place.|
Now plot the temperature and solid volume fraction.
End of explanation
"""
for timestep in range(10):
T_n.vector()[:] = T.vector()
solver.solve()
plot(T)
"""
Explanation: Let's run further.
End of explanation
"""
|
albahnsen/PracticalMachineLearningClass | exercises/E20-NeuralNetworksKeras.ipynb | mit | import numpy as np
import keras
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: E20- Neural Networks in Keras
Use keras framework to solve the below exercises.
End of explanation
"""
# Import dataset
data = pd.read_csv('https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/universityGraduateAdmissions.csv', index_col=0)
data.head()
data.columns
X = data.drop(data.columns[-1], axis=1)
Y = data[data.columns[-1]]
from sklearn.model_selection import train_test_split
xTrain, xTest, yTrain, yTest = train_test_split(X,Y,test_size=0.3, random_state=22)
from keras import initializers
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
"""
Explanation: 20.1 Predicting Student Admissions with Neural Networks
In this notebook, we predict student admissions to graduate schools based on six pieces of data:
GRE Scores (Test)
TOEFL Scores (Test)
University Ranking (1-5)
Statement of Purpose (SOP) and Letter of Recommendation (LOR) Strength (each out of 5)
Undergraduate GPA Scores (Grades)
Research Experience ( either 0 or 1 )
Exercise: Design and train a shallow neural network to predict the chance of admission for each entry. Choose the number of hidden layers and neurons that minimizes the error.
End of explanation
"""
# Create moons dataset.
from sklearn.datasets import make_moons
x_train, y_train = make_moons(n_samples=1000, noise= 0.2, random_state=3)
plt.figure(figsize=(12, 8))
plt.scatter(x_train[:, 0], x_train[:,1], c=y_train, s=40, cmap=plt.cm.Spectral);
"""
Explanation: 20.2 Decision Boundary -- Moons Dataset
Exercise: Use the keras framework to find a decision boundary for the points in the make_moons dataset.
End of explanation
"""
model = 'Sequential neural network in keras'
def plot_decision_region(model, X, pred_fun):
min_x = np.min(X[:, 0])
max_x = np.max(X[:, 0])
min_y = np.min(X[:, 1])
max_y = np.max(X[:, 1])
min_x = min_x - (max_x - min_x) * 0.05
max_x = max_x + (max_x - min_x) * 0.05
min_y = min_y - (max_y - min_y) * 0.05
max_y = max_y + (max_y - min_y) * 0.05
x_vals = np.linspace(min_x, max_x, 30)
y_vals = np.linspace(min_y, max_y, 30)
XX, YY = np.meshgrid(x_vals, y_vals)
grid_r, grid_c = XX.shape
ZZ = np.zeros((grid_r, grid_c))
for i in range(grid_r):
for j in range(grid_c):
'''
            Here 'model' is the neural network you previously trained.
'''
ZZ[i, j] = pred_fun(model, XX[i, j], YY[i, j])
    plt.contourf(XX, YY, ZZ, 30, cmap=plt.cm.coolwarm, vmin=0, vmax=1)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
def pred_fun(model,x1, x2):
'''
    Here 'model' is the neural network you previously trained.
'''
xval = np.array([[x1, x2]])
return model.predict(xval)[0, 0]
plt.figure(figsize = (8,16/3))
'''
Here 'model' is the neural network you previously trained.
'''
plot_decision_region(model, x_train, pred_fun)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train, s=40, cmap=plt.cm.Spectral)
"""
Explanation: Hint: Use the following function to plot the decision boundary:
End of explanation
"""
|
esa-as/2016-ml-contest | Facies_classification.ipynb | apache-2.0 | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
"""
Explanation: Facies classification using Machine Learning
Brendon Hall, Enthought
This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a support vector machine to classify facies types. Support vector machines (or SVMs) are a type of supervised learning model that can be trained on data to perform classification and regression tasks. The SVM algorithm uses the training data to fit an optimal hyperplane between the different classes (or facies, in our case). We will use the SVM implementation in scikit-learn.
First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data.
Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.
We will then be ready to build the SVM classifier. We will demonstrate how to use the cross validation set to do model parameter selection.
Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data.
Exploring the dataset
First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.
End of explanation
"""
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
"""
Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
End of explanation
"""
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
"""
Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
End of explanation
"""
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
"""
Explanation: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Before we plot the well data, let's define a color map so the facies are represented by a consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add them to the training_data dataframe.
End of explanation
"""
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
"""
Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.
End of explanation
"""
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
"""
Explanation: Placing the log plotting code in a function makes it easy to plot the logs from multiple wells, and the function can be reused later to view the results when we apply the facies classification model to other wells. The function takes a list of colors and facies labels as parameters.
We then show the log plot for well SHRIMPLIN.
End of explanation
"""
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
"""
Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
End of explanation
"""
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
"""
Explanation: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point colored according to its facies. The same colormap is used to represent the 9 facies.
End of explanation
"""
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
"""
Explanation: Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
End of explanation
"""
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
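Under the hood, the scaler simply subtracts each column's mean and divides by its standard deviation. A numpy-only sketch of that whitening step, on made-up data:

```python
import numpy as np

toy_X = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])

# Column-wise standardization: zero mean, unit variance per feature
mu = toy_X.mean(axis=0)
sigma = toy_X.std(axis=0)   # StandardScaler uses the population (ddof=0) std
toy_scaled = (toy_X - mu) / sigma

print(toy_scaled.mean(axis=0), toy_scaled.std(axis=0))
```

The key point is that mu and sigma are learned from the training data only, then reused verbatim on any later data.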
"""
Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScaler class can be fit to the training set, and later used to standardize any subsequent data.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
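Conceptually, train_test_split just shuffles the row indices and slices off the requested fraction. A minimal numpy sketch of the same split (toy data, same 10% test fraction and fixed seed):

```python
import numpy as np

X_toy = np.arange(40).reshape(20, 2)   # 20 toy feature vectors
y_toy = np.arange(20)                  # matching labels

rng = np.random.RandomState(42)
idx = rng.permutation(len(X_toy))      # shuffle row indices
n_test = int(len(X_toy) * 0.1)         # test_size=0.1 -> 2 of 20 examples

test_idx, train_idx = idx[:n_test], idx[n_test:]
X_tr, X_te = X_toy[train_idx], X_toy[test_idx]
y_tr, y_te = y_toy[train_idx], y_toy[test_idx]

print(len(X_tr), len(X_te))
```

Shuffling before splitting matters here because the well data is ordered by depth, so a contiguous slice would not be representative.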
"""
Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the model. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set.
End of explanation
"""
from sklearn import svm
clf = svm.SVC()
"""
Explanation: Training the SVM classifier
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a support vector machine. The SVM is a map of the feature vectors as points in a multi dimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible.
The SVM implementation in scikit-learn takes a number of important parameters. First we create a classifier using the default settings.
End of explanation
"""
clf.fit(X_train,y_train)
"""
Explanation: Now we can train the classifier using the training set we created above.
End of explanation
"""
predicted_labels = clf.predict(X_test)
"""
Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
End of explanation
"""
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
"""
Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entry C[i][j] is the number of observations known to have facies i that were predicted to have facies j.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
End of explanation
"""
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
"""
Explanation: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 23 were correctly identified as SS, 21 were classified as CSiS and 2 were classified as FSiS.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
End of explanation
"""
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
"""
Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
End of explanation
"""
#model selection takes a few minutes; set this variable
#to False to skip the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
"""
Explanation: Model parameter selection
The classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.
We will consider two parameters. The parameter C is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if C is too large it may 'overfit' the data and fail to generalize when classifying new data. If C is too small then the model will not be good at fitting outliers and will have a large error on the training set.
The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function rbf kernel (the default). The gamma parameter describes the size of the radial basis functions, i.e. how far apart two vectors in the feature space can be and still be considered close.
We will train a series of classifiers with different values for C and gamma. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.
This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
End of explanation
"""
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
"""
Explanation: The best accuracy on the cross validation error curve was achieved for gamma = 1, and C = 10. We can now create and train an optimized classifier based on these parameters:
End of explanation
"""
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
"""
Explanation: Precision and recall are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the display_cm() function:
End of explanation
"""
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
"""
Explanation: To interpret these results, consider facies SS. In our test set, if a sample was labeled SS the probability the sample was correct is 0.8 (precision). If we know a sample has facies SS, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The F1 score combines both to give a single measure of relevancy of the classifier results.
These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies MS or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies BS or bafflestone has the best F1 score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies.
We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct:
End of explanation
"""
blind
"""
Explanation: Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
These results are comparable to those reported in Dubois et al. (2007).
Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind:
End of explanation
"""
y_blind = blind['Facies'].values
"""
Explanation: The label vector is just the Facies column:
End of explanation
"""
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
"""
Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe:
End of explanation
"""
X_blind = scaler.transform(well_features)
"""
Explanation: Now we can transform this with the scaler we made before:
End of explanation
"""
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
"""
Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe:
End of explanation
"""
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
"""
Explanation: Let's see how we did with the confusion matrix:
End of explanation
"""
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
"""
Explanation: We managed 0.71 using the test data, but it was from the same wells as the training data. This more reasonable test does not perform as well...
End of explanation
"""
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
"""
Explanation: ...but does remarkably well on the adjacent facies predictions.
End of explanation
"""
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
"""
Explanation: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data.
End of explanation
"""
X_unknown = scaler.transform(well_features)
"""
Explanation: The data needs to be scaled using the same constants we used for the training data.
End of explanation
"""
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
"""
Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.
End of explanation
"""
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
"""
Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation
"""
well_data.to_csv('well_data_with_facies.csv')
"""
Explanation: Finally we can write out a csv file with the well data along with the facies classification results.
End of explanation
"""
|
napsternxg/GET17_SNA | notebooks/Twitter.ipynb | gpl-3.0 | if not os.path.isfile(TWITTER_CONFIG_FILE):
with open(os.path.join(DATA_DIR, "twitter_config.sample.json")) as fp:
creds = json.load(fp)
for k in sorted(creds.keys()):
v = input("Enter %s:\t" % k)
creds[k] = v
print(creds)
with open(TWITTER_CONFIG_FILE, "w+") as fp:
json.dump(creds, fp, indent=4, sort_keys=True)
clear_output()
print("Printed credentials to file %s" % TWITTER_CONFIG_FILE)
with open(TWITTER_CONFIG_FILE) as fp:
creds = json.load(fp)
print(creds.keys())
auth = tw.OAuthHandler(creds["consumer_key"], creds["consumer_secret"])
auth.set_access_token(creds["access_token"], creds["access_token_secret"])
api = tw.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True,
retry_count=5, retry_delay=100,
)
print("Tweepy ready for search")
statuses = api.search(q=input("What is your search term?"), count=10)
len(statuses)
for status in statuses:
print(status.text)
def dict2df(data):
return pd.DataFrame(
list(data.items()),
columns=["item", "counts"]
).sort_values("counts", ascending=False)
def get_entities(statuses):
hashtags = defaultdict(int)
mentions = defaultdict(int)
keys = ("hashtags", "user_mentions")
for s in statuses:
entities = s.entities
if "hashtags" in entities:
e = map(lambda x: x["text"], entities["hashtags"])
for t in e:
hashtags[t] += 1
if "user_mentions" in entities:
e = map(lambda x: x["screen_name"], entities["user_mentions"])
for t in e:
mentions[t] += 1
return dict2df(hashtags), dict2df(mentions)
hashtags, mentions = get_entities(statuses)
len(statuses)
hashtags
mentions
"""
Explanation: Twitter Access Tokens
If you are proceeding further then you are expected to have created your Twitter application by following the steps from Twitter App Creation page.
Make sure you have the following details of your Twitter application readily available:
* 'access_token'
* 'access_token_secret'
* 'consumer_key'
* 'consumer_secret'
Please enter the value of each of the items as shown in your Twitter application, when prompted by the code below.
End of explanation
"""
current_user = api.me()
current_user
status
print(
"""Username: {}
Full Name: {}
# Followers: {}
# Friends: {}
# Statuses: {}""".format(
current_user.screen_name,
current_user.name,
current_user.followers_count,
current_user.friends_count,
current_user.statuses_count
)
)
"""
Explanation: Current user's information
End of explanation
"""
friends = []
for friend in tw.Cursor(api.friends, count=100).items():
friends.append(friend)
print("{} friends found for {}".format(len(friends), current_user.name))
df_friends = pd.DataFrame(
list(map(
lambda k: (k.id, k.name, k.friends_count, k.followers_count, k.statuses_count),
friends
)), columns=["id", "name", "friends", "followers", "statuses"]
).sort_values("followers", ascending=False).reset_index(drop=True)
df_friends.head(15)
network = np.zeros([df_friends.shape[0], df_friends.shape[0]])
network.shape
def get_friendship(id1, id2, verbose=False):
response = api.show_friendship(source_id=id1, target_id=id2)
if verbose:
print(response)
return response[0].following, response[1].following
get_friendship(df_friends["id"].values[0], df_friends["id"].values[1], verbose=True)
network[0, 0] = False
network[1, 0] = True
network[0:3, 0]
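The ego-network code below fills exactly this kind of matrix, one pair at a time. A self-contained toy version with hard-coded tie labels (no API calls; the indices and follow relationships are invented):

```python
import numpy as np

# Hypothetical tie labels: (source_index, target_index, source_follows_target)
ties = [(0, 1, True), (1, 0, False), (0, 2, True), (2, 0, True)]

n_users = 3
adj = np.zeros((n_users, n_users))
for i, j, follows in ties:
    adj[i, j] = follows   # row i follows column j

print(adj)
```

Because following is directional, the matrix is generally not symmetric: adj[0, 1] can be 1 while adj[1, 0] is 0.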
def generate_ego_network(df_friends):
network = np.zeros([df_friends.shape[0], df_friends.shape[0]])
processed_friendships=0
for i, fid1 in enumerate(df_friends["id"].values):
for j, fid2 in enumerate(df_friends["id"].values[i+1:], start=i+1):
try:
tie_labels = get_friendship(fid1, fid2)
processed_friendships += 1
            except Exception as e:
                print("Processed friendships = {}".format(processed_friendships))
                print("Error occurred: {}".format(e))
return network
network[i, j] = tie_labels[0]
network[j, i] = tie_labels[1]
return network
df_friends.tail()
"""
Explanation: Friends API
End of explanation
"""
statuses = [status for status in tw.Cursor(
    api.search, q=input("What is your search term?"), count=100).items(1000)]  # search API returns at most 100 tweets per page
len(statuses)
status = next(filter(lambda x: len(x.entities["hashtags"]), statuses))
status.entities
def get_entities(statuses, entity_type, text_property):
entity_counts = defaultdict(int)
entity_network = defaultdict(int)
for status in statuses:
for i, entity in enumerate(status.entities[entity_type]):
entity_counts[entity[text_property].lower()] += 1
for j, entity_2 in enumerate(status.entities[entity_type][i+1:], start=i+1):
entity_network[(
entity[text_property].lower(),
entity_2[text_property].lower()
)] += 1
return entity_counts, entity_network
entity_type="user_mentions"
text_property="screen_name"
entity_counts, entity_network = get_entities(statuses, entity_type, text_property)
df_entities = pd.DataFrame(list(entity_counts.items()),
columns=["entity", "counts"]).sort_values(
"counts", ascending=False
).reset_index(drop=True)
df_entities.head()
df_entities.head(20)
df_entity_pairs = pd.DataFrame([(k1, k2, v) for (k1,k2), v in entity_network.items()],
columns=[
"{}_1".format(entity_type),
"{}_2".format(entity_type),
"counts"]).sort_values(
"counts", ascending=False
).reset_index(drop=True)
df_entity_pairs.head()
df_entity_pairs.head(20)
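The pair-counting logic in get_entities can be exercised without touching the API by faking a couple of status objects (FakeStatus below is purely illustrative and only mimics the .entities attribute):

```python
from collections import defaultdict, namedtuple

FakeStatus = namedtuple("FakeStatus", ["entities"])

fake_statuses = [
    FakeStatus({"user_mentions": [{"screen_name": "Alice"},
                                  {"screen_name": "Bob"}]}),
    FakeStatus({"user_mentions": [{"screen_name": "alice"}]}),
]

counts = defaultdict(int)
pairs = defaultdict(int)
for status in fake_statuses:
    mentions = status.entities["user_mentions"]
    for i, m1 in enumerate(mentions):
        counts[m1["screen_name"].lower()] += 1            # per-user mention count
        for m2 in mentions[i + 1:]:
            pairs[(m1["screen_name"].lower(),
                   m2["screen_name"].lower())] += 1       # co-mention within one tweet

print(dict(counts), dict(pairs))
```

Lower-casing the screen names collapses "Alice" and "alice" into one node, which is why the counts above merge them.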
"""
Explanation: Generate user mention network
End of explanation
"""
G = nx.Graph()
G.add_nodes_from(entity_counts)
G.add_edges_from([
(k[0], k[1], {"weight": v})
for k, v in entity_network.items()
])
fig, ax = plt.subplots(1,1)
nx.draw_networkx(
G, with_labels=True,
node_size=[x[1]*3 for x in G.degree_iter()],
pos=nx.spring_layout(G),
ax=ax
)
ax.axis("off")
connected_components = sorted(nx.connected_component_subgraphs(G), key = len, reverse=True)
print("{} connected components found.".format(len(connected_components)))
fig, ax = plt.subplots(1,1)
nx.draw_networkx(
connected_components[0], with_labels=True,
node_size=[x[1]*5 for x in connected_components[0].degree_iter()],
pos=nx.spring_layout(connected_components[0]),
ax=ax
)
ax.axis("off")
fig, ax = plt.subplots(1,2, figsize=(16,8))
ax[0].hist(list(G.degree().values()), bins=list(range(max(G.degree().values()))), log=True)
ax[0].set_xlabel("Degree")
ax[0].set_ylabel("Frequency")
ax[1].hist(list(entity_counts.values()), bins=list(range(max(entity_counts.values()))), log=True)
ax[1].set_xlabel("Counts")
ax[1].set_ylabel("Frequency")
sns.despine(offset=10)
"""
Explanation: Plot network
End of explanation
"""
|
DistrictDataLabs/ceb-training | notes/BLS Timeseries Data Exploration.ipynb | mit | # Imports
import csv
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from itertools import groupby
from operator import itemgetter
"""
Explanation: BLS Timeseries Data Exploration
In this workbook, I've set up a data frame of Bureau of Labor Statistics time series data; your goal is to explore and visualize the time series data using pandas, matplotlib, seaborn, or even Bokeh!
End of explanation
"""
# Load the series data
info = pd.read_csv('../data/bls/series.csv')
def series_info(blsid, info=info):
return info[info.blsid == blsid]
# Use this function to lookup specific BLS series info.
series_info("LNS14000025")
# Load each series, grouping by BLS ID
def load_series_records(path='../data/bls/records.csv'):
with open(path, 'r') as f:
reader = csv.DictReader(f)
for blsid, rows in groupby(reader, itemgetter('blsid')):
# Read all the data from the file and sort
rows = list(rows)
rows.sort(key=itemgetter('period'))
# Extract specific data from each row, namely:
# The period at the month granularity
# The value as a float
periods = [pd.Period(row['period']).asfreq('M') for row in rows]
values = [float(row['value']) for row in rows]
yield pd.Series(values, index=periods, name=blsid)
series = pd.concat(list(load_series_records()), axis=1)
series
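Once the series carry a monthly PeriodIndex, the usual pandas time series tooling applies. For example, a rolling mean on a toy series (the BLS ID below is invented):

```python
import pandas as pd

toy_periods = pd.period_range("2015-01", periods=6, freq="M")
toy = pd.Series([4.0, 5.0, 6.0, 5.0, 4.0, 3.0],
                index=toy_periods, name="FAKE00000000")

# 3-month rolling mean smooths short-term noise in the series
smoothed = toy.rolling(window=3).mean()
print(smoothed)
```

The first two entries are NaN because a 3-month window is not complete until the third observation.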
"""
Explanation: Data Loading
The data is stored in a zip file in the data directory called data/bls.zip -- unzip this file and there are two CSV files. The first, series.csv, is a description of the various time series in the data. The second, records.csv, is the complete record of the time series data, with the associated time series ID.
We will load the series information into its own dataframe for quick lookup (like a database) and then create a dataframe of each individual series' data, identified by its ID. There is more information in the CSV, which you can explore if you'd like.
End of explanation
"""
|
alexandrnikitin/algorithm-sandbox | courses/DAT256x/Module03/03-05-Transformations Eigenvectors and Eigenvalues.ipynb | mit | import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2]])
t = A@v
print (t)
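The same product can be computed by hand with the row-times-column (RC) rule, which makes the mechanics of the transformation explicit:

```python
import numpy as np

A = np.array([[2, 3],
              [5, 2]])
v = np.array([1, 2])

# Multiply each row of A element-wise by v and sum -- the RC rule, row by row
t_manual = np.array([sum(row[k] * v[k] for k in range(len(v))) for row in A])
print(t_manual)   # same result as A @ v
```

Writing it out this way shows that each component of the output is a dot product of one matrix row with the input vector.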
"""
Explanation: Transformations, Eigenvectors, and Eigenvalues
Matrices and vectors are used together to manipulate spatial dimensions. This has a lot of applications, including the mathematical generation of 3D computer graphics, geometric modeling, and the training and optimization of machine learning algorithms. We're not going to cover the subject exhaustively here; but we'll focus on a few key concepts that are useful to know when you plan to work with machine learning.
Linear Transformations
You can manipulate a vector by multiplying it with a matrix. The matrix acts as a function that operates on an input vector to produce a vector output. Specifically, matrix multiplications of vectors are linear transformations that transform the input vector into the output vector.
For example, consider this matrix A and vector v:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
We can define a transformation T like this:
$$ T(\vec{v}) = A\vec{v} $$
To perform this transformation, we simply calculate the dot product by applying the RC rule; multiplying each row of the matrix by the single column of the vector:
$$\begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\end{bmatrix}$$
Here's the calculation in Python:
End of explanation
"""
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2],
[1,1]])
t = A@v
print (t)
import numpy as np
v = np.array([1,2])
A = np.array([[1,2],
[2,1]])
t = A@v
print (t)
"""
Explanation: In this case, both the input vector and the output vector have 2 components - in other words, the transformation takes a 2-dimensional vector and produces a new 2-dimensional vector; which we can indicate like this:
$$ T: \mathbb{R}^{2} \to \mathbb{R}^{2} $$
Note that the output vector may have a different number of dimensions from the input vector; so the matrix function might transform the vector from one space to another - or in notation, $\mathbb{R}^{n} \to \mathbb{R}^{m}$.
For example, let's redefine matrix A, while retaining our original definition of vector v:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
Now if we once again define T like this:
$$ T(\vec{v}) = A\vec{v} $$
We apply the transformation like this:
$$\begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\\3\end{bmatrix}$$
So now, our transformation transforms the vector from 2-dimensional space to 3-dimensional space:
$$ T: \mathbb{R}^{2} \to \mathbb{R}^{3} $$
Here it is in Python:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([t,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
"""
Explanation: Transformations of Magnitude and Amplitude
When you multiply a vector by a matrix, you transform it in at least one of the following two ways:
* Scale the length (magnitude) of the vector to make it longer or shorter
* Change the direction (amplitude) of the vector
For example, consider the following matrix and vector:
$$ A = \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\0\end{bmatrix}$$
As before, we transform the vector v by multiplying it with the matrix A:
\begin{equation}\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}\end{equation}
In this case, the resulting vector has changed in length (magnitude), but has not changed its direction (amplitude).
Let's visualize that in Python:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[0,-1],
[1,0]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)
plt.show()
"""
Explanation: The original vector v is shown in orange, and the transformed vector t is shown in blue - note that t has the same direction (amplitude) as v but a greater length (magnitude).
Now let's use a different matrix to transform the vector v:
\begin{equation}\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}0\\1\end{bmatrix}\end{equation}
This time, the resulting vector has been changed to a different amplitude, but has the same magnitude.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,1],
[1,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)
plt.show()
"""
Explanation: Now let's change the matrix one more time:
\begin{equation}\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\1\end{bmatrix}\end{equation}
Now our resulting vector has been transformed to a new amplitude and magnitude - the transformation has affected both direction and scale.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,1])
A = np.array([[5,2],
[3,1]])
b = np.array([-2,-6])
t = A@v + b
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=15)
plt.show()
"""
Explanation: Affine Transformations
An affine transformation multiplies a vector by a matrix and adds an offset vector, sometimes referred to as a bias; like this:
$$T(\vec{v}) = A\vec{v} + \vec{b}$$
For example:
\begin{equation}\begin{bmatrix}5 & 2\\3 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\1\end{bmatrix} + \begin{bmatrix}-2\\-6\end{bmatrix} = \begin{bmatrix}5\\-2\end{bmatrix}\end{equation}
This kind of transformation is actually the basis of linear regression, which is a core foundation for machine learning. The matrix defines the features, the first vector is the coefficients, and the bias vector is the intercept.
Here's an example of an affine transformation in Python:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t1 = A@v
print (t1)
t2 = 2*v
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
"""
Explanation: Eigenvectors and Eigenvalues
So we can see that when you transform a vector using a matrix, we change its direction, length, or both. When the transformation only affects scale (in other words, the output vector has a different magnitude but the same amplitude as the input vector), the matrix multiplication for the transformation is the equivalent operation as some scalar multiplication of the vector.
For example, earlier we examined the following transformation that dot-multiplies a vector by a matrix:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
You can achieve the same result by multiplying the vector by the scalar value 2:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
The following Python performs both of these calculations and shows the results, which are identical.
End of explanation
"""
import numpy as np
A = np.array([[2,0],
[0,3]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
"""
Explanation: In cases like these, where a matrix transformation is the equivalent of a scalar-vector multiplication, the scalar-vector pairs that correspond to the matrix are known respectively as eigenvalues and eigenvectors. We generally indicate eigenvalues using the Greek letter lambda (λ), and the formula that defines eigenvalues and eigenvectors with respect to a transformation is:
$$ T(\vec{v}) = \lambda\vec{v}$$
Where the vector v is an eigenvector and the value λ is an eigenvalue for transformation T.
When the transformation T is represented as a matrix multiplication, as in this case where the transformation is represented by matrix A:
$$ T(\vec{v}) = A\vec{v} = \lambda\vec{v}$$
Then v is an eigenvector and λ is an eigenvalue of A.
A matrix can have multiple eigenvector-eigenvalue pairs, and you can calculate them manually. However, it's generally easier to use a tool or programming language. For example, in Python you can use the linalg.eig function, which returns an array of eigenvalues and a matrix of the corresponding eigenvectors for the specified matrix.
Here's an example that returns the eigenvalue and eigenvector pairs for the following matrix:
$$A=\begin{bmatrix}2 & 0\0 & 3\end{bmatrix}$$
End of explanation
"""
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
"""
Explanation: So there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 3, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
So far so good. Now let's check the second pair:
$$ 3 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} $$
So our eigenvalue-eigenvector scalar multiplications do indeed correspond to our matrix-eigenvector dot-product transformations.
Here's the equivalent code in Python, using the eVals and eVecs variables you generated in the previous code cell:
End of explanation
"""
t1 = lam1*vec1
print (t1)
t2 = lam2*vec2
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
"""
Explanation: You can use the following code to visualize these transformations:
End of explanation
"""
import numpy as np
A = np.array([[2,0],
[0,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
"""
Explanation: Similarly, earlier we examined the following matrix transformation:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
And we saw that you can achieve the same result by multiplying the vector by the scalar value 2:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
This works because the scalar value 2 and the vector (1,0) are an eigenvalue-eigenvector pair for this matrix.
Let's use Python to determine the eigenvalue-eigenvector pairs for this matrix:
End of explanation
"""
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the resulting vectors
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
"""
Explanation: So once again, there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 2, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
Well, we already knew that. Now let's check the second pair:
$$ 2 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} $$
Now let's use Python to verify and plot these transformations:
End of explanation
"""
import numpy as np
A = np.array([[2,1],
[1,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
"""
Explanation: Let's take a look at one more, slightly more complex example. Here's our matrix:
$$\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}$$
Let's get the eigenvalue and eigenvector pairs:
End of explanation
"""
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the results
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)
plt.show()
"""
Explanation: This time the eigenvalue-eigenvector pairs are:
$$ \lambda_{1} = 3, \vec{v_{1}} = \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 1, \vec{v_{2}} = \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} $$
So let's check the first pair:
$$ 3 \times \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} $$
Now let's check the second pair:
$$ 1 \times \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} $$
With more complex examples like this, it's generally easier to do it with Python:
End of explanation
"""
import numpy as np
A = np.array([[3,2],
[1,0]])
l, Q = np.linalg.eig(A)
print(Q)
"""
Explanation: Eigendecomposition
So we've learned a little about eigenvalues and eigenvectors; but you may be wondering what use they are. Well, one use for them is to help decompose transformation matrices.
Recall that previously we found that a matrix transformation of a vector changes its magnitude, amplitude, or both. Without getting too technical about it, we need to remember that vectors can exist in any spatial orientation, or basis; and the same transformation can be applied in different bases.
We can decompose a matrix using the following formula:
$$A = Q \Lambda Q^{-1}$$
Where A is a transformation that can be applied to a vector in its current base, Q is a matrix of eigenvectors that defines a change of basis, and Λ is a matrix with eigenvalues on the diagonal that defines the same linear transformation as A in the base defined by Q.
Let's look at these in some more detail. Consider this matrix:
$$A=\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix}$$
Q is a matrix in which each column is an eigenvector of A; which as we've seen previously, we can calculate using Python:
End of explanation
"""
L = np.diag(l)
print (L)
"""
Explanation: So for matrix A, Q is the following matrix:
$$Q=\begin{bmatrix}0.96276969 & -0.48963374\\0.27032301 & 0.87192821\end{bmatrix}$$
Λ is a matrix that contains the eigenvalues for A on the diagonal, with zeros in all other elements; so for a 2x2 matrix, Λ will look like this:
$$\Lambda=\begin{bmatrix}\lambda_{1} & 0\\0 & \lambda_{2}\end{bmatrix}$$
In our Python code, we've already used the linalg.eig function to return the array of eigenvalues for A into the variable l, so now we just need to format that as a matrix:
End of explanation
"""
Qinv = np.linalg.inv(Q)
print(Qinv)
"""
Explanation: So Λ is the following matrix:
$$\Lambda=\begin{bmatrix}3.56155281 & 0\\0 & -0.56155281\end{bmatrix}$$
Now we just need to find Q<sup>-1</sup>, which is the inverse of Q:
End of explanation
"""
v = np.array([1,3])
t = A@v
print(t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)
plt.show()
"""
Explanation: The inverse of Q then, is:
$$Q^{-1}=\begin{bmatrix}0.89720673 & 0.50382896\\-0.27816009 & 0.99068183\end{bmatrix}$$
So what does that mean? Well, it means that we can decompose the transformation of any vector multiplied by matrix A into the separate operations QΛQ<sup>-1</sup>:
$$A\vec{v} = Q \Lambda Q^{-1}\vec{v}$$
To prove this, let's take vector v:
$$\vec{v} = \begin{bmatrix}1\\3\end{bmatrix} $$
Our matrix transformation using A is:
$$\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\3\end{bmatrix} $$
So let's show the results of that using Python:
End of explanation
"""
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t = (Q@(L@(Qinv)))@v
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)
plt.show()
"""
Explanation: And now, let's do the same thing using the QΛQ<sup>-1</sup> sequence of operations:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t1 = Qinv@v
t2 = L@t1
t3 = Q@t2
# Plot the transformations
vecs = np.array([v,t1, t2, t3])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'red', 'magenta', 'blue'], scale=20)
plt.show()
"""
Explanation: So A and QΛQ<sup>-1</sup> are equivalent.
If we view the intermediary stages of the decomposed transformation, you can see the transformation using A in the original base for v (orange to blue) and the transformation using Λ in the change of basis described by Q (red to magenta):
End of explanation
"""
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(L)
"""
Explanation: So from this visualization, it should be apparent that the transformation Av can be performed by changing the basis for v using Q (from orange to red in the above plot), applying the equivalent linear transformation in that base using Λ (red to magenta), and switching back to the original base using Q<sup>-1</sup> (magenta to blue).
Rank of a Matrix
The rank of a square matrix is the number of non-zero eigenvalues of the matrix. A full rank matrix has the same number of non-zero eigenvalues as the dimension of the matrix. A rank-deficient matrix has fewer non-zero eigenvalues than dimensions. A rank-deficient matrix is singular, so its inverse does not exist (this is why in a previous notebook we noted that some matrices have no inverse).
Consider the following matrix A:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find its eigenvalues (Λ):
End of explanation
"""
B = np.array([[3,-3,6],
[2,-2,4],
[1,-1,2]])
lb, Qb = np.linalg.eig(B)
Lb = np.diag(lb)
print(Lb)
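As a cross-check on the eigenvalue-counting approach above, NumPy can also report the rank directly -- a hedged sketch (`np.linalg.matrix_rank` actually counts non-zero singular values rather than eigenvalues, but the two agree here):

```python
import numpy as np

B = np.array([[3, -3, 6],
              [2, -2, 4],
              [1, -1, 2]])

# matrix_rank counts the numerically non-zero singular values
print(np.linalg.matrix_rank(B))  # rank-deficient: the columns are scalar multiples
```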
"""
Explanation: $$\Lambda=\begin{bmatrix}-1 & 0\\0 & 5\end{bmatrix}$$
This matrix has full rank. The dimension of the matrix is 2, and there are two non-zero eigenvalues.
Now consider this matrix:
$$B=\begin{bmatrix}3 & -3 & 6\\2 & -2 & 4\\1 & -1 & 2\end{bmatrix}$$
Note that the second and third columns are just scalar multiples of the first column.
Let's examine its eigenvalues:
End of explanation
"""
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(Q)
Linv = np.linalg.inv(L)
Qinv = np.linalg.inv(Q)
print(Linv)
print(Qinv)
"""
Explanation: $$\Lambda=\begin{bmatrix}3 & 0 & 0\\0 & -6\times10^{-17} & 0\\0 & 0 & 3.6\times10^{-16}\end{bmatrix}$$
Note that this matrix has only 1 non-zero eigenvalue. The other two eigenvalues are so extremely small as to be effectively zero. This is an example of a rank-deficient matrix; and as such, it has no inverse.
Inverse of a Square Full Rank Matrix
You can calculate the inverse of a square full rank matrix by using the following formula:
$$A^{-1} = Q \Lambda^{-1} Q^{-1}$$
Let's apply this to matrix A:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find the matrices for Q, Λ<sup>-1</sup>, and Q<sup>-1</sup>:
End of explanation
"""
Ainv = (Q@(Linv@(Qinv)))
print(Ainv)
"""
Explanation: So:
$$A^{-1}=\begin{bmatrix}-0.70710678 & -0.4472136\\0.70710678 & -0.89442719\end{bmatrix}\cdot\begin{bmatrix}-1 & 0\\0 & 0.2\end{bmatrix}\cdot\begin{bmatrix}-0.94280904 & 0.47140452\\-0.74535599 & -0.74535599\end{bmatrix}$$
Let's calculate that in Python:
End of explanation
"""
print(np.linalg.inv(A))
"""
Explanation: That gives us the result:
$$A^{-1}=\begin{bmatrix}-0.6 & 0.4\\0.8 & -0.2\end{bmatrix}$$
We can apply the np.linalg.inv function directly to A to verify this:
End of explanation
"""
|
arcyfelix/Courses | 17-08-31-Zero-to-Deep-Learning-with-Python-and-Keras/6 Convolutional Neural Networks.ipynb | apache-2.0 | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Convolutional Neural Networks
Machine learning on images
End of explanation
"""
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data('/tmp/mnist.npz')
X_train.shape
X_test.shape
X_train[0]
plt.imshow(X_train[0],
cmap = 'gray')
# Flattening the input
X_train = X_train.reshape(-1, 28 * 28)
X_test = X_test.reshape(-1, 28 * 28)
X_train.shape
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255.0
X_test /= 255.0
X_train[0]
from keras.utils.np_utils import to_categorical
y_train_cat = to_categorical(y_train)
y_test_cat = to_categorical(y_test)
y_train[0]
# One-hot encoded labels
y_train_cat[0]
y_train_cat.shape
y_test_cat.shape
"""
Explanation: MNIST
End of explanation
"""
from keras.models import Sequential
from keras.layers import Dense
import keras.backend as K
K.clear_session()
model = Sequential()
model.add(Dense(10,
input_dim = 28 * 28,
activation = 'relu'))
model.add(Dense(256,
activation = 'relu'))
'''
model.add(Dense(128,
activation = 'relu'))
'''
model.add(Dense(16,
activation = 'relu'))
model.add(Dense(10,
activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = 'rmsprop',
metrics = ['accuracy'])
h = model.fit(X_train,
y_train_cat,
batch_size = 12,
epochs = 10,
verbose = 2,
validation_split = 0.3)
plt.figure(figsize = (12,7))
plt.plot(h.history['acc'])
plt.plot(h.history['val_acc'])
plt.legend(['Training', 'Validation'], loc = 'lower right')
plt.title('Accuracy')
plt.xlabel('Epochs')
test_accuracy = model.evaluate(X_test, y_test_cat)[1]
# Printing the accuracy
test_accuracy * 100
"""
Explanation: Fully connected on images
End of explanation
"""
A = np.random.randint(10, size = (2, 3, 4, 5))
B = np.random.randint(10, size = (2, 3))
A
A[0, 1, 0, 3]
B
"""
Explanation: Tensor Math
End of explanation
"""
img = np.random.randint(255,
size = (4, 4, 3),
dtype = 'uint8')
img
plt.figure(figsize = (10, 7))
plt.subplot(221)
plt.imshow(img)
plt.title("All Channels combined")
plt.subplot(222)
plt.imshow(img[:, : , 0],
cmap = 'Reds')
plt.title("Red channel")
plt.subplot(223)
plt.imshow(img[:, : , 1],
cmap = 'Greens')
plt.title("Green channel")
plt.subplot(224)
plt.imshow(img[:, : , 2],
cmap = 'Blues')
plt.title("Blue channel")
"""
Explanation: A random colored image
End of explanation
"""
2 * A
A + A
A.shape
B.shape
np.tensordot(A,
B,
axes = ([0, 1], [0, 1]))
np.tensordot(A,
B,
axes = ([0], [0])).shape
"""
Explanation: Tensor operations
End of explanation
"""
a = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
dtype='float32')
b = np.array([-1, 1],
dtype='float32')
c = np.convolve(a, b)
a
b
c
plt.subplot(211)
plt.plot(a, 'o-')
plt.subplot(212)
plt.plot(c, 'o-')
"""
Explanation: 1D convolution
End of explanation
"""
from scipy.ndimage.filters import convolve
from scipy.signal import convolve2d
from scipy import misc
img = misc.ascent()
img.shape
plt.imshow(img,
cmap = 'gray')
h_kernel = np.array([[ 1, 2, 1],
[ 0, 0, 0],
[-1, -2, -1]])
plt.imshow(h_kernel,
cmap = 'gray')
res = convolve2d(img, h_kernel)
plt.imshow(res,
cmap = 'gray')
"""
Explanation: Image filters with convolutions
End of explanation
"""
from keras.layers import Conv2D
img.shape
plt.figure(figsize = (7, 7))
plt.imshow(img,
cmap = 'gray')
img_tensor = img.reshape((1, 512, 512, 1))
model = Sequential()
model.add(Conv2D(filters = 1,
kernel_size = (3, 3),
strides = (2,1),
input_shape = (512, 512, 1)))
model.compile('adam', 'mse')
img_pred_tensor = model.predict(img_tensor)
img_pred_tensor.shape
img_pred = img_pred_tensor[0, :, :, 0]
plt.imshow(img_pred,
cmap = 'gray')
weights = model.get_weights()
weights[0].shape
plt.imshow(weights[0][:, :, 0, 0],
cmap = 'gray')
weights[0] = np.ones(weights[0].shape)
model.set_weights(weights)
img_pred_tensor = model.predict(img_tensor)
img_pred = img_pred_tensor[0, :, :, 0]
plt.imshow(img_pred,
cmap = 'gray')
model = Sequential()
model.add(Conv2D(filters = 1,
kernel_size = (3, 3),
input_shape = (512, 512, 1),
padding='same'))
model.compile('adam', 'mse')
img_pred_tensor = model.predict(img_tensor)
img_pred_tensor.shape
img_pred_tensor = img_pred_tensor[0, :, :, 0]
plt.imshow(img_pred_tensor,
cmap = 'gray')
"""
Explanation: Convolutional neural networks
End of explanation
"""
from keras.layers import MaxPool2D, AvgPool2D
model = Sequential()
model.add(MaxPool2D(pool_size = (5, 5),
input_shape = (512, 512, 1)))
model.compile('adam', 'mse')
img_pred = model.predict(img_tensor)[0, :, :, 0]
plt.imshow(img_pred,
cmap = 'gray')
model = Sequential()
model.add(AvgPool2D(pool_size = (5, 5),
input_shape = (512, 512, 1)))
model.compile('adam', 'mse')
img_pred = model.predict(img_tensor)[0, :, :, 0]
plt.imshow(img_pred,
cmap = 'gray')
"""
Explanation: Pooling layers
End of explanation
"""
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)
X_train.shape
from keras.layers import Flatten, Activation
K.clear_session()
model = Sequential()
model.add(Conv2D(8,
(3, 3),
input_shape = (28, 28, 1)))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(32,
activation = 'relu'))
model.add(Dense(10,
activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = 'rmsprop',
metrics = ['accuracy'])
model.summary()
model.fit(X_train,
y_train_cat,
batch_size = 12,
epochs = 2,
verbose = 2,
validation_split = 0.3)
evaluated = model.evaluate(X_test, y_test_cat)
evaluated[1] * 100
"""
Explanation: Final architecture
End of explanation
"""
X_train.shape, X_test.shape, y_train_cat.shape, y_test_cat.shape
from keras.layers import Input, Dense, Conv2D, MaxPool2D, Activation, Flatten
import keras.backend as K
from keras.models import Model
from keras.optimizers import Adam
K.clear_session()
inp = Input(shape = (28, 28, 1 ))
net = Conv2D(filters = 64,
kernel_size = (2, 2),
activation = 'relu',
padding = 'valid')(inp)
net = MaxPool2D(pool_size = (2, 2),
strides = (2, 2),
padding = 'valid')(net)
net = Conv2D(filters = 8,
kernel_size = (2, 2),
activation = 'relu',
padding = 'valid')(net)
net = MaxPool2D(pool_size = (2, 2),
strides = (2, 2),
padding = 'valid')(net)
net = Activation(activation = 'relu')(net)
net = Flatten()(net)
prediction = Dense(10, activation = 'softmax')(net)
model = Model(inputs = inp, outputs = prediction)
model.compile(optimizer = Adam(),
loss = 'categorical_crossentropy',
metrics = ['accuracy'])
model.summary()
model.fit(X_train,
y_train_cat,
batch_size = 50,
validation_split = 0.2,
epochs = 5,
verbose = 2)
"""
Explanation: Exercise 1
You've been hired by a shipping company to overhaul the way they route mail, parcels and packages. They want to build an image recognition system capable of recognizing the digits in the zipcode on a package, so that it can be automatically routed to the correct location.
You are tasked with building the digit recognition system. Luckily, you can rely on the MNIST dataset for the initial training of your model!
Build a deep convolutional neural network with at least two convolutional and two pooling layers before the fully connected layer.
Start from the network we have just built
Insert a Conv2D layer after the first MaxPool2D, give it 64 filters.
Insert a MaxPool2D after that one
Insert an Activation layer
retrain the model
does performance improve?
how many parameters does this new model have? More or less than the previous model? Why?
how long did this second model take to train? Longer or shorter than the previous model? Why?
did it perform better or worse than the previous model?
End of explanation
"""
from keras.datasets import cifar10
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train.shape
X_train[0, :, :, 0]
X_train[0, :, :, 1]
X_train[0, :, :, 2]
np.max(X_train[0, :, :, :])
np.min(X_train[0, :, :, :])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
from keras.utils import to_categorical
y_train_cat = to_categorical(y_train)
y_test_cat = to_categorical(y_test)
y_train_cat[:5]
print(y_train_cat.shape)
y_test_cat[:5]
print(y_test_cat.shape)
'''conv2d
conv2d
maxpool
conv2d
conv2d
maxpool
flatten
dense
output
'''
from keras.models import Model
from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten
from keras.regularizers import l2
# Inputs 32 x 32 x 3
inp = Input(shape = (32, 32, 3))
net = Conv2D(filters = 8,
kernel_size = (2, 2),
padding = 'same')(inp)
net = Conv2D(filters = 8,
kernel_size = (2, 2),
padding = 'same')(net)
net = MaxPool2D(pool_size = (2, 2), padding = 'valid')(net)
net = Conv2D(filters = 8,
kernel_size = (2, 2),
padding = 'same')(net)
net = Conv2D(filters = 8,
kernel_size = (2, 2),
padding = 'same')(net)
net = MaxPool2D(pool_size = (2, 2), padding = 'valid')(net)
net = Flatten()(net)
net = Dense(units = 10,
activation = 'relu')(net)
prediction = Dense(units = 10,
activation = 'softmax')(net)
model = Model(inputs = [inp], outputs = [prediction])
from keras.optimizers import Adam
model.compile(optimizer = Adam(),
loss = 'categorical_crossentropy',
metrics = ['accuracy'])
model.summary()
model.fit(X_train,
y_train_cat,
batch_size = 50,
validation_split = 0.2,
epochs = 5,
verbose = 2)
"""
Explanation: Exercise 2
Pleased with your performance with the digits recognition task, your boss decides to challenge you with a harder task. Their online branch allows people to upload images to a website that generates and prints a postcard that is shipped to destination. Your boss would like to know what images people are loading on the site in order to provide targeted advertising on the same page, so he asks you to build an image recognition system capable of recognizing a few objects. Luckily for you, there's a dataset ready made with a collection of labeled images. This is the Cifar 10 Dataset, a very famous dataset that contains images for 10 different categories:
airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
In this exercise we will reach the limit of what you can achieve on your laptop and get ready for the next session on cloud GPUs.
Here's what you have to do:
- load the cifar10 dataset using keras.datasets.cifar10.load_data()
- display a few images, see how hard/easy it is for you to recognize an object with such low resolution
- check the shape of X_train, does it need reshape?
- check the scale of X_train, does it need rescaling?
- check the shape of y_train, does it need reshape?
- build a model with the following architecture, and choose the parameters and activation functions for each of the layers:
- conv2d
- conv2d
- maxpool
- conv2d
- conv2d
- maxpool
- flatten
- dense
- output
- compile the model and check the number of parameters
- attempt to train the model with the optimizer of your choice. How fast does training proceed?
- If training is too slow (as expected) stop the execution and move to the next session!
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/tfx/tutorials/serving/rest_simple.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
"""
Explanation: Train and serve a TensorFlow model with TensorFlow Serving
Warning: This notebook is designed to be run in a Google Colab only. It installs packages on the system and requires root access. If you want to run it in a local Jupyter notebook, please proceed with caution.
Note: You can run this example right now in a Jupyter-style notebook, no setup required! Just click "Run in Google Colab"
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<tr><td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/serving/rest_simple">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/serving/rest_simple.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/serving/rest_simple.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/serving/rest_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</tr></table></div>
This guide trains a neural network model to classify images of clothing, like sneakers and shirts, saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving, rather than the modeling and training in TensorFlow, so for a complete example which focuses on the modeling and training see the Basic Classification example.
This guide uses tf.keras, a high-level API to build and train models in TensorFlow.
End of explanation
"""
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale the values to 0.0 to 1.0
train_images = train_images / 255.0
test_images = test_images / 255.0
# reshape for feeding into the model
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
"""
Explanation: Create your model
Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision. You can access the Fashion MNIST directly from TensorFlow, just import and load the data.
Note: Although these are really images, they are loaded as NumPy arrays and not binary image objects.
End of explanation
"""
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, name='Dense')
])
model.summary()
testing = False
epochs = 5
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(train_images, train_labels, epochs=epochs)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
"""
Explanation: Train and evaluate your model
Let's use the simplest possible CNN, since we're not focused on the modeling part.
End of explanation
"""
# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors,
# and stored with the default serving key
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
"""
Explanation: Save your model
To load our trained model into TensorFlow Serving we first need to save it in SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable" we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
End of explanation
"""
!saved_model_cli show --dir {export_path} --all
"""
Explanation: Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
End of explanation
"""
import sys
# We need sudo prefix if not on a Google Colab.
if 'google.colab' not in sys.modules:
    SUDO_IF_NEEDED = 'sudo'
else:
    SUDO_IF_NEEDED = ''
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add -
!{SUDO_IF_NEEDED} apt update
"""
Explanation: That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't this would be important information. It doesn't tell us everything, like the fact that this is grayscale image data for example, but it's a great start.
Serve your model with TensorFlow Serving
Warning: If you are running this NOT on a Google Colab, following cells
will install packages on the system with root access. If you want to run it in
a local Jupyter notebook, please proceed with caution.
Add TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using Aptitude since this Colab runs in a Debian environment. We'll add the tensorflow-model-server package to the list of packages that Aptitude knows about. Note that we're running as root.
Note: This example is running TensorFlow Serving natively, but you can also run it in a Docker container, which is one of the easiest ways to get started using TensorFlow Serving.
End of explanation
"""
!{SUDO_IF_NEEDED} apt-get install tensorflow-model-server
"""
Explanation: Install TensorFlow Serving
This is all you need - one command line!
End of explanation
"""
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=fashion_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
"""
Explanation: Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads we can start making inference requests using REST. There are some important parameters:
rest_api_port: The port that you'll use for REST requests.
model_name: You'll use this in the URL of REST requests. It can be anything.
model_base_path: This is the path to the directory where you've saved your model.
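These parameters combine into the REST endpoint used below; a quick sketch of how the URL is assembled:

```python
# How the parameters above map onto the REST endpoint used later on.
rest_api_port = 8501
model_name = 'fashion_model'
url = 'http://localhost:{}/v1/models/{}:predict'.format(rest_api_port, model_name)
print(url)
```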
End of explanation
"""
def show(idx, title):
    plt.figure()
    plt.imshow(test_images[idx].reshape(28,28))
    plt.axis('off')
    plt.title('\n\n{}'.format(title), fontdict={'size': 16})
import random
rando = random.randint(0,len(test_images)-1)
show(rando, 'An Example Image: {}'.format(class_names[test_labels[rando]]))
"""
Explanation: Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
End of explanation
"""
import json
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
"""
Explanation: Ok, that looks interesting. How hard is that for you to recognize? Now let's create the JSON object for a batch of three inference requests, and see how well our model recognizes things:
End of explanation
"""
# docs_infra: no_execute
!pip install -q requests
import requests
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0]))
"""
Explanation: Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
End of explanation
"""
# docs_infra: no_execute
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model/versions/1:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
for i in range(0,3):
    show(i, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
        class_names[np.argmax(predictions[i])], np.argmax(predictions[i]), class_names[test_labels[i]], test_labels[i]))
"""
Explanation: A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
End of explanation
"""
|
saullocastro/pyNastran | docs/quick_start/demo/op2_pandas_multi_case.ipynb | lgpl-3.0 | import os
import pandas as pd
import pyNastran
from pyNastran.op2.op2 import read_op2
pkg_path = pyNastran.__path__[0]
model_path = os.path.join(pkg_path, '..', 'models')
"""
Explanation: Static & Transient DataFrames in PyNastran
The iPython notebook for this demo can be found in:
- docs\quick_start\demo\op2_pandas_multi_case.ipynb
- https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_pandas_multi_case.ipynb
End of explanation
"""
solid_bending_op2 = os.path.join(model_path, 'solid_bending', 'solid_bending.op2')
solid_bending = read_op2(solid_bending_op2, combine=False, debug=False)
print(solid_bending.displacements.keys())
solid_bending_op2 = os.path.join(model_path, 'solid_bending', 'solid_bending.op2')
solid_bending2 = read_op2(solid_bending_op2, combine=True, debug=False)
print(solid_bending2.displacements.keys())
"""
Explanation: Solid Bending
Let's show off combine=True/False. We'll talk about the keys soon.
End of explanation
"""
op2_filename = os.path.join(model_path, 'sol_101_elements', 'buckling_solid_shell_bar.op2')
model = read_op2(op2_filename, combine=True, debug=False, build_dataframe=True)
stress_keys = model.cquad4_stress.keys()
print (stress_keys)
# isubcase, analysis_code, sort_method, count, subtitle
key0 = (1, 1, 1, 0, 'DEFAULT1')
key1 = (1, 8, 1, 0, 'DEFAULT1')
"""
Explanation: Single Subcase Buckling Example
The keys cannot be "combined" despite us telling the program that it was OK.
We'll get the following values that we need to handle.
isubcase, analysis_code, sort_method, count, subtitle
isubcase -> the same key that you're used to accessing
sort_method -> 1 (SORT1), 2 (SORT2)
count -> the optimization count
subtitle -> the analysis subtitle (changes for superlements)
analysis code -> the "type" of solution
### Partial code for calculating analysis code:
if trans_word == 'LOAD STEP':  # nonlinear statics
    analysis_code = 10
elif trans_word in ['TIME', 'TIME STEP']:  # TODO check name
    analysis_code = 6
elif trans_word == 'EIGENVALUE':  # normal modes
    analysis_code = 2
elif trans_word == 'FREQ':  # TODO check name
    analysis_code = 5
elif trans_word == 'FREQUENCY':
    analysis_code = 5
elif trans_word == 'COMPLEX EIGENVALUE':
    analysis_code = 9
else:
    raise NotImplementedError('transient_word=%r is not supported...' % trans_word)
Let's look at an odd case:
You can do buckling as one subcase or two subcases (makes parsing it a lot easier!).
However, you have to do this once you start messing around with superelements or multi-step optimization.
For optimization, Nastran will sometimes downselect elements, run the optimization on that subset, and print out results for only those elements.
At the end, it will rerun an analysis to double-check that the constraints are satisfied.
It does not always do multi-step optimization.
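Since every result key is the same 5-tuple, separating (say) the static results from the buckling results is just a matter of grouping on the analysis_code field. A minimal sketch using the two keys from this model (plain Python, no op2 file needed):

```python
# The two cquad4_stress keys seen above:
# (isubcase, analysis_code, sort_method, count, subtitle)
keys = [(1, 1, 1, 0, 'DEFAULT1'), (1, 8, 1, 0, 'DEFAULT1')]

by_analysis_code = {}
for key in keys:
    isubcase, analysis_code, sort_method, count, subtitle = key
    by_analysis_code.setdefault(analysis_code, []).append(key)

print(sorted(by_analysis_code))  # [1, 8]: the static and buckling results
```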
End of explanation
"""
stress_static = model.cquad4_stress[key0].data_frame
stress_transient = model.cquad4_stress[key1].data_frame
# The final calculated factor:
# Is it a None or not?
# This defines if it's static or transient
print('stress_static.nonlinear_factor = %s' % model.cquad4_stress[key0].nonlinear_factor)
print('stress_transient.nonlinear_factor = %s' % model.cquad4_stress[key1].nonlinear_factor)
print('data_names = %s' % model.cquad4_stress[key1].data_names)
print('loadsteps = %s' % model.cquad4_stress[key1].lsdvmns)
print('eigenvalues = %s' % model.cquad4_stress[key1].eigrs)
"""
Explanation: Keys:
* key0 is the "static" key
* key1 is the "buckling" key
Similarly:
* Transient solutions can have preload
* Frequency solutions can have loadsets (???)
Moving on to the data frames
The static case is the initial deflection state
The buckling case is "transient", where the modes (called load steps or lsdvmn here) represent the "times"
pyNastran reads these tables differently and handles them differently internally. They look very similar though.
End of explanation
"""
# Sets default precision of real numbers for pandas output
pd.set_option('precision', 2)
stress_static.head(20)
"""
Explanation: Static Table
End of explanation
"""
# Sets default precision of real numbers for pandas output
pd.set_option('precision', 3)
#import numpy as np
#np.set_printoptions(formatter={'all':lambda x: '%g'})
stress_transient.head(20)
"""
Explanation: Transient Table
End of explanation
"""
|
glennrfisher/introduction-to-machine-learning | notebook/Teaching a Computer to Diagnose Cancer.ipynb | mit | import pandas as pd
import numpy as np
import sklearn.cross_validation
import sklearn.neighbors
import sklearn.metrics
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
# enable matplotlib mode
%matplotlib inline
# configure plot readability
sns.set_style("white")
sns.set_context("talk")
"""
Explanation: Teaching a Computer to Diagnose Cancer
This notebook is intended to accompany my presentation on "Teaching a Computer to Diagnose Cancer: An Introduction to Machine Learning."
Load Python Dependencies
Python has a number of excellent libraries available for scientific computing. This notebook uses several, including:
Pandas: Data Analysis Library
Numpy: Linear Algebra Library
SciKit Learn: Machine Learning Library
MatPlotLib: 2D Plotting Library
Seaborn: Data Visualization Library
End of explanation
"""
columns_to_features = { 1: "Diagnosis",
22: "Radius",
23: "Texture"}
features_to_keep = columns_to_features.values()
wdbc = pd.read_csv("wdbc.data", header = None) \
.rename(columns = columns_to_features) \
.filter(features_to_keep, axis = 1) \
.replace("M", "Malignant") \
.replace("B", "Benign")
"""
Explanation: Load Dataset
We will use the Wisconsin Breast Cancer Diagnostic Data Set (WDBC) to classify breast cancer cells as malignant or benign. This dataset has features computed from digital images of a fine needle aspirate (FNA) of breast mass. Each feature describes characteristics of the cell nuclei in a particular image. Since column headers are not included in the raw data, we manually encode the feature names.
Number of Instances: 569
Number of Attributes: 32 (ID, diagnosis, 30 real-valued input features)
Class Distribution: 357 benign, 212 malignant
To simplify our analysis, we will focus on only two features: largest radius and worst texture.
End of explanation
"""
wdbc.sample(5)
"""
Explanation: Data Exploration
It's important to gain some insight and intuition about our data. This will help us choose the right statistical model to use.
We'll start by looking at 5 randomly selected observations from the dataset.
Notice that:
- The diagnosis is categorical and represented with a string label.
- The input features are real-valued.
End of explanation
"""
sns.countplot(x = "Diagnosis", data = wdbc)
sns.despine()
"""
Explanation: Next, let's compare the number of malignant and benign observations by plotting the class distribution.
End of explanation
"""
sns.regplot(wdbc["Radius"], wdbc["Texture"], fit_reg = False)
"""
Explanation: Let's plot the radius and texture for each observation.
End of explanation
"""
wdbc_benign = wdbc[wdbc["Diagnosis"] == "Benign"]
wdbc_malignant = wdbc[wdbc["Diagnosis"] == "Malignant"]
sns.regplot(wdbc_benign["Radius"], wdbc_benign["Texture"], color = "green", fit_reg = False)
sns.regplot(wdbc_malignant["Radius"], wdbc_malignant["Texture"], color = "red", fit_reg = False)
green_patch = mpatches.Patch(color = "green", label = "Benign")
red_patch = mpatches.Patch(color = "red", label = "Malignant")
plt.legend(handles=[green_patch, red_patch])
"""
Explanation: That's an interesting plot, but does it reveal anything? Let's take a closer look by generating the same plot, but color-coding it by diagnosis.
End of explanation
"""
sns.regplot(wdbc_benign["Radius"], wdbc_benign["Texture"], color = "green", fit_reg = False)
sns.regplot(wdbc_malignant["Radius"], wdbc_malignant["Texture"], color = "red", fit_reg = False)
radii = np.array([10, 25, 15])
textures = np.array([20, 30, 25])
sns.regplot(radii, textures, color = "blue", fit_reg = False,
marker = "+", scatter_kws = {"s": 800})
green_patch = mpatches.Patch(color = "green", label = "Benign")
red_patch = mpatches.Patch(color = "red", label = "Malignant")
plt.legend(handles=[green_patch, red_patch])
"""
Explanation: This plot shows that the radius of malignant cells tends to be much larger than the radius of benign cells. As a result, the data appears to "separate" when plotted--the benign cells are clustered together on the left side of the plot, while the malignant cells are clustered together on the right side. You can use this information to intuitively classify new cells.
For example, let's consider classifying the following (radius, texture) pairs:
(10, 20): This point is close to benign (green) cells. Therefore, it's likely that this cell is also benign.
(25, 30): This point is close to malignant (red) cells. Therefore, it's likely that this cell is also malignant.
(15, 25): This point is close to both benign (green) and malignant (red) cells. However, there are more benign (green) cells near this point than malignant (red) cells. Therefore, our best guess is that this cell is benign.
End of explanation
"""
x_full = wdbc.drop("Diagnosis", axis = 1)
y_full = wdbc["Diagnosis"]
x_train, x_test, y_train, y_test = sklearn.cross_validation.train_test_split(
x_full, y_full, test_size = 0.3, random_state = 3)
"""
Explanation: k-Nearest Neighbors Algorithm
To classify the cells above, you intuitively applied a classification algorithm called k-nearest neighbors (kNN). Let's learn more about k-nearest neighbors and use it to train a computer to diagnose cells.
The best way to understand the k-nearest neighbors algorithm is to trace its execution. Here is how the k-nearest neighbors algorithm would diagnose a new cell given its radius and texture:
1. Consider (radius, texture) as a point in the plane.
2. Use the dataset to find the k nearest observations to (radius, texture).
3. Among the k nearest observations, find the most common diagnosis.
4. Classify the given cell using that diagnosis.
Although the kNN algorithm is simple and straightforward, it is extremely powerful. It is often one of the first (and most successful) algorithms that data scientists apply to a new dataset.
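In fact, the whole algorithm fits in a few lines of numpy. A toy sketch (the helper name and the sample points are made up for illustration; below we will use scikit-learn's implementation instead):

```python
import numpy as np

def knn_predict(x_train, y_train, x_new, k=5):
    # Distance from the new point to every training observation
    dists = np.linalg.norm(x_train - x_new, axis=1)
    # Labels of the k closest observations
    nearest_labels = y_train[np.argsort(dists)[:k]]
    # Majority vote among those labels
    labels, counts = np.unique(nearest_labels, return_counts=True)
    return labels[np.argmax(counts)]

# Toy points in the (radius, texture) plane, mimicking the plot above
x_toy = np.array([[10., 20.], [11., 21.], [25., 30.], [26., 29.]])
y_toy = np.array(["Benign", "Benign", "Malignant", "Malignant"])
print(knn_predict(x_toy, y_toy, np.array([15., 25.]), k=3))  # Benign
```

Note how the `(15, 25)` query reproduces the intuitive "two benign neighbors beat one malignant" vote from the previous section.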
Model Training
Let's train a k-Nearest Neighbors model to diagnose new cell observations.
Since we want to know how well our model generalizes (i.e. how well it can diagnose new observations that it hasn't seen before) we first split our dataset into separate training and test sets. We will train the model with the training set then analyze its performance with the test set.
End of explanation
"""
model = sklearn.neighbors.KNeighborsClassifier().fit(x_train, y_train)
"""
Explanation: Now we can train a k-nearest neighbors classifier using our training set. This is surprisingly straightforward since we use an implementation provided by the SciKit-Learn machine learning library.
End of explanation
"""
predictions = model.predict(x_test)
sklearn.metrics.accuracy_score(predictions, y_test)
"""
Explanation: Model Evaluation
How well does our model work? To learn how well it can diagnose new cells, we will use it to predict the diagnosis of each test set observation. Since we know the true diagnosis of each test set observation, we can calculate the model's accuracy by comparing the predicted and actual diagnoses.
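Under the hood, accuracy is simply the fraction of predicted labels that match the actual ones. A hand-rolled check on hypothetical toy labels:

```python
import numpy as np

# Hypothetical toy diagnoses, just to illustrate the computation
predicted = np.array(["Benign", "Malignant", "Benign"])
actual = np.array(["Benign", "Benign", "Benign"])

accuracy = np.mean(predicted == actual)
print(accuracy)  # 2 of 3 labels match
```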
End of explanation
"""
|
tlkh/Generating-Inference-from-3D-Printing-Jobs | Simple Data Plots (W1 - W4 data).ipynb | mit | import numpy as np
import csv
%run 'preprocessor.ipynb' #our own preprocessor functions
"""
Explanation: Simple 2D Plots using Matplotlib
These plots are for the data obtained from the cohort classroom printers from Week 1 to Week 4 of Term 2.
Import dependencies
End of explanation
"""
with open('data_w1w4.csv', 'r') as f:
    reader = csv.reader(f)
    data = list(reader)
"""
Explanation: Load the dataset from the csv file
End of explanation
"""
matrix = obtain_data_matrix(data)
samples = len(matrix)
print("Number of samples: " + str(samples))
print(matrix[0])
"""
Explanation: Process the data
We will output the data in the form of a numpy matrix using our preprocessing functions.
The first row in the matrix (first data entry) and total number of samples will be displayed.
End of explanation
"""
filament = matrix[:,[8]]
time = matrix[:,[9]]
satisfaction = matrix[:,[10]]
result = matrix[:,[11]]
"""
Explanation: Prepare the individual data axes
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(1, figsize=(10, 6))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Filament Use')
plt.scatter([filament], [time])
plt.figure(2, figsize=(10, 6))
plt.xlabel('Print Time (mins)')
plt.ylabel('Rating (1 - 5)')
plt.title('Rating')
plt.scatter([time], [satisfaction])
plt.figure(3, figsize=(10, 6))
plt.xlabel('Print Time (mins)')
plt.ylabel('Success / Fail / NIL')
plt.title('Result')
plt.scatter([time], [result])
plt.tight_layout()
plt.show()
"""
Explanation: Plot the data in 2D
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
fig = plt.clf()
fig = plt.figure(1, figsize=(10, 6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter([filament], [time], [satisfaction])
plt.show()
"""
Explanation: Plot the data in 3D
We are plotting filament usage against time against satisfaction rating.
End of explanation
"""
|
rcrehuet/Python_for_Scientists_2017 | notebooks/6_1_linear_regression.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(10.)
y = 5*x+3
np.random.seed(3)
y+= np.random.normal(scale=10,size=x.size)
plt.scatter(x,y);
def lin_reg(x,y):
    """
    Perform a linear regression of x vs y.
    x, y are 1 dimensional numpy arrays
    returns alpha and beta for the model y = alpha + beta*x
    """
    beta = np.mean(x*y)-np.mean(x)*np.mean(y)
    #finish...
lin_reg(x,y)
"""
Explanation: Linear regression
Linear regression in Python can be done in different ways, from coding it yourself to using a function from a statistics module.
Here we will do both.
Coding with numpy
From Wikipedia, we see that linear regression can be expressed as:
$$
y = \alpha + \beta x
$$
where:
$$
\beta = \frac{\overline{xy} -\bar x \bar y}{\overline{x^2} - \bar{x}^2}=\frac{\mathrm{Cov}[x,y]}{\mathrm{Var}[x]}
$$
and $\alpha=\overline y - \beta \bar x$
We first import the basic modules and generate some data with noise.
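For reference, one possible completion of the lin_reg stub, transcribing the formulas above directly (a sketch only; the name lin_reg_sketch is ours, and your own solution may differ):

```python
import numpy as np

def lin_reg_sketch(x, y):
    # beta = Cov[x, y] / Var[x], alpha = mean(y) - beta * mean(x)
    beta = (np.mean(x * y) - np.mean(x) * np.mean(y)) / (np.mean(x ** 2) - np.mean(x) ** 2)
    alpha = np.mean(y) - beta * np.mean(x)
    return alpha, beta

x = np.arange(10.)
y = 5 * x + 3  # noise-free line, so the fit should recover alpha=3, beta=5
print(lin_reg_sketch(x, y))
```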
End of explanation
"""
def lin_reg2(x,y):
    """
    Perform a linear regression of x vs y. Uses covariances.
    x, y are 1 dimensional numpy arrays
    returns alpha and beta for the model y = alpha + beta*x
    """
    c = np.cov(x,y)
    #finish...
lin_reg2(x,y)
"""
Explanation: We can also implement it with numpy's covariance function; the diagonal terms of the covariance matrix are the variances.
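For reference, here is how the entries of the covariance matrix map onto the slope formula (a quick check on a noise-free line; the variable names are ours):

```python
import numpy as np

x = np.arange(10.)
y = 5 * x + 3  # noise-free, so the slope should come out as 5

c = np.cov(x, y)
# c[0, 0] is Var[x] and c[0, 1] is Cov[x, y]; both use the same 1/(n-1)
# normalisation, so their ratio is the slope beta from the formula above.
print(c[0, 1] / c[0, 0])
```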
End of explanation
"""
def lin_reg3(x,y):
    """
    Perform a linear regression of x vs y. Uses least squares.
    x, y are 1 dimensional numpy arrays
    returns alpha and beta for the model y = alpha + beta*x
    """
    #finish...
lin_reg3(x,y)
"""
Explanation: Coding as a least square problem
The previous methods only work for a single variable. We can generalize the fit by coding it as a least-squares problem:
$$
\bf y = \bf A \boldsymbol \beta
$$
Note that $\bf A$ is $\bf X$ with an extra column of ones to represent the independent term, previously called $\alpha$, which now corresponds to $\beta_{N+1}$.
$$
\bf A = \left[\bf X , \bf 1 \right]
$$
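One possible completion of the lin_reg3 stub along these lines (a sketch; the name lin_reg_lstsq is ours) builds $\bf A$ and hands it to numpy's least-squares solver:

```python
import numpy as np

def lin_reg_lstsq(x, y):
    # Build A = [x, 1] and solve y = A @ [beta, alpha] by least squares
    A = np.column_stack([x, np.ones_like(x)])
    (beta, alpha), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

x = np.arange(10.)
y = 5 * x + 3
print(lin_reg_lstsq(x, y))  # approximately (3.0, 5.0) on noise-free data
```

Because `column_stack` accepts any number of feature columns, the same pattern extends directly to multiple variables.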
End of explanation
"""
#finish...
"""
Explanation: The simple ways
numpy
As usual, for a task as common as linear regression, several packages already provide ready-made implementations. In numpy, we can use polyfit, which can fit a polynomial of degree $N$.
End of explanation
"""
import scipy.stats as stats
#finish
"""
Explanation: scipy
scipy has a statistics module whose linregress function returns the fit parameters together with the correlation coefficient:
End of explanation
"""
from sklearn import linear_model
#Finish
"""
Explanation: scikit-learn
The most powerful module for data analysis and machine learning is scikit-learn. There is good documentation on its linear models.
End of explanation
"""
x = np.arange(10.)
y = 5*x+3
np.random.seed(3)
y+= np.random.normal(scale=10,size=x.size)
plt.scatter(x,y);
"""
Explanation: Efficiency
As an exercise, test the speed of these implementations on a larger dataset.
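One hypothetical way to set up the comparison with the standard-library timeit module (substitute lin_reg, lin_reg2, ... once they are completed):

```python
import timeit

import numpy as np

x = np.arange(10000.)
y = 5 * x + 3

# Time numpy's built-in fit as a baseline; the completed functions
# above can be dropped into the same harness.
t = timeit.timeit(lambda: np.polyfit(x, y, 1), number=20)
print('polyfit, 20 runs: %.4f s' % t)
```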
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bcc/cmip6/models/sandbox-1/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. N512L180, T512L70, ORCA025, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step of the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution from which fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation (rheology) formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
NYUDataBootcamp/Materials | Code/notebooks/bootcamp_pandas-summarize.ipynb | mit | import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
"""
Explanation: Pandas 5: Summarizing data
Another in a series of notebooks that describe Pandas' powerful data management tools. In this one we summarize our data in a variety of ways. Which is more interesting than it sounds.
Outline:
WEO government debt data. Something to work with. How does Argentina's government debt compare to the debt of other countries? How did it compare when it defaulted in 2001?
Describing numerical data. Descriptive statistics: numbers of non-missing values, mean, median, quantiles.
Describing categorical data. The excellent value_counts method.
Grouping data. An incredibly useful collection of tools based on grouping data by a variable: men and women, grads and undergrads, and so on.
Note: requires internet access to run.
This Jupyter notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp.
<a id=prelims></a>
Preliminaries
Import packages, etc.
End of explanation
"""
url1 = "http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/"
url2 = "WEOOct2016all.xls"
url = url1 + url2
weo = pd.read_csv(url, sep='\t',
usecols=[1,2] + list(range(19,46)),
thousands=',',
na_values=['n/a', '--'])
print('Variable dtypes:\n', weo.dtypes.head(6), sep='')
"""
Explanation: <a id=weo></a>
WEO data on government debt
We use the IMF's data on government debt again, specifically its World Economic Outlook database, commonly referred to as the WEO. We focus on government debt expressed as a percentage of GDP, variable code GGXWDG_NGDP.
The central question here is how the debt of Argentina, which defaulted in 2001, compared to other countries. Was it a matter of too much debt or something else?
Load data
First step: load the data and extract a single variable: government debt (code GGXWDG_NGDP) expressed as a percentage of GDP.
End of explanation
"""
# select debt variable
variables = ['GGXWDG_NGDP']
db = weo[weo['WEO Subject Code'].isin(variables)]
# drop variable code column (they're all the same)
db = db.drop('WEO Subject Code', axis=1)
# set index to country code
db = db.set_index('ISO')
# name columns
db.columns.name = 'Year'
# transpose
dbt = db.T
# see what we have
dbt.head()
"""
Explanation: Clean and shape
Second step: select the variable we want and generate the two dataframes.
End of explanation
"""
fig, ax = plt.subplots()
dbt.plot(ax=ax,
legend=False, color='blue', alpha=0.3,
ylim=(0,150)
)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
"""
Explanation: Example. Let's try a simple graph of the dataframe dbt. The goal is to put Argentina in perspective by plotting it along with many other countries.
End of explanation
"""
dbt.shape
# count non-missing values
dbt.count(axis=1).plot()
"""
Explanation: Exercise.
What do you take away from this graph?
What would you change to make it look better?
To make it more informative?
To put Argentina's debt in context?
Exercise. Do the same graph with Greece (GRC) as the country of interest. How does it differ? Why do you think that is?
<a id=describe></a>
Describing numerical data
Let's step back a minute. What we're trying to do is compare Argentina to other countries. What's the best way to do that? This isn't a question with an obvious best answer, but we can try some things, see how they look. One thing we could do is compare Argentina to the mean or median. Or to some other feature of the distribution.
We work up to this by looking first at some features of the distribution of government debt numbers across countries. Some of this we've seen, some is new.
What's (not) there?
Let's check out the data first. How many non-missing values do we have at each date? We can do that with the count method. The argument axis=1 says to do this by date, counting across columns (axis number 1).
End of explanation
"""
# 2001 data
db01 = db['2001']
db01['ARG']
db01.mean()
db01.median()
db01.describe()
db01.quantile(q=[0.25, 0.5, 0.75])
"""
Explanation: Describing series
Let's take the data for 2001 -- the year of Argentina's default -- and see how Argentina compares. Was its debt high compared to other countries?
That leads to more questions. How would we compare? Compare Argentina to the mean or median? Something else?
Let's see how that works.
End of explanation
"""
fig, ax = plt.subplots()
db01.hist(bins=15, ax=ax, alpha=0.35)
ax.set_xlabel('Government Debt (Percent of GDP)')
ax.set_ylabel('Number of Countries')
ymin, ymax = ax.get_ylim()
ax.vlines(db01['ARG'], ymin, ymax, color='blue', lw=2)
"""
Explanation: Comment. If we add enough quantiles, we might as well plot the whole distribution. The easiest way to do this is with a histogram.
End of explanation
"""
# here we compute the mean across countries at every date
dbt.mean(axis=1).head()
# or we could do the median
dbt.median(axis=1).head()
# or a bunch of stats at once
# NB: db not dbt (there's no axis argument here)
db.describe()
# the other way
dbt.describe()
"""
Explanation: Comment. Compared to the whole sample of countries in 2001, it doesn't seem that Argentina had particularly high debt.
Describing dataframes
We can compute the same statistics for dataframes. Here we have a choice: we can compute (say) the mean down rows (axis=0) or across columns (axis=1). If we use the dataframe dbt, computing the mean across countries (columns) calls for axis=1.
End of explanation
"""
fig, ax = plt.subplots()
dbt.plot(ax=ax,
legend=False, color='blue', alpha=0.2,
ylim=(0,200)
)
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
dbt.mean(axis=1).plot(ax=ax, color='black', linewidth=2, linestyle='dashed')
"""
Explanation: Example. Let's add the mean to our graph. We make it a dashed line with linestyle='dashed'.
End of explanation
"""
dbar = dbt.mean().mean()
dbar
fig, ax = plt.subplots()
dbt.plot(ax=ax,
legend=False, color='blue', alpha=0.3,
ylim=(0,150)
)
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
xmin, xmax = ax.get_xlim()
ax.hlines(dbar, xmin, xmax, linewidth=2, linestyle='dashed')
"""
Explanation: Question. Do you think this looks better when the mean varies with time, or when we use a constant mean? Let's try it and see.
End of explanation
"""
url = 'http://pages.stern.nyu.edu/~dbackus/Data/mlcombined.csv'
ml = pd.read_csv(url, index_col=0,encoding = "ISO-8859-1")
print('Dimensions:', ml.shape)
# fix up the dates
ml["timestamp"] = pd.to_datetime(ml["timestamp"], unit="s")
ml.head(10)
# which movies have the most ratings?
ml['title'].value_counts().head(10)
ml['title'].value_counts().head(10).plot.barh(alpha=0.5)
# which people have rated the most movies?
ml['userId'].value_counts().head(10)
"""
Explanation: Exercise. Which do we like better?
Exercise. Replace the (constant) mean with the (constant) median? Which do you prefer?
<a id=value-counts></a>
Describing categorical data
A categorical variable is one that takes on a small number of values. States take on one of fifty values. University students are either grad or undergrad. Students select majors and concentrations.
We're going to do two things with categorical data:
In this section, we count the number of observations in each category using the value_counts method. This is a series method, we apply it to one series/variable at a time.
In the next section, we go on to describe how other variables differ across catagories. How do students who major in finance differ from those who major in English? And so on.
We start with the combined MovieLens data we constructed in the previous notebook.
End of explanation
"""
# group
g = ml[['title', 'rating']].groupby('title')
type(g)
"""
Explanation: <a id=groupby></a>
Grouping data
Next up: group data by some variable. As an example, how would we compute the average rating of each movie? If you think for a minute, you might think of these steps:
Group the data by movie: Put all the "Pulp Fiction" ratings in one bin, all the "Shawshank" ratings in another. We do that with the groupby method.
Compute a statistic (the mean, for example) for each group.
Pandas has tools that make that relatively easy.
End of explanation
"""
# the number in each category
g.count().head(10)
# what type of object have we created?
type(g.count())
"""
Explanation: Now that we have a groupby object, what can we do with it?
End of explanation
"""
counts = ml.groupby(['title', 'movieId']).count()
gm = g.mean()
gm.head()
# we can put them together
grouped = g.count()
grouped = grouped.rename(columns={'rating': 'Number'})
grouped['Mean'] = g.mean()
grouped.head(10)
grouped.plot.scatter(x='Number', y='Mean')
"""
Explanation: Comment. Note that the combination of groupby and count created a dataframe with
Its index is the variable we grouped by. If we group by more than one, we get a multi-index.
Its columns are the other variables.
Exercise. Take the code
python
counts = ml.groupby(['title', 'movieId'])
Without running it, what is the index of counts? What are its columns?
End of explanation
"""
|
gaufung/Data_Analytics_Learning_Note | Data_Analytics_in_Action/pandasIO.ipynb | mit | import numpy as np
import pandas as pd
csvframe=pd.read_csv('myCSV_01.csv')
csvframe
# read_table can also be used to read the data
pd.read_table('myCSV_01.csv',sep=',')
"""
Explanation: Pandas data input and output
API
Read | Write
--- | ---
read_csv | to_csv
read_excel | to_excel
read_hdf | to_hdf
read_sql | to_sql
read_json | to_json
read_html | to_html
read_stata | to_stata
read_clipboard | to_clipboard
read_pickle | to_pickle
CSV file read/write
Contents of the CSV file:
white,read,blue,green,animal
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse
End of explanation
"""
pd.read_csv('myCSV_02.csv',header=None)
"""
Explanation: Reading data without a header
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse
End of explanation
"""
pd.read_csv('myCSV_02.csv',names=['white','red','blue','green','animal'])
"""
Explanation: A header can be supplied explicitly
End of explanation
"""
pd.read_csv('myCSV_03.csv',index_col=['colors','status'])
"""
Explanation: To create a DataFrame object with a hierarchical index, add the index_col option. The data file format:
colors,status,item1,item2,item3
black,up,3,4,6
black,down,2,6,7
white,up,5,5,5
white,down,3,3,2
red,up,2,2,2
red,down,1,1,4
End of explanation
"""
pd.read_csv('myCSV_04.csv',sep='\s+')
"""
Explanation: Parsing TXT files with regexps
A regular expression can be passed as sep to parse the data file.
Regex element | Meaning
--- | ---
. | any character except a newline
\d | digit
\D | non-digit character
\s | whitespace character
\S | non-whitespace character
\n | newline
\t | tab
\uxxxx | Unicode character written in hexadecimal
The data file is delimited by an arbitrary mix of tabs and spaces:
white red blue green
1 4 3 2
2 4 6 7
End of explanation
"""
pd.read_csv('myCSV_05.csv',sep='\D*',header=None,engine='python')
"""
Explanation: Reading data whose values are separated by letters
000end123aaa122
001end125aaa144
End of explanation
"""
pd.read_table('myCSV_06.csv',sep=',',skiprows=[0,1,3,6])
"""
Explanation: Reading a text file while skipping unwanted lines
```
log file
this file has been generate by automatic system
white,red,blue,green,animal
12-feb-2015:counting of animals inside the house
1,3,5,2,cat
2,4,8,5,dog
13-feb-2015:counting of animals inside the house
3,3,6,7,horse
2,2,8,3,duck
```
End of explanation
"""
pd.read_csv('myCSV_02.csv',skiprows=[2],nrows=3,header=None)
"""
Explanation: Reading part of a TXT file
To read only a portion of a file, explicitly specify which rows to parse using the skiprows and nrows options: skip the listed rows, then read nrows rows from the starting row (nrows=i).
End of explanation
"""
out = pd.Series(dtype=float)
i = 0
pieces = pd.read_csv('myCSV_01.csv', chunksize=3)
for piece in pieces:
    print(piece)
    out.loc[i] = piece['white'].sum()
    i += 1
out
"""
Explanation: Example:
For one column, read the data in chunks of three rows, sum each chunk, and append each partial sum to a Series object.
End of explanation
"""
frame = pd.DataFrame(np.arange(4).reshape((2,2)))
print(frame.to_html())
"""
Explanation: Writing to files
to_csv(filename)
to_csv(filename, index=False, header=False)
to_csv(filename, na_rep='NaN')
Reading and writing HTML files
Writing an HTML file
End of explanation
"""
frame = pd.DataFrame(np.random.random((4,4)),
index=['white','black','red','blue'],
columns=['up','down','left','right'])
frame
s = ['<HTML>']
s.append('<HEAD><TITLE>MY DATAFRAME</TITLE></HEAD>')
s.append('<BODY>')
s.append(frame.to_html())
s.append('</BODY></HTML>')
html=''.join(s)
with open('myFrame.html','w') as html_file:
html_file.write(html)
"""
Explanation: Creating a more complex DataFrame
End of explanation
"""
web_frames = pd.read_html('myFrame.html')
web_frames[0]
# a URL can also be passed as the argument
ranking = pd.read_html('http://www.meccanismocomplesso.org/en/meccanismo-complesso-sito-2/classifica-punteggio/')
ranking[0]
"""
Explanation: Reading HTML tables
End of explanation
"""
from lxml import objectify
xml = objectify.parse('books.xml')
xml
root =xml.getroot()
root.Book.Author
root.Book.PublishDate
root.getchildren()
[child.tag for child in root.Book.getchildren()]
[child.text for child in root.Book.getchildren()]
def etree2df(root):
column_names=[]
for i in range(0,len(root.getchildren()[0].getchildren())):
column_names.append(root.getchildren()[0].getchildren()[i].tag)
xml_frame = pd.DataFrame(columns=column_names)
for j in range(0,len(root.getchildren())):
obj = root.getchildren()[j].getchildren()
texts = []
for k in range(0,len(column_names)):
texts.append(obj[k].text)
row = dict(zip(column_names,texts))
row_s=pd.Series(row)
row_s.name=j
xml_frame = xml_frame.append(row_s)
return xml_frame
etree2df(root)
"""
Explanation: Reading and writing XML files
This uses the third-party library lxml.
End of explanation
"""
pd.read_excel('data.xlsx')
pd.read_excel('data.xlsx','Sheet2')
frame = pd.DataFrame(np.random.random((4,4)),
index=['exp1','exp2','exp3','exp4'],
columns=['Jan2015','Feb2015','Mar2015','Apr2015'])
frame
frame.to_excel('data2.xlsx')
"""
Explanation: Reading and writing Excel files
End of explanation
"""
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['white','black','red','blue'],
columns=['up','down','right','left'])
frame.to_json('frame.json')
# read the JSON back
pd.read_json('frame.json')
"""
Explanation: JSON data
End of explanation
"""
from pandas.io.pytables import HDFStore
store = HDFStore('mydata.h5')
store['obj1']=frame
store['obj1']
"""
Explanation: HDF5 data
HDF (hierarchical data format) files store data in a binary format.
End of explanation
"""
frame.to_pickle('frame.pkl')
pd.read_pickle('frame.pkl')
"""
Explanation: pickle data
End of explanation
"""
frame=pd.DataFrame(np.arange(20).reshape((4,5)),
columns=['white','red','blue','black','green'])
frame
from sqlalchemy import create_engine
engine = create_engine('sqlite:///foo.db')
frame.to_sql('colors', engine)
pd.read_sql('colors', engine)
"""
Explanation: Database connections
Illustrated here using sqlite3 (through SQLAlchemy).
End of explanation
"""
|
google-research/computation-thru-dynamics | experimental/notebooks/Contextual Integration RNN Tutorial.ipynb | apache-2.0 | # Numpy, JAX, Matplotlib and h5py should all be correctly installed and on the python path.
from __future__ import print_function, division, absolute_import
import datetime
import h5py
import jax.numpy as np
from jax import random
from jax.experimental import optimizers
import matplotlib.pyplot as plt
import numpy as onp # original CPU-backed NumPy
import os
import sys
import time
os.environ['XLA_PYTHON_CLIENT_PREALLOCATE'] = 'False'
from importlib import reload
# Import the tutorial code.
# You must change this to the location of computation-thru-dynamics directory.
HOME_DIR = '/home/youngjujo/'
sys.path.append(os.path.join(HOME_DIR,'computation-thru-dynamics/experimental'))
import contextual_integrator_rnn_tutorial.integrator as integrator
import contextual_integrator_rnn_tutorial.rnn as rnn
import contextual_integrator_rnn_tutorial.utils as utils
"""
Explanation: Contextual Integration RNN tutorial
In this notebook, we train a vanilla RNN to integrate one of two streams of white noise. This example is useful on its own to understand how RNN training works, and how to use JAX. In addition, it provides the input for the LFADS JAX Gaussian Mixture model notebook.
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Imports
End of explanation
"""
# Integration parameters
T = 1.0 # Arbitrary amount time, roughly physiological.
ntimesteps = 25 # Divide T into this many bins
bval = 0.01 # bias value limit
sval = 0.025 # standard deviation (before dividing by sqrt(dt))
input_params = (bval, sval, T, ntimesteps)
# Integrator RNN hyperparameters
u = 4 # Number of inputs to the RNN
n = 100 # Number of units in the RNN
o = 1 # Number of outputs in the RNN
# The scaling of the recurrent parameters in an RNN really matters.
# The correct scaling is 1/sqrt(number of recurrent inputs), which
# yields an order 1 signal output to a neuron if the input is order 1.
# Given that VRNN uses a tanh nonlinearity, with min and max output
# values of -1 and 1, this works out. The scaling just below 1
# (0.95) is because we know we are making a line attractor so, we
# might as well start it off basically right 1.0 is also basically
# right, but perhaps will lead to crazier dynamics.
param_scale = 0.8 # Scaling of the recurrent weight matrix
# Optimization hyperparameters
num_batchs = 10000 # Total number of batches to train on.
batch_size = 128 # How many examples in each batch
eval_batch_size = 1024 # How large a batch for evaluating the RNN
step_size = 0.025 # initial learning rate
decay_factor = 0.99975 # decay the learning rate this much
# Gradient clipping is HUGELY important for training RNNs
max_grad_norm = 10.0 # max gradient norm before clipping, clip to this value.
l2reg = 0.0002 # amount of L2 regularization on the weights
adam_b1 = 0.9 # Adam parameters
adam_b2 = 0.999
adam_eps = 1e-1
print_every = 100 # Print training information every so often
# JAX handles randomness differently than numpy or matlab.
# one threads the randomness through to each function.
# It's a bit tedious, but very easy to understand and with
# reliable effect.
seed = onp.random.randint(0, 1000000) # get randomness from CPU level numpy
print("Seed: %d" % seed)
key = random.PRNGKey(seed) # create a random key for jax for use on device.
# Plot a few input/target examples to make sure things look sane.
ntoplot = 10 # how many examples to plot
# With this split command, we are always getting a new key from the old key,
# and I use first key as as source of randomness for new keys.
# key, subkey = random.split(key, 2)
# ## do something random with subkey
# key, subkey = random.split(key, 2)
# ## do something random with subkey
# In this way, the same top level randomness source stays random.
# The number of examples to plot is given by the number of
# random keys in this function.
key, skey = random.split(key, 2)
skeys = random.split(skey, ntoplot) # get ntoplot random keys
reload(integrator)
inputs, targets = integrator.build_inputs_and_targets(input_params, skeys)
# Plot the input to the RNN and the target for the RNN.
integrator.plot_batch(ntimesteps, inputs, targets, ntoplot=1)
# Init some parameters for training.
reload(rnn)
key = random.PRNGKey(onp.random.randint(100000000))
init_params = rnn.random_vrnn_params(key, u, n, o, g=param_scale)
rnn.plot_params(init_params)
# Create a decay function for the learning rate
decay_fun = optimizers.exponential_decay(step_size, decay_steps=1,
decay_rate=decay_factor)
batch_idxs = onp.linspace(1, num_batchs)
plt.plot(batch_idxs, [decay_fun(b) for b in batch_idxs])
plt.axis('tight')
plt.xlabel('Batch number')
plt.ylabel('Learning rate');
"""
Explanation: Hyperparameters
End of explanation
"""
reload(rnn)
# Initialize the optimizer. Please see jax/experimental/optimizers.py
opt_init, opt_update, get_params = optimizers.adam(decay_fun, adam_b1, adam_b2, adam_eps)
opt_state = opt_init(init_params)
# Run the optimization loop, first jit'd call will take a minute.
start_time = time.time()
all_train_losses = []
for batch in range(num_batchs):
key = random.fold_in(key, batch)
skeys = random.split(key, batch_size)
inputs, targets = integrator.build_inputs_and_targets_jit(input_params, skeys)
opt_state = rnn.update_w_gc_jit(batch, opt_state, opt_update, get_params, inputs,
targets, max_grad_norm, l2reg)
if batch % print_every == 0:
params = get_params(opt_state)
all_train_losses.append(rnn.loss_jit(params, inputs, targets, l2reg))
train_loss = all_train_losses[-1]['total']
batch_time = time.time() - start_time
step_size = decay_fun(batch)
s = "Batch {} in {:0.2f} sec, step size: {:0.5f}, training loss {:0.4f}"
print(s.format(batch, batch_time, step_size, train_loss))
start_time = time.time()
# List of dicts to dict of lists
all_train_losses = {k: [dic[k] for dic in all_train_losses] for k in all_train_losses[0]}
# Show the loss through training.
xlims = [2, 50]
plt.figure(figsize=(16,4))
plt.subplot(141)
plt.plot(all_train_losses['total'][xlims[0]:xlims[1]], 'k')
plt.title('Total')
plt.subplot(142)
plt.plot(all_train_losses['lms'][xlims[0]:xlims[1]], 'r')
plt.title('Least mean square')
plt.subplot(143)
plt.plot(all_train_losses['l2'][xlims[0]:xlims[1]], 'g');
plt.title('L2')
plt.subplot(144)
plt.plot(all_train_losses['total'][xlims[0]:xlims[1]], 'k')
plt.plot(all_train_losses['lms'][xlims[0]:xlims[1]], 'r')
plt.plot(all_train_losses['l2'][xlims[0]:xlims[1]], 'g')
plt.title('All losses')
"""
Explanation: Train the VRNN
End of explanation
"""
# Take a batch for an evalulation loss, notice the L2 penalty is 0
# for the evaluation.
params = get_params(opt_state)
key, subkey = random.split(key, 2)
skeys = random.split(subkey, batch_size)
inputs, targets = integrator.build_inputs_and_targets_jit(input_params, skeys)
eval_loss = rnn.loss_jit(params, inputs, targets, l2reg=0.0)['total']
eval_loss_str = "{:.5f}".format(eval_loss)
print("Loss on a new large batch: %s" % (eval_loss_str))
"""
Explanation: Testing
End of explanation
"""
reload(rnn)
# Visualize how good this trained integrator is
def inputs_targets_no_h0s(keys):
inputs_b, targets_b = \
integrator.build_inputs_and_targets_jit(input_params, keys)
h0s_b = None # Use trained h0
return inputs_b, targets_b, h0s_b
rnn_run = lambda inputs: rnn.batched_rnn_run(params, inputs)
give_trained_h0 = lambda batch_size : np.array([params['h0']] * batch_size)
rnn_internals = rnn.run_trials(rnn_run, inputs_targets_no_h0s, 1, 16)
integrator.plot_batch(ntimesteps, rnn_internals['inputs'],
rnn_internals['targets'], rnn_internals['outputs'],
onp.abs(rnn_internals['targets'] - rnn_internals['outputs']))
# Visualize the hidden state, as an example.
reload(rnn)
rnn.plot_examples(ntimesteps, rnn_internals, nexamples=4)
# Take a look at the trained parameters.
rnn.plot_params(params)
"""
Explanation: Visualizations of trained system
End of explanation
"""
# Define directories, etc.
task_type = 'contextual_int'
rnn_type = 'vrnn'
fname_uniquifier = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S")
data_dir = os.path.join(os.path.join('/tmp', rnn_type), task_type)
print(data_dir)
print(fname_uniquifier)
# Save parameters
params_fname = ('trained_params_' + rnn_type + '_' + task_type + '_' + \
eval_loss_str + '_' + fname_uniquifier + '.h5')
params_fname = os.path.join(data_dir, params_fname)
print("Saving params in %s" % (params_fname))
utils.write_file(params_fname, params)
"""
Explanation: Saving
End of explanation
"""
nsave_batches = 20 # Save about 20000 trials
h0_ntimesteps = 30 # First few steps would generate a distribution of initial conditions
h0_input_params = (bval, sval,
T * float(h0_ntimesteps) / float(ntimesteps),
h0_ntimesteps)
def get_h0s_inputs_targets_h0s(keys):
inputs_bxtxu, targets_bxtxm = \
integrator.build_inputs_and_targets_jit(h0_input_params, keys)
h0s = give_trained_h0(len(keys))
return (inputs_bxtxu, targets_bxtxm, h0s)
rnn_run_w_h0 = lambda inputs, h0s: rnn.batched_rnn_run_w_h0(params, inputs, h0s)
h0_data_dict = rnn.run_trials(rnn_run_w_h0, get_h0s_inputs_targets_h0s,
                              nsave_batches, eval_batch_size)
data_dict = {}
data_dict['inputs'] = h0_data_dict['inputs'][:,5:,:]
data_dict['hiddens'] = h0_data_dict['hiddens'][:,5:,:]
data_dict['outputs'] = h0_data_dict['outputs'][:,5:,:]
data_dict['targets'] = h0_data_dict['targets'][:,5:,:]
data_dict['h0s'] = h0_data_dict['hiddens'][:,5,:]
rnn.plot_examples(ntimesteps, data_dict, nexamples=4)
data_fname = ('trained_data_' + rnn_type + '_' + task_type + '_' + \
eval_loss_str + '_' + fname_uniquifier + '.h5')
data_fname = os.path.join(data_dir, data_fname)
print("Saving data in %s" %(data_fname))
utils.write_file(data_fname, data_dict)
"""
Explanation: Create per-trial initial conditions along the line attractor.
Let's create some per-trial initial conditions by running the integrator for a few time steps. This will make comparing the learned initial states in the LFADS tutorial easier to visualize.
End of explanation
"""
|
QuantConnect/Lean | Tests/Research/RegressionTemplates/BasicTemplateResearchPython.ipynb | apache-2.0 | import warnings
warnings.filterwarnings("ignore")
# Load in our startup script, required to set runtime for PythonNet
%run ./start.py
# Create an instance
qb = QuantBook()
# Select asset data
spy = qb.AddEquity("SPY")
"""
Explanation: Welcome to The QuantConnect Research Page
Refer to this page for documentation https://www.quantconnect.com/docs/research/overview
Contribute to this template file https://github.com/QuantConnect/Lean/blob/master/Research/BasicQuantBookTemplate.ipynb
QuantBook Basics
Start QuantBook
Add the references and imports
Create a QuantBook instance
End of explanation
"""
startDate = DateTime(2021,1,1)
endDate = DateTime(2021,12,31)
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution
h1 = qb.History(qb.Securities.Keys, startDate, endDate, Resolution.Daily)
if h1.shape[0] < 1:
raise Exception("History request resulted in no data")
"""
Explanation: Historical Data Requests
We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol.
For more information, please follow the link.
End of explanation
"""
# Example with BB, it is a datapoint indicator
# Define the indicator
bb = BollingerBands(30, 2)
# Gets historical data of indicator
bbdf = qb.Indicator(bb, "SPY", startDate, endDate, Resolution.Daily)
# drop undesired fields
bbdf = bbdf.drop('standarddeviation', axis=1)
if bbdf.shape[0] < 1:
raise Exception("Bollinger Bands resulted in no data")
"""
Explanation: Indicators
We can easily get the indicator of a given symbol with QuantBook.
For all indicators, please checkout QuantConnect Indicators Reference Table
End of explanation
"""
|
kthouz/NYC_Green_Taxi | NYC Green Taxi.ipynb | mit | # Download the September 2015 dataset
if os.path.exists('data_september_2015.csv'): # Check if the dataset is present on local disk and load it
data = pd.read_csv('data_september_2015.csv')
else: # Download dataset if not available on disk
url = "https://s3.amazonaws.com/nyc-tlc/trip+data/green_tripdata_2015-09.csv"
data = pd.read_csv(url)
data.to_csv(url.split('/')[-1])
# Print the size of the dataset
print "Number of rows:", data.shape[0]
print "Number of columns: ", data.shape[1]
"""
Explanation: NYC Green Taxi
In this notebook, I will explore data on New York City Green Taxi of september 2015. I will start with some warm up questions about the dataset. Later, I will build a model to predict the percentage tip a driver would exepect on each trip. The code is fully written in python with few additional open-source libraries easy to install.
- shapely
- scikit learn
- tabulate
In this analysis, some notions of statistics and hypothesis testing are used, but they are very easy to follow. This handbook of statistics can be used as a reference for the basics.
Warm up
Let's first download the dataset and print out its size
End of explanation
"""
# define the figure with 2 subplots
fig,ax = plt.subplots(1,2,figsize = (15,4))
# histogram of the number of trip distance
data.Trip_distance.hist(bins=30,ax=ax[0])
ax[0].set_xlabel('Trip Distance (miles)')
ax[0].set_ylabel('Count')
ax[0].set_yscale('log')
ax[0].set_title('Histogram of Trip Distance with outliers included')
# create a vector to contain Trip Distance
v = data.Trip_distance
# exclude any data point located further than 3 standard deviations of the median point and
# plot the histogram with 30 bins
v[~((v-v.median()).abs()>3*v.std())].hist(bins=30,ax=ax[1]) #
ax[1].set_xlabel('Trip Distance (miles)')
ax[1].set_ylabel('Count')
ax[1].set_title('A. Histogram of Trip Distance (without outliers)')
# apply a lognormal fit. Use the mean of trip distance as the scale parameter
scatter,loc,mean = lognorm.fit(data.Trip_distance.values,
scale=data.Trip_distance.mean(),
loc=0)
pdf_fitted = lognorm.pdf(np.arange(0,12,.1),scatter,loc,mean)
ax[1].plot(np.arange(0,12,.1),600000*pdf_fitted,'r')
ax[1].legend(['data','lognormal fit'])
# export the figure
plt.savefig('Question2.jpeg',format='jpeg')
plt.show()
"""
Explanation: Let's have a look at the distribution of trip distance
End of explanation
"""
# First, convert the pickup and dropoff datetime variables to the proper datetime format
data['Pickup_dt'] = data.lpep_pickup_datetime.apply(lambda x:dt.datetime.strptime(x,"%Y-%m-%d %H:%M:%S"))
data['Dropoff_dt'] = data.Lpep_dropoff_datetime.apply(lambda x:dt.datetime.strptime(x,"%Y-%m-%d %H:%M:%S"))
# Second, create a variable for pickup hours
data['Pickup_hour'] = data.Pickup_dt.apply(lambda x:x.hour)
# Mean and Median of trip distance by pickup hour
# I will generate the table but also generate a plot for a better visualization
fig,ax = plt.subplots(1,1,figsize=(9,5)) # prepare fig to plot mean and median values
# use a pivot table to aggregate Trip_distance by hour
table1 = data.pivot_table(index='Pickup_hour', values='Trip_distance',aggfunc=('mean','median')).reset_index()
# rename columns
table1.columns = ['Hour','Mean_distance','Median_distance']
table1[['Mean_distance','Median_distance']].plot(ax=ax)
plt.ylabel('Metric (miles)')
plt.xlabel('Hours after midnight')
plt.title('Distribution of trip distance by pickup hour')
#plt.xticks(np.arange(0,30,6)+0.35,range(0,30,6))
plt.xlim([0,23])
plt.savefig('Question3_1.jpeg',format='jpeg')
plt.show()
print '-----Trip distance by hour of the day-----\n'
print tabulate(table1.values.tolist(),["Hour","Mean distance","Median distance"])
"""
Explanation: The trip distance is asymmetrically distributed. It is skewed to the right, and its median is smaller than its mean, both of which are smaller than the standard deviation. The skewness is due to the fact that the variable has a lower bound of 0: a distance can't be negative. This distribution has the structure of a lognormal distribution. On the left is plotted the distribution of the entire raw set of trip distances. On the right, outliers have been removed before plotting. Outliers are defined as any point located further than 3 standard deviations from the median.
The hypothesis: the trips are not random. If they were random, we would expect a (symmetric) Gaussian distribution. The departure from randomness may be related to the fact that riders are pushed by a common cause, for instance, people rushing to work.
Let's see if the time of the day has any impact on the trip distance
End of explanation
"""
# select airport trips
airports_trips = data[(data.RateCodeID==2) | (data.RateCodeID==3)]
print "Number of trips to/from NYC airports: ", airports_trips.shape[0]
print "Average fare (calculated by the meter) of trips to/from NYC airports: $", airports_trips.Fare_amount.mean(),"per trip"
print "Average total charged amount (before tip) of trips to/from NYC airports: $", airports_trips.Total_amount.mean(),"per trip"
"""
Explanation: -> We observe long-range trips in the mornings and evenings. Are these people commuting to work? If so, how do they get back home? The evening peak is shorter than the morning peak. I would hypothesize that people are willing to take cabs in the morning to avoid being late to early appointments, while they take public transportation in the evening. However, this might not apply to NYC.
Let's also compare trips that originate from (or terminate at) one of the NYC airports. We can look at how many there are, the average fare, ...
Reading through the dictionary of variables, I found that the variable RateCodeID contains values indicating the final rate that was applied. Among those values, I noticed Newark and JFK, which are the major airports in New York. In this part, I will use this knowledge and group data with RateCodeID 2 (JFK) and 3 (Newark). An alternative (which I didn't pursue due to time constraints) is to (1) get airport coordinates from Google Maps or http://transtats.bts.gov, (2) get at least 4 points defining a rectangular buffer zone near each airport, (3) build a polygon shape using shapely [https://pypi.python.org/pypi/Shapely], and (4) check whether any pickup/dropoff location coordinates fall within the polygon, again using shapely. This method was tried first but was found to be time-consuming.
End of explanation
"""
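The polygon-buffer alternative described above (but not pursued) can be sketched as follows. This is a minimal, dependency-free stand-in that uses a ray-casting test instead of shapely's `Point.within`, and the JFK corner coordinates are illustrative placeholders, not real surveyed values.

```python
# NOTE: the jfk_zone corners below are rough placeholders for illustration,
# and point_in_polygon is a dependency-free stand-in for shapely's Point.within.

def point_in_polygon(lat, lon, poly):
    """Standard ray-casting point-in-polygon test; poly is a list of (lat, lon)."""
    inside = False
    n = len(poly)
    for i in range(n):
        lat1, lon1 = poly[i]
        lat2, lon2 = poly[(i + 1) % n]
        # does this edge straddle the horizontal line through our point?
        if (lat1 > lat) != (lat2 > lat):
            lon_cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < lon_cross:
                inside = not inside
    return inside

# approximate rectangular buffer zone around JFK (placeholder corners)
jfk_zone = [(40.66, -73.80), (40.66, -73.76), (40.63, -73.76), (40.63, -73.80)]

def is_airport_trip(pickup, dropoff, zone=jfk_zone):
    """True if either (lat, lon) endpoint falls inside the buffer zone."""
    return point_in_polygon(*pickup, poly=zone) or point_in_polygon(*dropoff, poly=zone)
```

In practice one such zone would be built per airport, and the RateCodeID grouping above remains the cheaper option.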
# create a vector to contain Trip Distance for
v2 = airports_trips.Trip_distance # airport trips
v3 = data.loc[~data.index.isin(v2.index),'Trip_distance'] # non-airport trips
# remove outliers:
# exclude any data point located further than 3 standard deviations of the median point and
# plot the histogram with 30 bins
v2 = v2[~((v2-v2.median()).abs()>3*v2.std())]
v3 = v3[~((v3-v3.median()).abs()>3*v3.std())]
# define bins boundaries
bins = np.histogram(v2,normed=True)[1]
h2 = np.histogram(v2,bins=bins,normed=True)
h3 = np.histogram(v3,bins=bins,normed=True)
# plot distributions of trip distance normalized among groups
fig,ax = plt.subplots(1,2,figsize = (15,4))
w = .4*(bins[1]-bins[0])
ax[0].bar(bins[:-1],h2[0],alpha=1,width=w,color='b')
ax[0].bar(bins[:-1]+w,h3[0],alpha=1,width=w,color='g')
ax[0].legend(['Airport trips','Non-airport trips'],loc='best',title='group')
ax[0].set_xlabel('Trip distance (miles)')
ax[0].set_ylabel('Group normalized trips count')
ax[0].set_title('A. Trip distance distribution')
#ax[0].set_yscale('log')
# plot hourly distribution
airports_trips.Pickup_hour.value_counts(normalize=True).sort_index().plot(ax=ax[1])
data.loc[~data.index.isin(v2.index),'Pickup_hour'].value_counts(normalize=True).sort_index().plot(ax=ax[1])
ax[1].set_xlabel('Hours after midnight')
ax[1].set_ylabel('Group normalized trips count')
ax[1].set_title('B. Hourly distribution of trips')
ax[1].legend(['Airport trips','Non-airport trips'],loc='best',title='group')
plt.savefig('Question3_2.jpeg',format='jpeg')
plt.show()
"""
Explanation: In addition to the number and mean fare of airport trips, let's also have a look at how trips are distributed by trip distance and hour of the day
End of explanation
"""
data = data[(data.Total_amount>=2.5)] #cleaning
data['Tip_percentage'] = 100*data.Tip_amount/data.Total_amount
print "Summary: Tip percentage\n",data.Tip_percentage.describe()
"""
Explanation: A. The trip distance distribution shows two peaks. Airport trips follow the same trend as the rest of the trips for short trips (trip distance ≤ 2 miles). However, there is also an increased number of long-range trips (~18 miles), which might correspond to a great number of people coming to the airports from more distant residential areas. A check on Google Maps shows that the distance between JFK and Manhattan is about 18 miles, whereas Newark to Manhattan is 15 miles.
B. The hourly distribution shows that the number of airport trips peaks around 3PM, while the rest of the trips peaks two hours later, at 5PM. On the other hand, there is a shortage of airport riders at 2AM, while the rest of NYC reaches its minimum three hours later, at 5AM.
Predictive model
In this section, I am going to guide my analysis towards building a model to predict the percentage tip
1. Let's build a derived variable for tip as a percentage of the total fare.
Before we proceed with this, some cleaning is necessary.
Since the initial charge for a NYC green taxi is $2.5, any transaction with a smaller total amount is invalid and is therefore dropped
End of explanation
"""
# import library
from shapely.geometry import Point,Polygon,MultiPoint
# data points that define the bounding box of the Upper Manhattan
umanhattan = [(40.796937, -73.949503),(40.787945, -73.955822),(40.782772, -73.943575),
(40.794715, -73.929801),(40.811261, -73.934153),(40.835371, -73.934515),
(40.868910, -73.911145),(40.872719, -73.910765),(40.878252, -73.926350),
(40.850557, -73.947262),(40.836225, -73.949899),(40.806050, -73.971255)]
poi = Polygon(umanhattan)
# create a function to check if a location is located inside Upper Manhattan
def is_within_bbox(loc,poi=poi):
"""
This function returns 1 if a location loc(lat,lon) is located inside a polygon of interest poi
loc: tuple, (latitude, longitude)
poi: shapely.geometry.Polygon, polygon of interest
"""
return 1*(Point(loc).within(poi))
tic = dt.datetime.now()
# Create a new variable to check if a trip originated in Upper Manhattan
data['U_manhattan'] = data[['Pickup_latitude','Pickup_longitude']].apply(lambda r:is_within_bbox((r[0],r[1])),axis=1)
print "Processing time ", dt.datetime.now()-tic
"""
Explanation: 2. Similarly to the comparison between airport trips and the rest of the trips, it is worth checking whether trips originating from Upper Manhattan have a different percentage tip.
To identify trips originating from Upper Manhattan:
- From Google Maps, collect latitude and longitude data for at least 12 points that approximately define the bounding box of Upper Manhattan
- Create a polygon using shapely.geometry.Polygon [https://pypi.python.org/pypi/Shapely]
- Check if the polygon contains a location defined by (latitude,longitude)
End of explanation
"""
# create a vector to contain Tip percentage for
v1 = data[(data.U_manhattan==0) & (data.Tip_percentage>0)].Tip_percentage
v2 = data[(data.U_manhattan==1) & (data.Tip_percentage>0)].Tip_percentage
# generate bins and histogram values
bins = np.histogram(v1,bins=10)[1]
h1 = np.histogram(v1,bins=bins)
h2 = np.histogram(v2,bins=bins)
# generate the plot
# First suplot: visualize all data with outliers
fig,ax = plt.subplots(1,1,figsize=(10,5))
w = .4*(bins[1]-bins[0])
ax.bar(bins[:-1],h1[0],width=w,color='b')
ax.bar(bins[:-1]+w,h2[0],width=w,color='g')
ax.set_yscale('log')
ax.set_xlabel('Tip (%)')
ax.set_ylabel('Count')
ax.set_title('Tip')
ax.legend(['Non-Manhattan','Manhattan'],title='origin')
plt.show()
print 't-test results:', ttest_ind(v1,v2,equal_var=False)
"""
Explanation: Compare distributions of the two groups
End of explanation
"""
# Download the September 2015 dataset
if os.path.exists('data_september_2015.csv'): # Check if the dataset is present on local disk and load it
data = pd.read_csv('data_september_2015.csv')
else: # Download dataset if not available on disk
url = "https://s3.amazonaws.com/nyc-tlc/trip+data/green_tripdata_2015-09.csv"
data = pd.read_csv(url)
data.to_csv(url.split('/')[-1])
# Print the size of the dataset
print "Number of rows:", data.shape[0]
print "Number of columns: ", data.shape[1]
# create backup dataset
backup_data = data.copy()
"""
Explanation: The two distributions look similar; however, the t-test yields a near-zero p-value, implying that the two groups are different at the 95% confidence level
The Model
Summary
The initial dataset contained 1,494,926 transactions with 21 time-series, categorical, and numerical variables. In order to build the final model, four phases were followed: (1) data cleaning, (2) feature engineering, (3) exploratory data analysis, and (4) model creation.
The cleaning consisted of dropping zero-variance variables (Ehail_fee) and replacing invalid values: the most frequent value was used in each categorical variable, whereas the median was used for continuous numerical variables. Invalid values could be missing values or values not allowed for specific variables as per the dictionary of variables. In this part, variables were also converted to their appropriate format, such as datetime.
The feature engineering part created 10 new variables derived from pickup and dropoff locations and timestamps, and trip distance.
During the exploration, each variable was carefully analyzed and compared to the other variables and eventually to the target variable, percentage tip. All numerical variables were found to follow lognormal or power-law distributions, although no linear relationship was found between the numerical variables and the target. An interesting insight was uncovered in the distribution of the percentage tip: only 40% of the transactions paid a tip, and 99.99% of these payments were made by credit card. This inspired me to build the predictive model in two stages: (1) a classification model to find out whether a transaction will pay a tip, and (2) a regression model to predict the percentage of the tip, applied only if the transaction was classified as a tipper. Another insight was that the most frequent percentage is 18%, which corresponds to the usual restaurant gratuity rate.
With no linear relationship between the independent and dependent variables, the predictive model was built on top of the random forest regression and gradient boosting classification algorithms implemented in sklearn, after routines to optimize the best parameters. A usable script to make predictions is attached to this notebook and available in the same directory.
Note: The code to make predictions is provided in the same directory as tip_predictor.py and the instructions are in the recommendation part of this section.
In the following, each part of the analysis is fully explained with accompanying Python code
End of explanation
"""
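The parameter-optimization routines mentioned in the summary can be sketched as follows. This assumes the current sklearn API (`sklearn.model_selection`; the era of this notebook likely used `sklearn.grid_search`), and the data here is synthetic, purely to keep the example self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# synthetic stand-ins for the engineered features and the Tip_percentage target
rng = np.random.RandomState(42)
X = rng.rand(200, 4)
y = 20 * X[:, 0] + rng.randn(200)

# cross-validated grid search over a small random forest parameter grid
search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={'n_estimators': [10, 50], 'max_depth': [3, None]},
    cv=3,
    scoring='neg_mean_squared_error',
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the gradient boosting classifier used in the first stage; only the estimator and the grid change.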
# define a function to clean a loaded dataset
def clean_data(adata):
"""
This function cleans the input dataframe adata:
. drop Ehail_fee [99% transactions are NaNs]
. impute missing values in Trip_type
. replace invalid data by most frequent value for RateCodeID and Extra
. encode categorical to numeric
. rename pickup and dropoff time variables (for later use)
input:
adata: pandas.dataframe
output:
pandas.dataframe
"""
## make a copy of the input
data = adata.copy()
## drop Ehail_fee: 99% of its values are NaNs
if 'Ehail_fee' in data.columns:
data.drop('Ehail_fee',axis=1,inplace=True)
## replace missing values in Trip_type with the most frequent value 1
data['Trip_type '] = data['Trip_type '].replace(np.NaN,1)
## replace all values that are not allowed as per the variable dictionary with the most frequent allowable value
# remove negative values from Total amound and Fare_amount
print "Negative values found and replaced by their abs"
print "Total_amount", 100*data[data.Total_amount<0].shape[0]/float(data.shape[0]),"%"
print "Fare_amount", 100*data[data.Fare_amount<0].shape[0]/float(data.shape[0]),"%"
print "Improvement_surcharge", 100*data[data.improvement_surcharge<0].shape[0]/float(data.shape[0]),"%"
print "Tip_amount", 100*data[data.Tip_amount<0].shape[0]/float(data.shape[0]),"%"
print "Tolls_amount", 100*data[data.Tolls_amount<0].shape[0]/float(data.shape[0]),"%"
print "MTA_tax", 100*data[data.MTA_tax<0].shape[0]/float(data.shape[0]),"%"
data.Total_amount = data.Total_amount.abs()
data.Fare_amount = data.Fare_amount.abs()
data.improvement_surcharge = data.improvement_surcharge.abs()
data.Tip_amount = data.Tip_amount.abs()
data.Tolls_amount = data.Tolls_amount.abs()
data.MTA_tax = data.MTA_tax.abs()
# RateCodeID
indices_oi = data[~((data.RateCodeID>=1) & (data.RateCodeID<=6))].index
data.loc[indices_oi, 'RateCodeID'] = 2 # 2 was identified as the most frequent value
print round(100*len(indices_oi)/float(data.shape[0]),2),"% of values in RateCodeID were invalid.--> Replaced by the most frequent 2"
# Extra
indices_oi = data[~((data.Extra==0) | (data.Extra==0.5) | (data.Extra==1))].index
data.loc[indices_oi, 'Extra'] = 0 # 0 was identified as the most frequent value
print round(100*len(indices_oi)/float(data.shape[0]),2),"% of values in Extra were invalid.--> Replaced by the most frequent 0"
# Total_amount: the minimum charge is 2.5, so I will replace every thing less than 2.5 by the median 11.76 (pre-obtained in analysis)
indices_oi = data[(data.Total_amount<2.5)].index
data.loc[indices_oi,'Total_amount'] = 11.76
print round(100*len(indices_oi)/float(data.shape[0]),2),"% of values in total amount worth <$2.5.--> Replaced by the median 11.76"
# encode categorical to numeric (I avoid to use dummy to keep dataset small)
if data.Store_and_fwd_flag.dtype.name != 'int64':
data['Store_and_fwd_flag'] = (data.Store_and_fwd_flag=='Y')*1
# rename time stamp variables and convert them to the right format
print "renaming variables..."
data.rename(columns={'lpep_pickup_datetime':'Pickup_dt','Lpep_dropoff_datetime':'Dropoff_dt'},inplace=True)
print "converting timestamps variables to right format ..."
data['Pickup_dt'] = data.Pickup_dt.apply(lambda x:dt.datetime.strptime(x,"%Y-%m-%d %H:%M:%S"))
data['Dropoff_dt'] = data.Dropoff_dt.apply(lambda x:dt.datetime.strptime(x,"%Y-%m-%d %H:%M:%S"))
print "Done cleaning"
return data
# Run code to clean the data
data = clean_data(data)
"""
Explanation: 1 Data cleaning
This part concerns work done to treat invalid data.
Ehail_fee was removed since 99% of the data are missing
Missing values in Trip_type were replaced with the most common value, which was 1
Invalid data were found in:
RateCodeID: about 0.01% of the values were 99. These were replaced by the most common value 2
Extra: 0.08% of transactions had negative Extra. These were replaced by 0 as the most frequent
Total_amount, Fare_amount, improvement_surcharge, Tip_amount: 0.16% of values were negative. The cases were considered as being machine errors during the data entry. They were replaced by their absolute values. Furthermore, as the minimum Total_amount that is chargeable for any service is $2.5, every transaction falling below that amount was replaced by the median value of the Total_amount 11.76.
The code is provided below
End of explanation
"""
# Function to run the feature engineering
def engineer_features(adata):
"""
This function creates new variables based on variables present in the dataset adata. It creates:
. Week: int {1,2,3,4,5}, Week a transaction was done
. Week_day: int [0-6], day of the week a transaction was done
. Month_day: int [0-30], day of the month a transaction was done
. Hour: int [0-23], hour the day a transaction was done
. Shift_type: int {1=(7am to 3pm), 2=(3pm to 11pm) and 3=(11pm to 7am)}, shift of the day
. Speed_mph: float, speed of the trip
. Tip_percentage: float, target variable
. With_tip: int {0,1}, 1 = transaction with tip, 0 = transaction without tip
input:
adata: pandas.dataframe
output:
pandas.dataframe
"""
# make copy of the original dataset
data = adata.copy()
# derive time variables
print "deriving time variables..."
ref_week = dt.datetime(2015,9,1).isocalendar()[1] # first week of september in 2015
data['Week'] = data.Pickup_dt.apply(lambda x:x.isocalendar()[1])-ref_week+1
data['Week_day'] = data.Pickup_dt.apply(lambda x:x.isocalendar()[2])
data['Month_day'] = data.Pickup_dt.apply(lambda x:x.day)
data['Hour'] = data.Pickup_dt.apply(lambda x:x.hour)
#data.rename(columns={'Pickup_hour':'Hour'},inplace=True)
# create shift variable: 1=(7am to 3pm), 2=(3pm to 11pm) and 3=(11pm to 7am)
data['Shift_type'] = np.NAN
data.loc[data[(data.Hour>=7) & (data.Hour<15)].index,'Shift_type'] = 1
data.loc[data[(data.Hour>=15) & (data.Hour<23)].index,'Shift_type'] = 2
data.loc[data[data.Shift_type.isnull()].index,'Shift_type'] = 3
# Trip duration
print "deriving Trip_duration..."
data['Trip_duration'] = ((data.Dropoff_dt-data.Pickup_dt).apply(lambda x:x.total_seconds()/60.))
print "deriving direction variables..."
# create direction variable Direction_NS.
# This is 2 if taxi moving from north to south, 1 in the opposite direction and 0 otherwise
data['Direction_NS'] = (data.Pickup_latitude>data.Dropoff_latitude)*1+1
indices = data[(data.Pickup_latitude == data.Dropoff_latitude) & (data.Pickup_latitude!=0)].index
data.loc[indices,'Direction_NS'] = 0
# create direction variable Direction_EW.
# This is 2 if taxi moving from east to west, 1 in the opposite direction and 0 otherwise
data['Direction_EW'] = (data.Pickup_longitude>data.Dropoff_longitude)*1+1
indices = data[(data.Pickup_longitude == data.Dropoff_longitude) & (data.Pickup_longitude!=0)].index
data.loc[indices,'Direction_EW'] = 0
# create variable for Speed
print "deriving Speed. Make sure to check for possible NaNs and Inf vals..."
data['Speed_mph'] = data.Trip_distance/(data.Trip_duration/60)
# replace all NaNs values and values >240mph by a values sampled from a random distribution of
# mean 12.9 and standard deviation 6.8mph. These values were extracted from the distribution
indices_oi = data[(data.Speed_mph.isnull()) | (data.Speed_mph>240)].index
data.loc[indices_oi,'Speed_mph'] = np.abs(np.random.normal(loc=12.9,scale=6.8,size=len(indices_oi)))
print "Feature engineering done! :-)"
# Create a new variable to check if a trip originated in Upper Manhattan
print "checking where the trip originated..."
data['U_manhattan'] = data[['Pickup_latitude','Pickup_longitude']].apply(lambda r:is_within_bbox((r[0],r[1])),axis=1)
# create tip percentage variable
data['Tip_percentage'] = 100*data.Tip_amount/data.Total_amount
# create with_tip variable
data['With_tip'] = (data.Tip_percentage>0)*1
return data
# collected bounding box points
umanhattan = [(40.796937, -73.949503),(40.787945, -73.955822),(40.782772, -73.943575),
(40.794715, -73.929801),(40.811261, -73.934153),(40.835371, -73.934515),
(40.868910, -73.911145),(40.872719, -73.910765),(40.878252, -73.926350),
(40.850557, -73.947262),(40.836225, -73.949899),(40.806050, -73.971255)]
poi = Polygon(umanhattan)
# create a function to check if a location is located inside Upper Manhattan
def is_within_bbox(loc,poi=poi):
"""
This function checks if a location loc with lat and lon is located within the polygon of interest
input:
loc: tuple, (latitude, longitude)
poi: shapely.geometry.Polygon, polygon of interest
"""
return 1*(Point(loc).within(poi))
# run the code to create new features on the dataset
print "size before feature engineering:", data.shape
data = engineer_features(data)
print "size after feature engineering:", data.shape
# Uncomment to check for data validity.
# data.describe() .T
"""
Explanation: 2 Feature engineering
In this step, I intuitively created new variables derived from the current variables.
Time variables: Week, Month_day (day of month), Week_day (day of week), Hour (hour of day), Shift_type (shift period of the day), and Trip_duration. They were created under the hypothesis that people's willingness to tip may depend on the day of the week or time of day. For instance, people may be more relaxed and more likely to tip over the weekend. They were derived from the pickup time.
Trip directions: Direction_NS (is the cab moving north to south?) and Direction_EW (is the cab moving east to west?). These are components of the two main directions, vertical and horizontal. The hypothesis is that traffic may differ by direction, which may affect riders' enthusiasm for tipping. They were derived from pickup and dropoff coordinates.
Speed: this is the ratio of Trip_distance to Trip_duration. At this level, all entries with speeds higher than 240 mph were dropped, since this is the typical top speed for cars commonly used as taxis, in addition to the fact that the speed limit in NYC is 50 mph. An alternative filter threshold would be the highest posted speed limit in NYC, but it is sometimes violated.
With_tip: this identifies transactions with or without tips. This variable was created after discovering that 60% of transactions have 0 tip.
Having seen that the mean of trips from Upper Manhattan differs from the mean of other boroughs, this variable is considered in the model building as well. A further, deeper analysis would be to create a variable for the origin and destination of each trip. This was tried but was computationally excessive for my system. Here, whether or not a trip comes from Upper Manhattan is the only such variable used.
End of explanation
"""
## code to compare the two Tip_percentage identified groups
# split data in the two groups
data1 = data[data.Tip_percentage>0]
data2 = data[data.Tip_percentage==0]
# generate histograms to compare
fig,ax=plt.subplots(1,2,figsize=(14,4))
data.Tip_percentage.hist(bins = 20,normed=True,ax=ax[0])
ax[0].set_xlabel('Tip (%)')
ax[0].set_title('Distribution of Tip (%) - All transactions')
data1.Tip_percentage.hist(bins = 20,normed=True,ax=ax[1])
ax[1].set_xlabel('Tip (%)')
ax[1].set_title('Distribution of Tip (%) - Transaction with tips')
ax[1].set_ylabel('Group normed count')
plt.savefig('Question4_target_varc.jpeg',format='jpeg')
plt.show()
"""
Explanation: 3 Exploratory Data Analysis
This was the key phase of my analysis. A look at the distribution of the target variable, Tip_percentage, showed that 60% of all transactions did not give a tip (see figure below, left). A second peak at 18% corresponds to the usual NYC customary gratuity rate, which fluctuates between 18% and 25% (see figure below, right). Based on this information, the model can be built in two steps:
Create a classification model to predict whether a tip will be given or not. Here a new variable With_tip, equal to 1 (if there is a tip) and 0 (otherwise), was created.
Create a regression model for transactions with a non-zero tip.
End of explanation
"""
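The two-step scheme above can be sketched as a single prediction function. Here `clf` and `reg` are assumed to be an already-trained classifier (predicting With_tip) and regressor (predicting Tip_percentage for tippers); the function itself is a minimal illustration, not the packaged tip_predictor.py script.

```python
import numpy as np

def predict_tip_percentage(clf, reg, X):
    """Two-stage prediction: 0% for predicted non-tippers,
    regressed percentage for predicted tippers."""
    pct = np.zeros(len(X))
    tippers = np.asarray(clf.predict(X)) == 1   # stage 1: tip or no tip?
    if tippers.any():
        pct[tippers] = reg.predict(X[tippers])  # stage 2: how much?
    return pct
```

Splitting the problem this way keeps the regressor from being dominated by the 60% of transactions that tip exactly 0%.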
# Functions for exploratory data analysis
def visualize_continuous(df,label,method={'type':'histogram','bins':20},outlier='on'):
"""
function to quickly visualize continous variables
df: pandas.dataFrame
label: str, name of the variable to be plotted. It should be present in df.columns
method: dict, contains info on the type of plot to generate. It can be histogram or boxplot
outlier: {'on','off'}, Set it to off if you need to cut off outliers. Outliers are all those points
located at 3 standard deviations further from the mean
"""
# create vector of the variable of interest
v = df[label]
# define mean and standard deviation
m = v.mean()
s = v.std()
# prep the figure
fig,ax = plt.subplots(1,2,figsize=(14,4))
ax[0].set_title('Distribution of '+label)
ax[1].set_title('Tip % by '+label)
if outlier=='off': # remove outliers accordingly and update titles
v = v[(v-m)<=3*s]
ax[0].set_title('Distribution of '+label+'(no outliers)')
ax[1].set_title('Tip % by '+label+'(no outliers)')
if method['type'] == 'histogram': # plot the histogram
v.hist(bins = method['bins'],ax=ax[0])
if method['type'] == 'boxplot': # plot the box plot
df.loc[v.index].boxplot(label,ax=ax[0])
ax[1].plot(v,df.loc[v.index].Tip_percentage,'.',alpha=0.4)
ax[0].set_xlabel(label)
ax[1].set_xlabel(label)
ax[0].set_ylabel('Count')
ax[1].set_ylabel('Tip (%)')
def visualize_categories(df,catName,chart_type='histogram',ylimit=[None,None]):
"""
This functions helps to quickly visualize categorical variables.
This functions calls other functions generate_boxplot and generate_histogram
df: pandas.Dataframe
catName: str, variable name, it must be present in df
chart_type: {histogram,boxplot}, choose which type of chart to plot
ylim: tuple, list. Valid if chart_type is histogram
"""
print catName
cats = sorted(pd.unique(df[catName]))
if chart_type == 'boxplot': #generate boxplot
generate_boxplot(df,catName,ylimit)
elif chart_type == 'histogram': # generate histogram
generate_histogram(df,catName)
else:
pass
#=> calculate test statistics
groups = df[[catName,'Tip_percentage']].groupby(catName).groups #create groups
tips = df.Tip_percentage
if len(cats)<=2: # if there are only two groups use t-test
print ttest_ind(tips[groups[cats[0]]],tips[groups[cats[1]]])
else: # otherwise, use one_way anova test
# prepare the command to be evaluated
cmd = "f_oneway("
for cat in cats:
cmd+="tips[groups["+str(cat)+"]],"
cmd=cmd[:-1]+")"
print "one way anova test:", eval(cmd) #evaluate the command and print
print "Frequency of categories (%):\n",df[catName].value_counts(normalize=True)*100
def test_classification(df,label,yl=[0,50]):
"""
This function tests whether the means of the two groups, with_tip and without_tip, are different at the 95% confidence level.
It will also generate a box plot of the variable by tipping groups
label: str, label to test
yl: tuple or list (default = [0,50]), y limits on the ylabel of the boxplot
df: pandas.DataFrame (default = data)
Example: run <test_classification(data,'Fare_amount',[0,25])>
"""
if len(pd.unique(df[label]))==2: #check if the variable is categorical with only two categores and run chisquare test
vals=pd.unique(df[label])
gp1 = df[df.With_tip==0][label].value_counts().sort_index()
gp2 = df[df.With_tip==1][label].value_counts().sort_index()
print "chi-square test if", label, "can be used to distinguish transactions with tip and without tip"
print chisquare(gp1,gp2)
elif len(pd.unique(df[label]))>=10: #other wise run the t-test
df.boxplot(label,by='With_tip')
plt.ylim(yl)
plt.show()
print "t-test if", label, "can be used to distinguish transaction with tip and without tip"
print "results:",ttest_ind(df[df.With_tip==0][label].values,df[df.With_tip==1][label].values,False)
else:
pass
def generate_boxplot(df,catName,ylimit):
"""
generate boxplot of tip percentage by variable "catName" with ylim set to ylimit
df: pandas.Dataframe
catName: str
ylimit: tuple, list
"""
df.boxplot('Tip_percentage',by=catName)
#plt.title('Tip % by '+catName)
plt.title('')
plt.ylabel('Tip (%)')
if ylimit != [None,None]:
plt.ylim(ylimit)
plt.show()
def generate_histogram(df,catName):
"""
generate histogram of tip percentage by variable "catName" with ylim set to ylimit
df: pandas.Dataframe
catName: str
ylimit: tuple, list
"""
cats = sorted(pd.unique(df[catName]))
colors = plt.cm.jet(np.linspace(0,1,len(cats)))
hx = np.array(map(lambda x:round(x,1),np.histogram(df.Tip_percentage,bins=20)[1]))
fig,ax = plt.subplots(1,1,figsize = (15,4))
for i,cat in enumerate(cats):
vals = df[df[catName] == cat].Tip_percentage
h = np.histogram(vals,bins=hx)
w = 0.9*(hx[1]-hx[0])/float(len(cats))
plt.bar(hx[:-1]+w*i,h[0],color=colors[i],width=w)
plt.legend(cats)
plt.yscale('log')
plt.title('Distribution of Tip by '+catName)
plt.xlabel('Tip (%)')
"""
Explanation: Next, each variable's distribution and its relationship with the tip percentage were explored. A few functions were implemented to quickly explore those variables:
End of explanation
"""
# Example of exploration of the Fare_amount using the implented code:
visualize_continuous(data1,'Fare_amount',outlier='on')
test_classification(data,'Fare_amount',[0,25])
"""
Explanation: Starting with continuous variables, two main insights were discovered: a lognormal-like or power-law distribution of the fare amount, and a non-linear relationship between the tip percentage and the total amount. The tip percentage decreases as the fare amount increases but converges around 20%. The density of scattered points implies a high frequency of smaller tips at low Fare_amount. Can we say that people restrain themselves from tipping more as the cost of the ride becomes more expensive? Or, since the fare grows with the length and duration of the trip, can we say that riders get bored and don't appreciate the service they are getting? Many questions can be explored at this point
End of explanation
"""
# Code to generate the heat map to uncover hidden information in the cluster
# We will first source NYC boroughs shape files,
# then create polygons and check to which polygon each of the points of the cluster belongs
## download geojson of NYC boroughs
nyc_boros = json.loads(requests.get("https://raw.githubusercontent.com/dwillis/nyc-maps/master/boroughs.geojson").content)
# parse boros into Multipolygons
boros = {}
for f in nyc_boros['features']:
name = f['properties']['BoroName']
code = f['properties']['BoroCode']
polygons = []
for p in f['geometry']['coordinates']:
polygons.append(Polygon(p[0]))
boros[code] = {'name':name,'polygon':MultiPolygon(polygons=polygons)}
# create a function to assign each coordinate point to its borough
def find_borough(lat,lon):
"""
return the borough of a location given its latitude and longitude
lat: float, latitude
lon: float, longitude
"""
boro = 0 # initialize borough as 0
for k,v in boros.iteritems(): # update boro to the right key corresponding to the parent polygon
if v['polygon'].contains(Point(lon,lat)):
boro = k
break # break the loop once the borough is found
return [boro]
## Analyse the cluster now
# create data frame of boroughs
df = data1[data1.Trip_duration>=1350]
orig_dest = []
for v in df[['Pickup_latitude','Pickup_longitude','Dropoff_latitude','Dropoff_longitude']].values:
orig_dest.append((find_borough(v[0],v[1])[0],find_borough(v[2],v[3])[0]))
df2 = pd.DataFrame(orig_dest)
## create a pivot table for the heat map plot
df2['val']=1 # dummy variable
mat_cluster1 = df2.pivot_table(index=0,columns=1,values='val',aggfunc='count')
## generate the map
fig,ax = plt.subplots(1,2,figsize=(15,4))
im = ax[0].imshow(mat_cluster1)
ax[0].set_ylabel('From')
ax[0].set_xlabel('To')
ax[0].set_xticklabels(['','Other','Manhattan','Bronx','Brooklyn','Queens'],rotation='vertical')
ax[0].set_yticklabels(['','Other','Manhattan','Bronx','Brooklyn','Queens'])
ax[0].set_title('Cluster of rides with duration >1350 min')
fig.colorbar(im,ax=ax[0])
h = df.Hour.value_counts(normalize=True)
ax[1].bar(h.index,h.values,width = .4,color='b') # draw on the right-hand axes explicitly
h = data1.Hour.value_counts(normalize=True)
ax[1].bar(h.index+.4,h.values,width = .4,color='g')
ax[1].set_title('Hourly traffic: All rides vs cluster rides')
ax[1].legend(['cluster','all'],loc='best')
ax[1].set_xlabel('Hour')
ax[1].set_xticks(np.arange(25)+.4)
ax[1].set_xticklabels(range(25))
ax[1].set_ylabel('Normalized Count')
plt.savefig('duration_cluster.jpeg',format='jpeg')
plt.show()
"""
Explanation: A negative t-test statistic and a near-zero p-value imply that the mean of Total_amount differs significantly between the group of transactions with tips and the group with no tip. Therefore, this variable would be used to train the classification model.
Using the same function, a plot of the tip percentage as a function of trip duration showed a cluster of points at duration times greater than 1350 min (22.5 hours).
<img src="Q4_trip_duration.png">
These points look like outliers, since a trip of more than 22 hours within NYC doesn't make much sense. (Perhaps tourists manage it!)
The following code was used to analyze the cluster with trip durations greater than 1350 min
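The two-sample t-test behind this conclusion can be sketched with `scipy.stats.ttest_ind`. The arrays below are hypothetical stand-ins for the `Total_amount` values of the tipped and untipped groups (in the notebook the comparison is wrapped in a helper such as `test_classification`):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.RandomState(1)
# Hypothetical Total_amount samples for transactions with and without a tip
with_tip = rng.normal(loc=18.0, scale=5.0, size=2000)
no_tip = rng.normal(loc=14.0, scale=5.0, size=2000)

# Welch's t-test (equal_var=False); the null hypothesis is equal means
t_stat, p_val = ttest_ind(with_tip, no_tip, equal_var=False)
print(t_stat, p_val)
```

A large statistic with a tiny p-value rejects the hypothesis of equal means, which is the pattern reported in the text.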
End of explanation
"""
continuous_variables=['Total_amount','Fare_amount','Trip_distance','Trip_duration','Tolls_amount','Speed_mph','Tip_percentage']
cor_mat = data1[continuous_variables].corr()
#fig,ax = plt.subplots(1,1,figsize = [6,6])
plt.imshow(cor_mat)
plt.xticks(range(len(continuous_variables)),continuous_variables,rotation='vertical')
plt.yticks(range(len(continuous_variables)),continuous_variables)
plt.colorbar()
plt.title('Correlation between continuous variables')
plt.show()
#print cor_mat
"""
Explanation: The heat map color represents the number of trips between two given boroughs. We can see that the majority of the trips are intra-borough. There is a great number of trips from Brooklyn to Manhattan, whereas no Staten Island trip takes more than 1350 minutes. Are there specific hours for these events? Unfortunately, the distribution on the right shows that the cluster behaves the same as the rest of the traffic.
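The origin-destination counts behind such a heat map can be rebuilt with a `pivot_table` sketch. The borough codes below are synthetic (1 = Manhattan, 3 = Brooklyn, and so on, following the plot labels); only the bookkeeping is the point here:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(2)
# Synthetic (origin, destination) borough codes for cluster rides
orig = rng.choice([1, 2, 3, 4], size=500, p=[0.5, 0.1, 0.3, 0.1])
dest = rng.choice([1, 2, 3, 4], size=500, p=[0.6, 0.1, 0.2, 0.1])
df2 = pd.DataFrame({0: orig, 1: dest})

df2['val'] = 1  # dummy counter, as in the notebook
od_matrix = df2.pivot_table(index=0, columns=1, values='val', aggfunc='count')
print(od_matrix)
```

Each cell counts rides between one origin-destination pair, so the matrix can be passed straight to `imshow` as in the code above.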
Finally, a correlation heatmap was used to find which independent variables are correlated with each other. The following code constructs the correlation heatmap
End of explanation
"""
# exploration of the U_manhattan (trip originating from Upper Manhattan) variable
visualize_categories(data1,'U_manhattan','boxplot',[13,20])
test_classification(data,'U_manhattan')
"""
Explanation: A further analysis of all continuous variables revealed similar lognormal and non-linear behaviors. Since there is no linear relationship between the tip percentage and the other variables, a random forest algorithm will be considered for the regression part of the model.
As far as categorical variables are concerned, the function visualize_categories was used to explore each variable, as was done for the continuous numerical variables (see the demonstration below)
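The choice of a random forest can be illustrated with a small sketch: a forest fits a flattening, non-linear curve of this kind without any transformation of the inputs (synthetic data, not the notebook's model):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(3)
# A deliberately non-linear relationship, similar in spirit to the
# tip-percentage curves above: y flattens out as x grows.
x = rng.uniform(1, 50, size=1000)
y = 20 + 30 * np.exp(-x / 10) + rng.normal(0, 1, size=1000)

rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(x.reshape(-1, 1), y)
r2 = rf.score(x.reshape(-1, 1), y)
print('train R^2:', r2)
```

The forest captures the exponential decay with no feature engineering, which is the property motivating its use here.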
End of explanation
"""
# visualization of the Payment_type
visualize_categories(data1,'Payment_type','histogram',[13,20])
"""
Explanation: The above plot compares the means and ranges of Tip_percentage between trips originating from Manhattan and the rest of the trips. The reported t-test indicates that these groups have different means. Furthermore, a chi-square test shows that this variable can be used to significantly distinguish transactions with tips from those without.
Another interesting figure is that of Payment_type.
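The chi-square test mentioned above can be sketched with `scipy.stats.chi2_contingency`. The contingency table below is invented for illustration, but it mimics the extreme Payment_type imbalance described in the next section (almost all tipped rides are credit-card payments):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: payment type (credit card, cash); columns: (no tip, tip) counts
table = np.array([[1000, 9000],
                  [5000,    5]])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)
```

A near-zero p-value rejects independence between the categorical variable and the tip/no-tip outcome, which is the criterion used to keep a variable for the classifier.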
End of explanation
"""
# import scikit learn libraries
from sklearn import cross_validation, metrics #model optimization and valuation tools
from sklearn.grid_search import GridSearchCV #Perforing grid search
# define a function that help to train models and perform cv
def modelfit(alg,dtrain,predictors,target,scoring_method,performCV=True,printFeatureImportance=True,cv_folds=5):
"""
This function trains the model given as 'alg' by performing cross-validation. It works for both regression and classification
alg: sklearn model
dtrain: pandas.DataFrame, training set
predictors: list, labels to be used in the model training process. They should be in the column names of dtrain
target: str, target variable
scoring_method: str, method to be used by the cross-validation to evaluate the model
performCV: bool, whether to perform cross-validation
printFeatureImportance: bool, plot histogram of features importance or not
cv_folds: int, degree of cross-validation
"""
# train the algorithm on data
alg.fit(dtrain[predictors],dtrain[target])
#predict on train set:
dtrain_predictions = alg.predict(dtrain[predictors])
if scoring_method == 'roc_auc':
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
#perform cross-validation
if performCV:
cv_score = cross_validation.cross_val_score(alg,dtrain[predictors],dtrain[target],cv=cv_folds,scoring=scoring_method)
#print model report
print "\nModel report:"
if scoring_method == 'roc_auc':
print "Accuracy:",metrics.accuracy_score(dtrain[target].values,dtrain_predictions)
print "AUC Score (Train):",metrics.roc_auc_score(dtrain[target], dtrain_predprob)
if (scoring_method == 'mean_squared_error'):
print "Accuracy:",metrics.mean_squared_error(dtrain[target].values,dtrain_predictions)
if performCV:
print "CV Score - Mean : %.7g | Std : %.7g | Min : %.7g | Max : %.7g" % (np.mean(cv_score),np.std(cv_score),np.min(cv_score),np.max(cv_score))
#print feature importance
if printFeatureImportance:
if dir(alg)[0] == '_Booster': #runs only if alg is xgboost
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
else:
feat_imp = pd.Series(alg.feature_importances_,predictors).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
plt.show()
# optimize n_estimator through grid search
def optimize_num_trees(alg,param_test,scoring_method,train,predictors,target):
"""
This function is used to tune the parameters of a predictive algorithm
alg: sklearn model,
param_test: dict, parameters to be tuned
scoring_method: str, method to be used by the cross-validation to evaluate the model
train: pandas.DataFrame, training data
predictors: list, labels to be used in the model training process. They should be in the column names of dtrain
target: str, target variable
"""
gsearch = GridSearchCV(estimator=alg, param_grid = param_test, scoring=scoring_method,n_jobs=2,iid=False,cv=5)
gsearch.fit(train[predictors],train[target])
return gsearch
# plot optimization results
def plot_opt_results(alg):
cv_results = []
for i in range(len(param_test['n_estimators'])):
cv_results.append((alg.grid_scores_[i][1],alg.grid_scores_[i][0]['n_estimators']))
cv_results = pd.DataFrame(cv_results)
plt.plot(cv_results[1],cv_results[0])
plt.xlabel('# trees')
plt.ylabel('score')
plt.title('optimization report')
"""
Explanation: This distribution shows that 99.99% of transactions with tips were paid by credit card (method 1). This variable is not a good candidate for the regression model because of this imbalance, but it is potentially an important feature for the classification model. An intuitive rule would be: if a rider is not paying with a credit card, there will be no tip.
Similar analyses were carried out on every variable in order to find the most important variables, with enough variance, for the regression and/or classification models. This visual exploration and statistical testing section concluded by selecting Total_amount, Fare_amount, Trip_distance, Tolls_amount, Trip_duration, Speed_mph, U_manhattan, Direction_NS and Direction_EW as initial features to train and optimize the regression model. Payment_type, Passenger_count, Extra, Week_day, Hour, Direction_NS, Direction_EW, U_manhattan and Shift_type were selected as initial variables to train the classification model.
5 Building the Model
As explained in the previous section, this model is a combination of rules from two models: (1) a classification model to classify a transaction as a tipper (=1) or not (=0), and (2) a regression model to estimate the tip percentage given that the classification result was 1.
First of all, functions for cross-validation and parameter optimization were defined so that they can be used with either a classification or a regression algorithm.
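The two-stage combination can be sketched with plain NumPy: the classifier output acts as a mask on the regression estimate. The arrays below are hypothetical stand-ins for fitted model outputs (the actual rule is implemented later in the `predict_tip` function):

```python
import numpy as np

# Hypothetical outputs of the two fitted models on five transactions
cls_pred = np.array([1, 0, 1, 1, 0])                  # 1 = transaction with tip
reg_pred = np.array([18.2, 15.0, 21.4, 19.9, 17.3])   # tip % if tipped

# Final prediction: the regression estimate where the classifier says
# "tip", and zero otherwise.
final = cls_pred * reg_pred
print(final)
```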
End of explanation
"""
## OPTIMIZATION & TRAINING OF THE CLASSIFIER
from sklearn.ensemble import GradientBoostingClassifier
print "Optimizing the classifier..."
train = data.copy() # make a copy of the training set
# since the dataset is too big for my system, select a small sample size to carry on training and 5 folds cross validation
train = train.loc[np.random.choice(train.index,size=100000,replace=False)]
target = 'With_tip' # set target variable - it will be used later in optimization
tic = dt.datetime.now() # initiate the timing
# for predictors start with candidates identified during the EDA
predictors = ['Payment_type','Total_amount','Trip_duration','Speed_mph','MTA_tax',
'Extra','Hour','Direction_NS', 'Direction_EW','U_manhattan']
# optimize n_estimator through grid search
param_test = {'n_estimators':range(30,151,20)} # define range over which number of trees is to be optimized
# initiate classification model
model_cls = GradientBoostingClassifier(
learning_rate=0.1, # use default
min_samples_split=2,# use default
max_depth=5,
max_features='auto',
subsample=0.8, # try <1 to decrease variance and increase bias
random_state = 10)
# get results of the search grid
gs_cls = optimize_num_trees(model_cls,param_test,'roc_auc',train,predictors,target)
print gs_cls.grid_scores_, gs_cls.best_params_, gs_cls.best_score_
# cross validate the best model with optimized number of estimators
modelfit(gs_cls.best_estimator_,train,predictors,target,'roc_auc')
# save the best estimator on disk as pickle for a later use
with open('my_classifier.pkl','wb') as fid:
pickle.dump(gs_cls.best_estimator_,fid)
fid.close()
print "Processing time:", dt.datetime.now()-tic
"""
Explanation: 5.1. Classification Model
After spending time on feature exploration and engineering, and discovering that Payment_type was a strong variable (99.99% of all transactions with tips were paid by credit card) for differentiating transactions with tips from those without, a model based on the logistic regression classifier was optimized and gave an accuracy score of 0.94. However, this was outperformed by the GradientBoostingClassifier (from scikit-learn), which gave a score of 0.96. Starting with the GradientBoostingClassifier model (default parameters), the number of trees was optimized through a grid search (see the function 'optimize_num_trees').
-- Key points --:
Sample size for training and optimization was chosen as 100000. This is surely a small sample size compared to the available data, but the optimization was stable and good enough with 5-fold cross-validation
Only the number of trees was optimized, as it is the key control of boosting model accuracy. Other parameters were not optimized since the improvement yield was too small compared to the computation time and cost
ROC-AUC (Area under the curve of receiver operating characteristic) was used as a model validation metric
-- Results --:
optimized number of trees: 130
optimized variables: ['Payment_type','Total_amount','Trip_duration','Speed_mph','MTA_tax','Extra','Hour','Direction_NS', 'Direction_EW','U_manhattan']
roc-auc on a different test sample: 0.9636
The following code shows the optimization process
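Note that the `sklearn.grid_search` and `sklearn.cross_validation` modules imported earlier were removed in scikit-learn 0.20. Under a current scikit-learn the same search can be sketched with `sklearn.model_selection` (synthetic data; an illustration rather than the notebook's actual run):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(4)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)

# Grid-search the number of trees, scoring with ROC-AUC as above
param_test = {'n_estimators': [10, 30]}
gs = GridSearchCV(GradientBoostingClassifier(random_state=10),
                  param_grid=param_test, scoring='roc_auc', cv=3)
gs.fit(X, y)
print(gs.best_params_, gs.best_score_)
```

`gs.best_estimator_` then plays the same role as in the notebook's `optimize_num_trees` helper.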
End of explanation
"""
# testing on a different set
indices = data.index[~data.index.isin(train.index)]
test = data.loc[np.random.choice(indices,size=100000,replace=False)]
ypred = gs_cls.best_estimator_.predict(test[predictors])
print "ROC AUC:", metrics.roc_auc_score(test.With_tip, ypred)  # roc_auc_score expects (y_true, y_score)
"""
Explanation: The output shows that the optimum number of trees is 130 and that the important variables for this specific number of trees are as shown on the bar chart.
Let's test it on a different sample with the following code
End of explanation
"""
train = data1.copy()
train = train.loc[np.random.choice(train.index,size=100000,replace=False)]
indices = data1.index[~data1.index.isin(train.index)]
test = data1.loc[np.random.choice(indices,size=100000,replace=False)]
train['ID'] = train.index
IDCol = 'ID'
target = 'Tip_percentage'
# candidate predictor sets tried during optimization; only the last assignment is used
#predictors = ['VendorID', 'Passenger_count', 'Trip_distance', 'Total_amount',
#              'Extra', 'MTA_tax', 'Tolls_amount', 'Payment_type',
#              'Hour', 'U_manhattan', 'Week', 'Week_day', 'Month_day', 'Shift_type',
#              'Direction_NS', 'Direction_EW', 'Trip_duration', 'Speed_mph']
#predictors = ['Trip_distance','Tolls_amount', 'Direction_NS', 'Direction_EW', 'Trip_duration', 'Speed_mph']
predictors = ['Total_amount', 'Trip_duration', 'Speed_mph']
# Random Forest
tic = dt.datetime.now()
from sklearn.ensemble import RandomForestRegressor
# optimize n_estimator through grid search
param_test = {'n_estimators':range(50,200,25)} # define range over which number of trees is to be optimized
# initiate classification model
#rfr = RandomForestRegressor(min_samples_split=2,max_depth=5,max_features='auto',random_state = 10)
rfr = RandomForestRegressor()#n_estimators=100)
# get results of the search grid
gs_rfr = optimize_num_trees(rfr,param_test,'mean_squared_error',train,predictors,target)
# print optimization results
print gs_rfr.grid_scores_, gs_rfr.best_params_, gs_rfr.best_score_
# plot optimization results
#plot_opt_results(gs_rfr)
# cross validate the best model with optimized number of estimators
modelfit(gs_rfr.best_estimator_,train,predictors,target,'mean_squared_error')
# save the best estimator on disk as pickle for a later use
with open('my_rfr_reg2.pkl','wb') as fid:
pickle.dump(gs_rfr.best_estimator_,fid)
fid.close()
ypred = gs_rfr.best_estimator_.predict(test[predictors])
print 'RFR test mse:',metrics.mean_squared_error(test.Tip_percentage, ypred)
print 'RFR r2:', metrics.r2_score(test.Tip_percentage, ypred)  # r2_score expects (y_true, y_pred)
print dt.datetime.now()-tic
plot_opt_results(gs_rfr)
"""
Explanation: 5.2 Regression Model
Following a similar pipeline of optimization as in the classification model, a model was built on top of the random forest algorithm.
-- Key points --:
- Sample size for training and optimization was chosen as 100000 with 5 folds cross-validation
- Only the number of trees were optimized as they are the controlling key of boosting model accuracy. Other parameters were not optimized since the improvement yield was too small compared to the computation time and cost
- The mean square error was used as a valuation metric
-- Results --:
optimized number of trees: 150
optimized variables: Total_amount, Trip_duration, Speed_mph
mean square error on a different test sample: 14.3648
The following code shows the optimization process
End of explanation
"""
def predict_tip(transaction):
"""
This function predicts the percentage tip expected on 1 transaction
transaction: pandas.dataframe, this should have been cleaned first and feature engineered
"""
# define predictors labels as per optimization results
cls_predictors = ['Payment_type','Total_amount','Trip_duration','Speed_mph','MTA_tax',
'Extra','Hour','Direction_NS', 'Direction_EW','U_manhattan']
reg_predictors = ['Total_amount', 'Trip_duration', 'Speed_mph']
# classify transactions
clas = gs_cls.best_estimator_.predict(transaction[cls_predictors])
# predict tips for those transactions classified as 1
return clas*gs_rfr.best_estimator_.predict(transaction[reg_predictors])
"""
Explanation: The output shows that the optimum number of trees is 150 and that the important variables for this specific number of trees are as shown on the bar chart.
5.3 Final Model
This combines the classification model and the regression model to obtain the final predictions. The model was run on the entire dataset to predict expected tip percentages, resulting in a mean squared error of 0.8793.
The process is as follows:
1. get transaction to predict
2. classify the transaction into zero and non-zero tip
3. if the transaction is classified as non-zero, predict the tip percentage otherwise return 0
Next, I define a function that is to be used to make final predictions
End of explanation
"""
test = data.loc[np.random.choice(data.index,size = 100000,replace=False)]
ypred = predict_tip(test)
print "final mean_squared_error:", metrics.mean_squared_error(test.Tip_percentage, ypred)
print "final r2_score:", metrics.r2_score(test.Tip_percentage, ypred)  # r2_score expects (y_true, y_pred)
"""
Explanation: Make predictions on a sample of 100000 transactions
End of explanation
"""
df = test.copy() # make a copy of data
df['predictions'] = ypred # add predictions column
df['residuals'] = df.Tip_percentage - df.predictions # calculate residuals
df.residuals.hist(bins = 20) # plot histogram of residuals
plt.yscale('log')
plt.xlabel('predicted - real')
plt.ylabel('count')
plt.title('Residual plot')
plt.show()
"""
Explanation: The results are pretty good for a black box model.
Finally, I plot the residuals.
End of explanation
"""
def read_me():
"""
This is a function to print a read me instruction
"""
    print ("=========Introduction=========\n\nUse this code to predict the percentage tip expected after a trip in a NYC green taxi. \nThe code is a predictive model that was built and trained on top of the Gradient Boosting Classifier and the Random Forest Regressor, both provided in scikit-learn\n\nThe input: \npandas.dataframe with columns. This should be in the same format as downloaded from the website\n\nThe data frame goes through the following pipeline:\n\t1. Cleaning\n\t2. Creation of derived variables\n\t3. Making predictions\n\nThe output:\n\tpandas.Series, two files are saved on disk, submission.csv and cleaned_data.csv respectively.\n\nTo make predictions, run 'tip_predictor.make_predictions(data)', where data is any 2015 raw dataframe fresh from http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml\nRun tip_predictor.read_me() for further instructions\n")
read_me()
"""
Explanation: The residuals are roughly symmetrically distributed, which indicates that the model is not systematically biased. The ideal model would have residuals with zero mean and zero variance.
Recommendations
As future work, I would look for a transformation function that linearizes the independent variables. I would also optimize other algorithms, such as extreme gradient boosting, and combine multiple models into a final ensemble. This was actually tried, but abandoned because of limited computational power.
The following section gives instructions on how to use the predictor script
End of explanation
"""
print "mean speed by week:\n", data[['Speed_mph','Week']].groupby('Week').mean()
# generate boxplot
data.boxplot('Speed_mph','Week')
plt.ylim([0,20]) # cut off outliers
plt.ylabel('Speed (mph)')
plt.show()
# calculate t-test
weeks = pd.unique(data.Week)
pvals = []
for i in range(len(weeks)): # for each pair, run t-test
for j in range(len(weeks)):
pvals.append((weeks[i], weeks[j],ttest_ind(data[data.Week==weeks[i]].Speed_mph,data[data.Week==weeks[j]].Speed_mph,False)[1]))
pvalues = pd.DataFrame(pvals,columns=['w1','w2','pval'])
print "p-values:\n",pvalues.pivot_table(index='w1',columns='w2',values='pval').T
"""
Explanation: This dataset is very rich in information and can be used to learn about other aspects of traffic in NYC. For instance, here is a small preview of an upcoming analysis of speed.
Let's build a derived variable representing the average speed over the course of a trip.
I perform a test to determine whether the average speeds are materially the same in all weeks of September, using pairwise Student's t-tests, whose null hypothesis is that the means of the two tested samples are equal. From the tests, we do not have enough evidence that the speeds in week 2 and week 3 differ, so we fail to reject the null hypothesis at the 95% confidence level for that pair. The remaining pairs have smaller p-values, so we reject the null hypothesis and conclude that their mean speeds are significantly different. In general, speed can depend on the week of the month; it would be interesting to look at the August and October data as well.
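The pairwise loop can be sketched in a self-contained form on synthetic weekly speed samples (Welch's test via `equal_var=False`, matching the `False` flag passed to `ttest_ind` in the code above):

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.RandomState(5)
# Synthetic weekly speed samples; weeks 2 and 3 share the same mean
speeds = {1: rng.normal(13.0, 3, 1000), 2: rng.normal(12.5, 3, 1000),
          3: rng.normal(12.5, 3, 1000), 4: rng.normal(14.5, 3, 1000)}

pvals = [(w1, w2, ttest_ind(speeds[w1], speeds[w2], equal_var=False)[1])
         for w1 in speeds for w2 in speeds]
pmat = (pd.DataFrame(pvals, columns=['w1', 'w2', 'pval'])
          .pivot_table(index='w1', columns='w2', values='pval'))
print(pmat.round(3))
```

The diagonal carries p-values of 1 (a sample against itself), and only the week 2 versus week 3 cell keeps a large off-diagonal p-value, mirroring the conclusion above.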
End of explanation
"""
# calculate anova
hours = range(24)
# f_oneway accepts any number of sample arrays, so unpack the per-hour
# groups directly instead of building a command string for eval()
samples = [data[data.Hour == h].Speed_mph for h in hours]
print "one way anova test:", f_oneway(*samples)
# boxplot
data.boxplot('Speed_mph','Hour')
plt.ylim([5,24]) # cut off outliers
plt.ylabel('Speed (mph)')
plt.show()
"""
Explanation: - Another interesting question is how the speed changes over the course of the day. In this case I use a one-way ANOVA test on multiple samples. We find that the speed differs between hours, with an essentially zero p-value from the ANOVA test. The boxplot reveals that traffic is faster in the early morning and gets really slow in the evening.
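`f_oneway` accepts any number of per-group samples via argument unpacking; here is a self-contained sketch on synthetic hourly speeds (hypothetical numbers standing in for the `Speed_mph` column):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.RandomState(6)
# Synthetic speeds: faster in the early morning, slower later in the day
samples = [rng.normal(20 - 0.3 * h, 2, 200) for h in range(24)]

f_stat, p_val = f_oneway(*samples)  # one sample array per hour
print(f_stat, p_val)
```

With a genuine hour-dependent mean, the F statistic is large and the p-value is essentially zero, as reported above.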
End of explanation
"""
hpparvi/PyTransit | notebooks/roadrunner/roadrunner_model_example_1.ipynb | gpl-2.0
%pylab inline
rc('figure', figsize=(13,5))
def plot_lc(time, flux, c=None, ylim=(0.9865, 1.0025), ax=None):
if ax is None:
fig, ax = subplots()
else:
fig = None  # drawing on an existing axes; nothing to lay out
ax.plot(time, flux, c=c)
ax.autoscale(axis='x', tight=True)
setp(ax, xlabel='Time [d]', ylabel='Flux', xlim=time[[0,-1]], ylim=ylim)
if fig is not None:
fig.tight_layout()
return ax
"""
Explanation: RoadRunner transit model example I - basics
Author: Hannu Parviainen<br>
Last modified: 16.9.2020
The RoadRunner transit model (Parviainen, submitted 2020) implemented by pytransit.RoadRunnerModel is a fast transit model that allows for any radially symmetric function to be used to model stellar limb darkening. The model offers flexibility with performance that is similar or superior to the analytical quadratic model by Mandel & Agol (2002) implemented by pytransit.QuadraticModel.
The model follows the standard PyTransit API. The limb darkening model is given in the initialisation, and can be either the name of a set of built-in standard analytical limb darkening models
constant, linear, quadratic, nonlinear, general, power-2, and power-2-pm,
an instance of pytransit.LDTkModel, a Python callable that takes an array of $\mu$ values and a parameter vector, or a tuple with two callables where the first is the limb darkening model and the second a function returning the stellar surface brightness integrated over the stellar disk.
I demonstrate the use of custom limb darkening models and the LDTk-based limb darkening model (pytransit.LDTkModel) in the next notebooks, and here show basic examples of the RoadRunner model use with the named limb darkening models.
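As a sketch of the callable interface, a quadratic limb darkening law can be written as a plain function of an array of μ values and a parameter vector. This illustrates the expected signature only and is not PyTransit code; the exact contract (including the optional disk-integral companion function) is covered in the next notebooks:

```python
import numpy as np

def ld_quadratic(mu, pv):
    """Quadratic limb darkening law I(mu)/I(1) with pv = [u1, u2]."""
    u1, u2 = pv
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

mu = np.linspace(0, 1, 5)
intensity = ld_quadratic(mu, [0.36, 0.12])
print(intensity)
```

The intensity is normalised to one at disk centre (μ = 1) and drops toward the limb (μ = 0), which is the behaviour any radially symmetric law passed to the model should have.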
End of explanation
"""
from pytransit import RoadRunnerModel
"""
Explanation: Import the model
End of explanation
"""
time = linspace(-0.05, 0.05, 1500)
"""
Explanation: Example 1: simple light curve
We begin with a simple light curve without any fancy stuff such as multipassband modeling. First, we create a time array centred around zero
End of explanation
"""
tm = RoadRunnerModel('nonlinear')
tm.set_data(time)
"""
Explanation: Next, we initialise and set up a RoadRunnerModel choosing to use the four-parameter nonlinear limb darkening model and giving it the mid-exposure time array
End of explanation
"""
flux1 = tm.evaluate(k=0.1, ldc=[0.36, 0.04, 0.1, 0.05], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1);
"""
Explanation: Evaluation for scalar parameters
After the transit model has been initialised and the data set, we can evaluate the model for a given radius ratio (k), limb darkening coefficients (ldc), zero epoch (t0), orbital period (p), scaled semi-major axis ($a/R_\star$, a), orbital inclination (i), eccentricity (e), and argument of periastron (w). Eccentricity and argument of periastron are optional and default to zero if not given.
The tm.evaluate method returns a 1D array with shape (npt) with the transit model evaluated for each mid-exposure time given in the time array.
Note: The first tm.set_data and tm.evaluate evaluation takes a significantly longer time than the succeeding calls to these methods. This is because most of the PyTransit routines are accelerated with numba, and numba takes some time compiling all the required methods.
End of explanation
"""
npv = 5
ks = normal(0.10, 0.002, (npv, 1))
t0s = normal(0, 0.001, npv)
ps = normal(1.0, 0.01, npv)
smas = normal(4.2, 0.1, npv)
incs = uniform(0.48*pi, 0.5*pi, npv)
es = uniform(0, 0.25, size=npv)
os = uniform(0, 2*pi, size=npv)
ldc = uniform(0, 0.2, size=(npv,1,4))
flux2 = tm.evaluate(ks, ldc, t0s, ps, smas, incs, es, os)
plot_lc(time, flux2.T);
"""
Explanation: Evaluation for a set of parameters
Like the rest of the PyTransit transit models, the RoadRunner model can be evaluated simultaneously for a set of parameters. This is also done using tm.evaluate, but now each argument is a vector with npv values. Model evaluation is parallelised and can be significantly faster than looping over a parameter array in Python.
Now, tm.evaluate returns a 2D array with shape [npv, npt], with the transit model evaluated for each parameter vector and each mid-exposure time given in the time array
End of explanation
"""
tm = RoadRunnerModel('nonlinear')
tm.set_data(time, exptimes=0.02, nsamples=10)
flux3 = tm.evaluate(k=0.1, ldc=[0.36, 0.04, 0.1, 0.05], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1, c='0.75')
plot_lc(time, flux3, ax=ax);
"""
Explanation: Supersampling
A single photometry observation is always an exposure over time. If the exposure time is short compared to the changes in the transit signal shape during the exposure, the observation can be modelled by evaluating the model at the mid-exposure time. However, if the exposure time is long, we need to simulate the integration by calculating the model average over the exposure time (although numerical integration is also a valid approach, it is slightly more demanding computationally and doesn't improve the accuracy significantly). This is achieved by supersampling the model, that is, evaluating the model at several locations inside the exposure and averaging the samples.
Evaluating the model many times for each observation naturally increases the computational burden of the model, but is necessary to model long-cadence observations from the Kepler and TESS telescopes.
All the transit models in PyTransit support supersampling.
GPU computing: supersampling increases the computational burden of a single observation, which also increases the advantage of using a GPU version of the transit model over a CPU version.
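The averaging itself is simple to sketch: sample the model at `nsamples` points spread across each exposure and take the mean. `toy_flux` below is a hypothetical stand-in for a transit model; only the supersampling bookkeeping is the point:

```python
import numpy as np

def toy_flux(t):
    # Hypothetical sharp flux feature standing in for a transit model
    return 1.0 - 0.01 * np.exp(-(t / 0.005) ** 2)

exptime, nsamples = 0.02, 10
t_mid = np.linspace(-0.05, 0.05, 101)

# Sample offsets centred on each mid-exposure time
offsets = (np.arange(nsamples) + 0.5) / nsamples * exptime - 0.5 * exptime
supersampled = toy_flux(t_mid[:, None] + offsets[None, :]).mean(axis=1)

# The long exposure smears the feature, so its minimum is shallower
print(supersampled.min(), toy_flux(t_mid).min())
```

This smearing is exactly what the plot above shows when comparing the supersampled light curve to the instantaneously evaluated one.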
End of explanation
"""
lcids1 = zeros(time.size, int)
lcids1[time.size//2:] = 1
plot_lc(time, lcids1, ylim=(-0.5, 1.5));
"""
Explanation: Example 2: heterogeneous light curve
Multiple passbands
PyTransit aims to simplify the modelling of heterogeneous light curves as much as possible. Here, heterogeneous means that we can model light curves observed in different passbands, with different instruments, and with different supersampling requirements in one go. This is because most real exoplanet transit modelling nowadays involves heterogeneous datasets, such as modelling long-cadence Kepler light curves together with short-cadence ground-based observations, or transmission spectroscopy where the light curves are created from a spectroscopic time series.
To model heterogeneous light curves, PyTransit designates each observation (exposure, datapoint) to a specific light curve, and each light curve to a specific passband. This is done through the light curve index array (lcids) and the passband index array (pbids). The light curve index array is an integer array giving an index for each observed datapoint (for example, the indices for a dataset of two light curves would be either 0 or 1), while the passband index array is an integer array containing a passband index for each light curve in the dataset. So, a dataset of two light curves observed in the same passband would be
times = [0, 1, 2, 3]
lcids = [0, 0, 1, 1]
pbids = [0, 0]
while a dataset containing two light curves observed in different passbands would be
times = [0, 1, 2, 3]
lcids = [0, 0, 1, 1]
pbids = [0, 1]
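The index bookkeeping can be sketched directly in NumPy (a standalone illustration mirroring the two-passband example above):

```python
import numpy as np

# Two light curves of 4 and 3 points, observed in different passbands
times = np.array([0.0, 1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
lcids = np.concatenate([np.zeros(4, int), np.ones(3, int)])
pbids = [0, 1]  # light curve 0 -> passband 0, light curve 1 -> passband 1

# The passband of each exposure follows from its light curve index
pb_per_point = np.asarray(pbids)[lcids]
print(lcids, pb_per_point)
```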
Let's create two datasets. The first one divides our single light curve into two halves and gives each a different light curve index (0 for the first half and 1 for the second)
End of explanation
"""
time2 = tile(time, 3)
lcids2 = repeat([0, 1, 1], time.size)
ax = plot_lc(arange(time2.size), lcids2, ylim=(-0.5, 1.5))
[ax.axvline(i*time.size, c='k', ls='--') for i in range(1,3)];
"""
Explanation: The second dataset considers a more realistic scenario where we have three separate transits observed in two passbands. We create this by tiling our time array three times.
End of explanation
"""
tm = RoadRunnerModel('power-2')
tm.set_data(time, lcids=lcids1, pbids=[0, 1])
flux = tm.evaluate(k=0.1, ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(time, flux);
tm.set_data(time2, lcids=lcids2, pbids=[0, 1])
flux = tm.evaluate(k=0.1, ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(arange(flux.size), flux);
"""
Explanation: Achromatic radius ratio
Let's see how this works in practice. We divide our current light curve into two halves observed in different passbands. These passbands have different limb darkening, but we first assume that the radius ratio is achromatic.
End of explanation
"""
tm.set_data(time, lcids=lcids1, pbids=[0, 1])
flux = tm.evaluate(k=[0.105, 0.08], ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(time, flux);
tm.set_data(time2, lcids=lcids2, pbids=[0, 1])
flux = tm.evaluate(k=[0.105, 0.08], ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(arange(flux.size), flux);
"""
Explanation: Chromatic radius ratio
Next, we assume that the radius ratio is chromatic, that is, it depends on the passband. This is achieved by giving the model an array of radius ratios (where the number should equal to the number of passbands) instead of giving it a scalar radius ratio.
End of explanation
"""
tm.set_data(time, lcids=lcids1, exptimes=[0.0, 0.02], nsamples=[1, 10])
flux = tm.evaluate(k=0.105, ldc=[3.1, 0.1], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(time, flux);
tm.set_data(time2, lcids=lcids2, exptimes=[0.0, 0.02], nsamples=[1, 10])
flux = tm.evaluate(k=0.105, ldc=[3.1, 0.1], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(arange(flux.size), flux);
"""
Explanation: Different supersampling rates
Next, let's set different supersampling rates for the two light curves. There's no reason why we couldn't also let them have different passbands, but it's better to keep things simple at this stage.
End of explanation
"""
tm = RoadRunnerModel('quadratic-tri')
time3 = tile(time, 3)
lcids3 = repeat([0, 1, 2], time.size)
tm.set_data(time3, lcids=lcids3, pbids=[0, 1, 2], exptimes=[0.0, 0.02, 0.0], nsamples=[1, 10, 1])
npv = 5
ks = uniform(0.09, 0.1, (npv, 3))
t0s = normal(0, 0.002, npv)
ps = normal(1.0, 0.01, npv)
smas = normal(5.0, 0.1, npv)
incs = uniform(0.48*pi, 0.5*pi, npv)
es = uniform(0, 0.25, size=npv)
os = uniform(0, 2*pi, size=npv)
ldc = uniform(0, 0.5, size=(npv,3,2))
flux = tm.evaluate(k=ks, ldc=ldc, t0=t0s, p=ps, a=smas, i=incs, e=es, w=os)
plot_lc(arange(flux.shape[1]), flux.T + linspace(0, 0.06, npv), ylim=(0.988, 1.065));
"""
Explanation: Everything together
Finally, let's throw everything together and create a set of light curves observed in different passbands, requiring different supersampling rates, assuming chromatic radius ratios, for a set of parameter vectors.
End of explanation
"""
|
hannorein/rebound | ipython_examples/RadialVelocity.ipynb | gpl-3.0 | import rebound
import emcee # pip install emcee
import corner # pip install corner
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Fitting Radial Velocity Data
This example shows how to fit a dynamical model of a star and two planets to a set of radial velocity observations using the N-body integrator REBOUND and MCMC sampler emcee.
First, let's import the REBOUND, emcee, numpy, corner, and matplotlib packages.
End of explanation
"""
sim = rebound.Simulation()
sim.units = ["msun", "m", "s"] # Units of solar mass, meters, and seconds
"""
Explanation: We start by creating some artificial radial velocity data. Naturally, we also use REBOUND for this.
End of explanation
"""
sim.add(m=1) #star
sim.add(m=1e-3, P=21.0*60*60*24, h=0.1, k=0.05)
sim.add(m=1e-3, P=30.0*60*60*24)
sim.move_to_com()
"""
Explanation: We add a star and two Jupiter mass planets on 21 and 30 day orbits. The inner planet has an eccentricity of 0.1 (here expressed in terms of h and k variables).
End of explanation
"""
N=30
times = np.sort(50*60*60*24*np.random.random(N)) # 30 randomly spaced observations
RVs = np.zeros(N)
for i, t in enumerate(times):
sim.integrate(times[i])
RVs[i] = sim.particles[0].vx # radial velocity of the host star
RVs += np.random.normal(size=N, scale=20) # add 20m/s Gaussian noise
"""
Explanation: We simulate 30 randomly spaced observations over a 50 day interval and add some noise along the way. We assume the line of sight is along the x direction.
End of explanation
"""
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time [days]")
ax.set_ylabel("radial velocity [m/s]")
ax.errorbar(times/(24*60*60), RVs, yerr=20, fmt="o");
"""
Explanation: This is what our artificial dataset looks like:
End of explanation
"""
def setup_sim(params):
P1, P2, l1, l2, h1, h2, k1, k2, m1, m2 = params # unpack
sim = rebound.Simulation()
sim.units = ["msun", "m", "s"]
sim.add(m=1)
sim.add(m=m1, P=P1*60*60*24, h=h1, k=k1, l=l1)
sim.add(m=m2, P=P2*60*60*24, h=h2, k=k2, l=l2)
sim.move_to_com()
return sim
def log_likelihood(params, times, RVs):
ll = 0. # We use the log likelihood to avoid numerical issues with very small/large numbers
    sigma = 20 # We assume the error bars are 20 m/s for all observations
sim = setup_sim(params)
for i, t in enumerate(times):
sim.integrate(times[i])
deltaRV = sim.particles[0].vx - RVs[i]
ll += -(deltaRV/sigma)**2
return ll
"""
Explanation: Next, we create a likelihood function. For simplicity, we assume a flat prior for all parameters. So effectively, our MCMC will sample the likelihood function and we will interpret this as our posterior. As part of an actual data reduction pipeline, you will probably have a more complicated model with physically motivated priors. At the very least, you should add some basic sanity checks to your prior. For example, the mass should never become negative.
To further simplify things a little, we restrict the planetary system to always be in the x-y plane. This means we have 5 free parameters per planet, 2 positions, 2 velocities, and 1 mass. We will run the MCMC in a coordinate system where we use the period $P$, the orbital phase in terms of the mean longitude $l$, and $h$ and $k$. We use $h$ and $k$ instead of the eccentricity and the argument of periastron to avoid a coordinate singularity in the case of $e=0$. We consider the mass of the host star as fixed.
End of explanation
"""
ndim, nwalkers = 5*2, 20
# P1, P2, l1, l2, h1, h2, k1, k2, m1, m2
ic = [20.0, 31.0, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 1e-3, 1e-3]
ic = np.tile(ic,(20,1)) # copy initial conditions for each walker
ic += 0.05*np.random.random((20,10))*ic # slightly perturb initial conditions
"""
Explanation: Next, we need to come up with some reasonable initial conditions. The closer we start to the correct solution, the faster the MCMC will converge. For this example, we'll start very close. Note that we should in principle also allow other parameters to vary. For example, the noise should be modelled self-consistently, rather than assuming Gaussian noise with a given strength. We should also allow for an arbitrary offset to the radial velocity in case the system is moving towards or away from us. We might also want to allow for a linear term in the radial velocity that can account for yet undetected perturbers further out.
End of explanation
"""
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_likelihood, args=[times, RVs])
state = sampler.run_mcmc(ic, 500)
"""
Explanation: Now we can finally run the MCMC for 500 iterations. We have 20 walkers, so this will generate 10000 samples. This may take a minute or two.
End of explanation
"""
fig, ax = plt.subplots(1,1)
ax.set_xlabel("iterations")
ax.set_ylabel("log probability")
ax.plot(sampler.flatlnprobability);
"""
Explanation: Let us check the convergence of the MCMC by plotting the log probability.
End of explanation
"""
corner.corner(sampler.flatchain[2500:],
labels = ["P1","P2","l1","l2","h1","h2","k1","k2","m1","m2"],
truths = [21,30,0,0,0.1,0,0,0,1e-3,1e-3]);
"""
Explanation: This plot gives us some confidence that we have converged. Let's make a corner plot, comparing the posterior samples to the true values which we used to set up our test system. We cut out the first quarter of the MCMC (the burn-in phase).
End of explanation
"""
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time [days]")
ax.set_ylabel("radial velocity [m/s]")
times_plot = np.linspace(0,times[-1],1000)
RVs_plot = np.zeros(len(times_plot))
Nplot = 20
indx = np.random.choice(7500, Nplot, replace=False)
for i in range(Nplot):
s = setup_sim(sampler.flatchain[2500+indx[i]]) # skipping burn-in
for j, t in enumerate(times_plot):
s.integrate(t)
RVs_plot[j] = s.particles[0].vx
ax.plot(times_plot/(24*60*60), RVs_plot, color="black", alpha=0.13)
ax.errorbar(times/(24*60*60), RVs, yerr=20, fmt="o");
"""
Explanation: That's a pretty good recovery of the correct parameters (but to be fair, we started pretty close to them). Let's draw a few random samples from the posterior and plot the corresponding RV curves so we can compare our model to our data.
End of explanation
"""
|
xMyrst/BigData | python/howto/010_Importar_Módulos.ipynb | gpl-3.0 | # Importamos solo la función array del modulo numpy
from numpy import array
a = array( [2,3,4] )
a
# Import the whole numpy module
import numpy
a = numpy.array( [2,3,4] )
a
# Import the numpy module and give it the alias 'np'
# From now on we refer to the module through that alias
import numpy as np
# Instead of writing numpy.array, we write np.array
a = np.array( [2,3,4] )
a
"""
Explanation: IMPORTING MODULES IN PYTHON
<BR/>
HOW TO IMPORT MODULES
Modules are programs that extend Python's functions and classes to perform specific tasks.
The index of Python modules can be found at https://docs.python.org/3/py-modindex.html.
For example, the os module lets us use many operating-system functions.
Modules can be loaded in the following ways:
from module import function
import module
import module as alias
Depending on how we imported a module, that determines how we have to use it.
End of explanation
"""
# Import the math module and list all its available operations with dir()
import math
dir(math)
# The 'math' module contains a function called 'pow'
# To access the documentation of math's pow function we write:
help(math.pow)
# Import the 'math' module as 'mt' and call the 'sin' function to compute the sine of 34
import math as mt
mt.sin(34)
# Access the 'pi' constant of the 'math' module
mt.pi
# Call the square-root function of the 'math' module
mt.sqrt(24)
help(mt.sqrt)
"""
Explanation: <br/>
AVAILABLE OPERATIONS IN A LIBRARY
End of explanation
"""
!conda list
"""
Explanation: To list the modules that are installed, and that we can therefore import in our programs, simply run the following command:
End of explanation
"""
|
mdeff/ntds_2016 | toolkit/01_sol_acquisition_exploration.ipynb | mit | # Number of posts / tweets to retrieve.
# Small value for development, then increase to collect final data.
n = 4000 # 20
"""
Explanation: A Python Tour of Data Science: Data Acquisition & Exploration
Michaël Defferrard, PhD student, EPFL LTS2
1 Exercise: problem definition
Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.
As you probably don't have a company (yet?), you can either use your own social network profile as if it were the company's one or choose an established entity, e.g. EPFL. You will need to be registered in FB or Twitter to generate access tokens. If you're not, either ask a classmate to create a token for you or create a fake / temporary account for yourself (no need to follow other people, we can fetch public data).
At the end of the exercise, you should have two datasets (Facebook & Twitter) and have used them to answer the following questions, for both Facebook and Twitter.
1. How many followers / friends / likes does your chosen profile have?
2. How many posts / tweets in the last year?
3. What were the 5 most liked posts / tweets?
4. Plot histograms of the number of likes and comments / retweets.
5. Plot basic statistics and a histogram of text length.
6. Is there any correlation between the length of the text and the number of likes?
7. Be curious and explore your data. Did you find something interesting or surprising?
1. Create at least one interactive plot (with bokeh) to explore an intuition (e.g. does the posting time play a role).
2 Ressources
Here are some links you may find useful to complete that exercise.
Web APIs: these are the references.
* Facebook Graph API
* Twitter REST API
Tutorials:
* Mining the Social Web
* Mining Twitter data with Python
* Simple Python Facebook Scraper
3 Web scraping
Tasks:
1. Download the relevant information from Facebook and Twitter. Try to minimize the quantity of collected data to the minimum required to answer the questions.
2. Build two SQLite databases, one for Facebook and the other for Twitter, using pandas and SQLAlchemy.
1. For FB, each row is a post, and the columns are at least (you can include more if you want): the post id, the message (i.e. the text), the time when it was posted, the number of likes and the number of comments.
2. For Twitter, each row is a tweet, and the columns are at least: the tweet id, the text, the creation time, the number of likes (was called favorite before) and the number of retweets.
Note that some data cleaning is already necessary. E.g. there are some FB posts without a message, i.e. without text. Some tweets are also just retweets without any more information. Should they be collected?
End of explanation
"""
import configparser
# Read the confidential token.
credentials = configparser.ConfigParser()
credentials.read('credentials.ini')
token = credentials.get('facebook', 'token')
# Or token = 'YOUR-FB-ACCESS-TOKEN'
import requests # pip install requests
import facebook # pip install facebook-sdk
import pandas as pd
page = 'EPFL.ch'
"""
Explanation: 3.1 Facebook
There is two ways to scrape data from Facebook, you can choose one or combine them.
1. The low-level approach, sending HTTP requests and receiving JSON responses to / from their Graph API. That can be achieved with the json and requests packages (although you can use urllib or urllib2, requests has a better API). The knowledge you'll acquire using that method will be useful for querying web APIs other than FB. This method is also more flexible.
2. The high-level approach, using a Python SDK. The code you'll have to write for this method is gonna be shorter, but specific to the FB Graph API.
You will need an access token, which can be created with the help of the Graph Explorer. That tool may prove useful to test queries. Once you have your token, you may create a credentials.ini file with the following content:
[facebook]
token = YOUR-FB-ACCESS-TOKEN
End of explanation
"""
# 1. Form URL.
url = 'https://graph.facebook.com/{}?fields=likes&access_token={}'.format(page, token)
#print(url)
# 2. Get data.
data = requests.get(url).json()
print('data:', data)
# Optionally, check for errors. Most probably the session has expired.
if 'error' in data.keys():
raise Exception(data)
# 3. Extract data.
print('{} has {} likes'.format(page, data['likes']))
"""
Explanation: 3.1.1 Scrape with HTTP requests
3.1.1.1 Get the number of likes
The process involves three steps:
1. Assemble an URL to query. The documentation of the FB Graph API is useful there. You can click on the URL to let your browser make the query and return the result.
2. Send an HTTP GET request, receive the results and interpret it as JSON (because Facebook sends data in JSON).
3. Explore the received data and extract what interests us, here the number of likes. If we don't get what we want (or if we get too much), we can modify the query url. Note that the hierarchical JSON format is exposed as a dictionary.
End of explanation
"""
# 1. Form URL. You can click that url and see the returned JSON in your browser.
fields = 'id,created_time,message,likes.limit(0).summary(1),comments.limit(0).summary(1)'
url = 'https://graph.facebook.com/{}/posts?fields={}&access_token={}'.format(page, fields, token)
#print(url)
# Create the pandas DataFrame, a table which columns are post id, message, created time, #likes and #comments.
fb = pd.DataFrame(columns=['id', 'text', 'time', 'likes', 'comments'])
# The outer loop is to query FB multiple times, as FB sends at most 100 posts at a time.
while len(fb) < n:
# 2. Get the data from FB. At most 100 posts.
posts = requests.get(url).json()
# 3. Here we extract information for each of the received post.
for post in posts['data']:
# The information is stored in a dictionary.
serie = dict(id=post['id'], time=post['created_time'])
try:
serie['text'] = post['message']
except KeyError:
# Let's say we are not interested in posts without text.
continue
serie['likes'] = post['likes']['summary']['total_count']
serie['comments'] = post['comments']['summary']['total_count']
# Add the dictionary as a new line to our pandas DataFrame.
fb = fb.append(serie, ignore_index=True)
try:
# That URL is returned by FB to access the next 'page', i.e. the next 100 posts.
url = posts['paging']['next']
except KeyError:
# No more posts.
break
fb[:5]
"""
Explanation: 3.1.1.2 Get posts
The process is similar here, except that the query and extraction are more complicated (because we work with more data). As you may have found out, FB returns at most 100 posts at a time. To get more posts, they provide paging, which we use to request the next posts.
End of explanation
"""
g = facebook.GraphAPI(token, version='2.7')
# We limit to 10 because it's slow.
posts = g.get_connections(page, 'posts', limit=10)
if 'error' in posts.keys():
# Most probably the session has expired.
    raise Exception(posts)
for post in posts['data']:
pid = post['id']
try:
text = post['message']
except KeyError:
continue
time = post['created_time']
likes = g.get_connections(pid, 'likes', summary=True, limit=0)
nlikes = likes['summary']['total_count']
comments = g.get_connections(pid, 'comments', summary=True, limit=0)
ncomments = comments['summary']['total_count']
print('{:6d} {:6d} {} {}'.format(nlikes, ncomments, time, text[:50]))
"""
Explanation: 3.1.2 Scrape with Facebook SDK
That method is much slower because it must retrieve the comments and likes themselves, not only their number, for each post. The API is not expressive enough to do otherwise.
End of explanation
"""
import tweepy # pip install tweepy
auth = tweepy.OAuthHandler(credentials.get('twitter', 'consumer_key'), credentials.get('twitter', 'consumer_secret'))
auth.set_access_token(credentials.get('twitter', 'access_token'), credentials.get('twitter', 'access_secret'))
api = tweepy.API(auth)
user = 'EPFL_en'
followers = api.get_user(user).followers_count
print('{} has {} followers'.format(user, followers))
"""
Explanation: 3.2 Twitter
There exists a bunch of Python-based clients for Twitter. Tweepy is a popular choice.
You will need to create a Twitter app and copy the four tokens and secrets in the credentials.ini file:
[twitter]
consumer_key = YOUR-CONSUMER-KEY
consumer_secret = YOUR-CONSUMER-SECRET
access_token = YOUR-ACCESS-TOKEN
access_secret = YOUR-ACCESS-SECRET
End of explanation
"""
tw = pd.DataFrame(columns=['id', 'text', 'time', 'likes', 'shares'])
for tweet in tweepy.Cursor(api.user_timeline, screen_name=user).items(n):
serie = dict(id=tweet.id, text=tweet.text, time=tweet.created_at)
serie.update(dict(likes=tweet.favorite_count, shares=tweet.retweet_count))
tw = tw.append(serie, ignore_index=True)
"""
Explanation: The code is much simpler for Twitter than Facebook because Tweepy handles much of the dirty work, like paging.
End of explanation
"""
#fb.id = fb.id.astype(int)
fb.likes = fb.likes.astype(int)
fb.comments = fb.comments.astype(int)
tw.id = tw.id.astype(int)
tw.likes = tw.likes.astype(int)
tw.shares = tw.shares.astype(int)
from datetime import datetime
def convert_time(row):
return datetime.strptime(row['time'], '%Y-%m-%dT%H:%M:%S+0000')
fb['time'] = fb.apply(convert_time, axis=1)
from IPython.display import display
display(fb[:5])
display(tw[:5])
"""
Explanation: 4 Prepare and save data
To facilitate our analysis, we first prepare the data.
1. Convert floating point numbers to integers.
1. Convert Facebook post time from string to datetime.
That is not strictly necessary, but it'll allow us to, e.g., compare posting dates with standard comparison operators like > and <.
End of explanation
"""
import os
folder = os.path.join('..', 'data', 'social_media')
try:
os.makedirs(folder)
except FileExistsError:
pass
filename = os.path.join(folder, 'facebook.sqlite')
fb.to_sql('facebook', 'sqlite:///' + filename, if_exists='replace')
filename = os.path.join(folder, 'twitter.sqlite')
tw.to_sql('twitter', 'sqlite:///' + filename, if_exists='replace')
"""
Explanation: Now that we collected everything, let's save it in two SQLite databases.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
"""
Explanation: 5 Data analysis
Answer the questions using pandas, statsmodels, scipy.stats, bokeh.
End of explanation
"""
date = datetime(2016, 9, 4)
datestr = date.strftime('%Y-%m-%d')
print('Number of posts after {}: {}'.format(datestr, sum(fb.time > date)))
print('Number of tweets after {}: {}'.format(datestr, sum(tw.time > date)))
"""
Explanation: 5.1 Number of posts
End of explanation
"""
display(fb.sort_values(by='likes', ascending=False)[:5])
display(tw.sort_values(by='likes', ascending=False)[:5])
"""
Explanation: 5.2 Most liked
Looks like we're really into rankings!
End of explanation
"""
pd.concat([fb.describe(), tw.loc[:,'likes':'shares'].describe()], axis=1)
fig, axs = plt.subplots(1, 4, figsize=(15, 5))
fb.likes.plot(kind='box', ax=axs[0]);
fb.comments.plot(kind='box', ax=axs[1]);
tw.likes.plot(kind='box', ax=axs[2]);
tw.shares.plot(kind='box', ax=axs[3]);
fb.hist(bins=20, log=True, figsize=(15, 5));
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
tw.loc[:,'likes'].hist(bins=20, log=True, ax=axs[0]);
tw.loc[tw.shares < 200, 'shares'].hist(bins=20, log=True, ax=axs[1]);
"""
Explanation: 5.3 Engagement: likes, comments, shares
End of explanation
"""
def text_length(texts):
lengths = np.empty(len(texts), dtype=int)
for i, text in enumerate(texts):
lengths[i] = len(text)
plt.figure(figsize=(15, 5))
prop = lengths.min(), '{:.2f}'.format(lengths.mean()), lengths.max()
plt.title('min = {}, mean={}, max = {}'.format(*prop))
plt.hist(lengths, bins=20)
text_length(tw.text)
text_length(fb.text)
"""
Explanation: 5.4 Text length
There is a striking difference here:
1. On Twitter, almost all tweets reach the 140-character limit.
2. The distribution is more Gaussian on Facebook.
End of explanation
"""
fb.id.groupby(fb.time.dt.hour).count().plot(kind='bar', alpha=0.4, color='y', figsize=(15,5));
tw.id.groupby(tw.time.dt.hour).count().plot(kind='bar', alpha=0.4, color='g', figsize=(15,5));
"""
Explanation: 5.5 Posting time
We can clearly observe the office hours.
End of explanation
"""
fb.likes.groupby(fb.time.dt.hour).mean().plot(kind='bar', figsize=(15,5));
plt.figure()
tw.likes.groupby(tw.time.dt.hour).mean().plot(kind='bar', figsize=(15,5));
"""
Explanation: Let's see whether the time of posting influences the number of likes. Do you see a peak at 5am? Do you really think we should post at 5am? What's going on here?
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/gapic/custom/showcase_custom_image_classification_online.ipynb | apache-2.0 | import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex client library: Custom training image classification model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom image classification model for online prediction.
Dataset
The dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of the google-cloud-storage library as well.
End of explanation
"""
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each instance session and append it to the names of the resources created in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
"""
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex constants
Set up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops that are generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
"""
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
TensorFlow 1.15
gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest
XGBoost
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest
Scikit-learn
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest
For the latest list, see Pre-built containers for prediction
End of explanation
"""
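The image URIs listed above follow a regular pattern. As a minimal sketch (an illustrative helper, not part of any SDK), they can be composed programmatically:

```python
def container_uri(kind: str, framework: str, device: str, version: str) -> str:
    # kind: "training" or "prediction"; device: "cpu" or "gpu";
    # version uses a dash instead of a dot, e.g. "2-3" for TensorFlow 2.3.
    return f"gcr.io/cloud-aiplatform/{kind}/{framework}-{device}.{version}:latest"

print(container_uri("training", "tf", "gpu", "2-3"))
# gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
```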
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: one of [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
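The training restrictions above can be captured in a small checker (a minimal sketch; the unsupported combinations are taken from the list above, and the helper itself is illustrative, not an SDK call):

```python
VALID_VCPUS = {2, 4, 8, 16, 32, 64, 96}

def valid_train_machine(prefix: str, vcpus: int) -> bool:
    # Training does not support n1-standard-2 or n1-highcpu-2/4/8.
    if vcpus not in VALID_VCPUS:
        return False
    if prefix == "n1-standard" and vcpus == 2:
        return False
    if prefix == "n1-highcpu" and vcpus in (2, 4, 8):
        return False
    return True

assert valid_train_machine("n1-standard", 4)
assert not valid_train_machine("n1-highcpu", 8)
```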
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
"""
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
End of explanation
"""
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
"""
Explanation: Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
worker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
"""
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
"""
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
"""
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
"""
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the Python package specification:
- executor_image_uri: The Docker image that is configured for your custom training job.
- package_uris: A list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the Docker image.
- python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task -- note that it is not necessary to append the .py suffix.
- args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
"""
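The direct vs. indirect handoff can be sketched from the training script's point of view (a minimal sketch; the service sets AIP_MODEL_DIR only in the indirect case, which is simulated here):

```python
import os

def resolve_model_dir(cli_model_dir: str = "") -> str:
    # Direct: --model-dir was passed on the command line.
    # Indirect: the service exported AIP_MODEL_DIR into the container.
    return cli_model_dir or os.environ.get("AIP_MODEL_DIR", "")

os.environ["AIP_MODEL_DIR"] = "gs://my-bucket/job/model"  # simulated service env
print(resolve_model_dir("gs://my-bucket/explicit"))  # gs://my-bucket/explicit
print(resolve_model_dir())                           # gs://my-bucket/job/model
```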
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
"""
Explanation: Assemble a job specification
Now assemble the complete description for the custom job specification:
display_name: The human readable name you assign to this custom job.
job_spec: The specification for the custom job.
worker_pool_specs: The specification for the machine VM instances.
base_output_directory: This tells the service the Cloud Storage location where to save the model artifacts (when variable DIRECT = False). The service will then pass the location to the training script as the environment variable AIP_MODEL_DIR, and the path will be of the form: <output_uri_prefix>/model
End of explanation
"""
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
"""
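The path-to-module convention described above (slash becomes dot, .py suffix dropped) can be sketched as a tiny helper (illustrative only, not part of the SDK):

```python
def script_to_module(path: str) -> str:
    # "trainer/task.py" -> "trainer.task": the form python_module expects.
    assert path.endswith(".py"), "expected a Python source file"
    return path[: -len(".py")].replace("/", ".")

print(script_to_module("trainer/task.py"))  # trainer.task
```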
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from [0, 255] to [0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
"""
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail; it's just there for you to browse. In summary, the script:
Gets the directory in which to save the model artifacts from the command line (--model-dir), and if not specified, from the environment variable AIP_MODEL_DIR.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps.
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
"""
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
"""
Explanation: Train the model
Now start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter:
- custom_job: The specification for the custom job.
The helper function calls the job client service's create_custom_job method, with the following parameters:
- parent: The Vertex location path to Dataset, Model and Endpoint resources.
- custom_job: The specification for the custom job.
You will display a handful of the fields returned in the response object; the two of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for using in subsequent steps.
response.state: The current state of the custom training job.
End of explanation
"""
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
"""
Explanation: Now get the unique identifier for the custom job you created.
End of explanation
"""
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
"""
Explanation: Get information on a custom job
Next, use this helper function get_custom_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the custom job.
The helper function calls the job client service's get_custom_job method, with the following parameter:
name: The Vertex fully qualified identifier for the custom job.
If you recall, you got the Vertex fully qualified identifier for the custom job in the response.name field when you called the create_custom_job method, and saved the identifier in the variable job_id.
End of explanation
"""
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
"""
Explanation: Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual training time by subtracting the job's create_time from its update_time. For your model, you will need to know the location of the saved model, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.
End of explanation
"""
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
"""
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
"""
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
"""
Explanation: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why you loaded it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single-byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:<br/>
2. The labels are currently scalar (sparse). If you look back at the compile() step in the trainer/task.py script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
End of explanation
"""
model.evaluate(x_test, y_test)
"""
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
"""
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
"""
Explanation: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function indicating that you are using an EagerTensor, which is not supported.
Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
- io.decode_jpeg - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).
- image.convert_image_dtype - Changes integer pixel values to float 32 and rescales the pixel data to between 0 and 1.
- image.resize - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (m_call).
End of explanation
"""
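The base64 handoff described above can be sketched from the client side (a minimal sketch; the dict key must match the serving function's input name, which you retrieve in a later step -- the key and JPEG bytes used here are illustrative placeholders):

```python
import base64

jpeg_bytes = b"\xff\xd8\xff\xe0fake-jpeg-payload"  # placeholder, not a real image
b64 = base64.b64encode(jpeg_bytes).decode("utf-8")

# Values wrapped as {"b64": ...} are decoded back to raw bytes server-side
# before reaching the serving function, which receives them as a tf.string.
instance = {"bytes_inputs": {"b64": b64}}  # key name is an assumption here
assert base64.b64decode(instance["bytes_inputs"]["b64"]) == jpeg_bytes
```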
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
"""
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? When you send your data for prediction as an HTTP request packet, the image data is base64 encoded, but your TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
"""
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
"""
Explanation: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters:
display_name: A human readable name for the Endpoint service.
image_uri: The container image for the model deployment.
model_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.
The helper function calls the Model client service's method upload_model, which takes the following parameters:
parent: The Vertex location root path for Dataset, Model and Endpoint resources.
model: The specification for the Vertex Model resource instance.
Let's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:
display_name: A human readable name for the Model resource.
metadata_schema_uri: Since your model was built without a Vertex Dataset resource, you will leave this blank ('').
artifact_uri: The Cloud Storage path where the model is stored in SavedModel format.
container_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.
End of explanation
"""
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
"""
Explanation: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
This helper function calls the Vertex Model client service's method get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
End of explanation
"""
ENDPOINT_NAME = "cifar10_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
"""
Explanation: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
"""
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
"""
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 1
"""
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the endpoint, the fixed number of compute instances is provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
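As a sketch, these scaling fields map into the deployment request like this (the machine type shown is an assumed example, not a requirement):

```python
MIN_NODES = 1
MAX_NODES = 3  # auto scaling example: up to three replicas under load

dedicated_resources = {
    "min_replica_count": MIN_NODES,
    "max_replica_count": MAX_NODES,
    # Hypothetical CPU-only machine spec, for illustration only.
    "machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0},
}
```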
End of explanation
"""
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
"""
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed to production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. If the variable DEPLOY_GPU you set earlier is not None, a GPU is used; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary, and it can be a bit confusing at first. You can deploy more than one instance of your model to an endpoint, and then set what percentage of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but give it only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
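A tiny sanity check for a traffic_split dictionary might look like this (validate_traffic_split is a hypothetical helper, not part of the Vertex SDK):

```python
def validate_traffic_split(traffic_split):
    """Check that the traffic_split percentages add up to 100."""
    total = sum(traffic_split.values())
    if total != 100:
        raise ValueError(f"traffic_split sums to {total}, expected 100")
    return True


# Single model: all traffic goes to the model being deployed (key "0").
validate_traffic_split({"0": 100})
# Canary rollout: 10% to the new model, 90% to an existing deployed model.
validate_traffic_split({"0": 10, "1234567890": 90})
```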
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
"""
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
"""
Explanation: Make an online prediction request
Now make an online prediction request to your deployed model.
Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
"""
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
"""
Explanation: Prepare the request content
You are going to send the CIFAR10 image as a compressed JPG image, instead of the raw uncompressed bytes:
cv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image.
Denormalize the image data from [0,1) range back to [0,255).
Convert the 32-bit floating point values to 8-bit unsigned integers.
tf.io.read_file: Read the compressed JPG images back into memory as raw bytes.
base64.b64encode: Encode the raw bytes into a base 64 encoded string.
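The encoding step is lossless. A minimal round trip over stand-in bytes shows the idea:

```python
import base64

raw = b"\xff\xd8\xff\xe0"  # stand-in for compressed JPEG bytes
b64str = base64.b64encode(raw).decode("utf-8")

# Decoding the base64 string recovers the original bytes exactly.
assert base64.b64decode(b64str) == raw
```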
End of explanation
"""
def predict_image(image, endpoint, parameters_dict):
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: {"b64": image}}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters_dict
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_image(b64str, endpoint_id, None)
"""
Explanation: Send the prediction request
Ok, now you have a test image. Use this helper function predict_image, which takes the following parameters:
image: The test image data as a numpy array.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed to.
parameters_dict: Additional parameters for serving.
This function calls the prediction client service predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed to.
instances: A list of instances (encoded images) to predict.
parameters: Additional parameters for serving.
To pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. You also need to tell the serving binary that the content has been base64 encoded, so it can decode it on the other end.
Each instance in the prediction request is a dictionary entry of the form:
{serving_input: {'b64': content}}
serving_input: the name of the input layer of the underlying model.
'b64': A key that indicates the content is base64 encoded.
content: The compressed JPG image bytes as a base64 encoded string.
Since the predict() service can take multiple images (instances), you will send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() service.
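Before the protobuf packaging, each instance is just a nested dictionary. A minimal sketch (image_bytes is a hypothetical input-layer name used only for illustration):

```python
serving_input = "image_bytes"  # hypothetical input-layer name of the model
b64str = "Zm9vYmFy"            # placeholder base64-encoded content

# One image, sent as a list of one instance.
instances_list = [{serving_input: {"b64": b64str}}]
```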
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
predictions: Confidence level for the prediction, between 0 and 1, for each of the classes.
End of explanation
"""
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
"""
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
permamodel/permamodel | notebooks/Ku_2D.ipynb | mit
import os, sys
sys.path.append('../../permamodel/')
from permamodel.components import bmi_Ku_component
from permamodel import examples_directory
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, addcyclic
import matplotlib as mpl
print(examples_directory)
cfg_file = os.path.join(examples_directory, 'Ku_method_2D.cfg')
x = bmi_Ku_component.BmiKuMethod()
x.initialize(cfg_file)
y0 = x.get_value('datetime__start')
y1 = x.get_value('datetime__end')
for i in np.linspace(y0, y1, int(y1 - y0 + 1)):
    x.update()
    print(i)
x.finalize()
ALT = x.get_value('soil__active_layer_thickness')
TTOP = x.get_value('soil__temperature')
LAT = x.get_value('latitude')
LON = x.get_value('longitude')
SND = x.get_value('snowpack__depth')
LONS, LATS = np.meshgrid(LON, LAT)
#print np.shape(ALT)
#print np.shape(LONS)
"""
Explanation: This model was developed by the Permamodel workgroup.
The basic theory is Kudryavtsev's method.
Reference:
Anisimov, O. A., Shiklomanov, N. I., & Nelson, F. E. (1997).
Global warming and active-layer thickness: results from transient general circulation models.
Global and Planetary Change, 15(3), 61-77.
End of explanation
"""
fig=plt.figure(figsize=(8,4.5))
ax = fig.add_axes([0.05,0.05,0.9,0.85])
m = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=50.,lon_0=-107.,ax=ax)
X, Y = m(LONS, LATS)
m.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])
clev = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
cs = m.contourf(X, Y, ALT, clev, cmap=plt.cm.PuBu_r, extend='both')
cbar = m.colorbar(cs)
cbar.set_label('m')
plt.show()
# print x._values["ALT"][:]
ALT2 = np.reshape(ALT, np.size(ALT))
ALT2 = ALT2[np.where(~np.isnan(ALT2))]
print('Simulated ALT:')
print('Max:', np.nanmax(ALT2), 'm', '75% =', np.percentile(ALT2, 75))
print('Min:', np.nanmin(ALT2), 'm', '25% =', np.percentile(ALT2, 25))
plt.hist(ALT2)
"""
Explanation: Spatially visualize active layer thickness:
End of explanation
"""
fig2=plt.figure(figsize=(8,4.5))
ax2 = fig2.add_axes([0.05,0.05,0.9,0.85])
m2 = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=50.,lon_0=-107.,ax=ax2)
X, Y = m2(LONS, LATS)
m2.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m2.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])
m2.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])
clev = np.linspace(start=-10, stop=0, num =11)
cs2 = m2.contourf(X, Y, TTOP, clev, cmap=plt.cm.seismic, extend='both')
cbar2 = m2.colorbar(cs2)
cbar2.set_label('Ground Temperature ($^\circ$C)')
plt.show()
# # print x._values["ALT"][:]
TTOP2 = np.reshape(TTOP, np.size(TTOP))
TTOP2 = TTOP2[np.where(~np.isnan(TTOP2))]
# Hist plot:
plt.hist(TTOP2)
mask = x._model.mask
print(np.shape(mask))
plt.imshow(mask)
print(np.nanmin(x._model.tot_percent))
"""
Explanation: Spatially visualize mean annual ground temperature:
End of explanation
"""
hashiprobr/redes-sociais | encontro22/small-world.ipynb | gpl-3.0
import sys
sys.path.append('..')
import socnet as sn
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Encontro 22: Small Worlds
Importing the libraries:
End of explanation
"""
from random import random
def generate_random_graph(num_nodes, c):
g = sn.generate_empty_graph(num_nodes)
nodes = list(g.nodes)
for i in range(num_nodes):
n = nodes[i]
for j in range(i + 1, num_nodes):
m = nodes[j]
if random() < c / num_nodes:
g.add_edge(n, m)
return g
"""
Explanation: Start of Activity 1
Defining a function that generates a random graph such that the probability of an edge existing is c over the number of nodes:
End of explanation
"""
N = 100
C = 10
rg = generate_random_graph(N, C)
"""
Explanation: Generating a graph by passing specific parameters to the function above.
End of explanation
"""
from scipy.stats import poisson
x = range(N)
plt.hist([rg.degree(n) for n in rg.nodes], x, density=True)
plt.plot(x, poisson.pmf(x, C));
"""
Explanation: Checking whether the degree distribution of rg follows a Poisson with mean c:
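As a standalone check (independent of socnet), the mean degree of a random graph built this way concentrates around c:

```python
import random

n, c = 1000, 10
rng = random.Random(0)  # fixed seed for reproducibility
degrees = [0] * n
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < c / n:
            degrees[i] += 1
            degrees[j] += 1

mean_degree = sum(degrees) / n  # expected value is c * (n - 1) / n, close to c
```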
End of explanation
"""
x = []
rcc = []
rad = []
for num_nodes in range(C + 1, N):
g = generate_random_graph(num_nodes, C)
x.append(num_nodes)
rcc.append(sn.average_clustering_coefficient(g))
rad.append(sn.average_distance(g))
"""
Explanation: Computing the variation of the clustering coefficient and average distance as we increase num_nodes:
End of explanation
"""
plt.plot(x, rcc);
"""
Explanation: Plotting the variation of the clustering coefficient:
End of explanation
"""
plt.plot(x, rad);
"""
Explanation: Plotting the variation of the average distance:
End of explanation
"""
def generate_circular_graph(num_nodes, c):
g = sn.generate_empty_graph(num_nodes)
nodes = list(g.nodes)
for i in range(num_nodes):
n = nodes[i]
for delta in range(1, c // 2 + 1):
j = (i + delta) % num_nodes
m = nodes[j]
g.add_edge(n, m)
return g
"""
Explanation: Start of Activity 2
Defining a function that generates a circular graph:
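A standalone version of the same construction confirms that, for num_nodes > c, every node ends up with degree exactly c:

```python
def circular_degrees(num_nodes, c):
    """Degree of each node in a ring lattice where node i links to the
    c // 2 nearest neighbours on each side (no socnet needed)."""
    deg = [0] * num_nodes
    for i in range(num_nodes):
        for delta in range(1, c // 2 + 1):
            j = (i + delta) % num_nodes
            deg[i] += 1
            deg[j] += 1
    return deg


degrees = circular_degrees(100, 10)
```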
End of explanation
"""
ccc = []
cad = []
for num_nodes in x:
g = generate_circular_graph(num_nodes, C)
ccc.append(sn.average_clustering_coefficient(g))
cad.append(sn.average_distance(g))
"""
Explanation: Computing the variation of the clustering coefficient and average distance as we increase num_nodes:
End of explanation
"""
plt.plot(x, ccc);
"""
Explanation: Plotting the variation of the clustering coefficient:
End of explanation
"""
plt.plot(x, cad);
"""
Explanation: Plotting the variation of the average distance:
End of explanation
"""
from random import choice
def generate_hybrid_graph(num_nodes, c, p):
g = generate_circular_graph(num_nodes, c)
for n in g.nodes:
non_neighbors = set(g.nodes)
for m in g.neighbors(n):
non_neighbors.remove(m)
non_neighbors.remove(n)
for m in list(g.neighbors(n)):
if random() < p:
g.remove_edge(n, m)
non_neighbors.add(m)
l = choice(list(non_neighbors))
non_neighbors.remove(l)
g.add_edge(n, l)
return g
"""
Explanation: Start of Activity 3
Defining a function that generates a hybrid graph:
End of explanation
"""
N = 100
C = 10
"""
Explanation: The next plots will be for fixed N and C. For convenience, let's repeat the definition.
End of explanation
"""
x = []
hcc = []
had = []
for ip in range(0, 11):
p = ip / 10
g = generate_hybrid_graph(N, C, p)
x.append(p)
hcc.append(sn.average_clustering_coefficient(g))
had.append(sn.average_distance(g))
"""
Explanation: Computing the variation of the clustering coefficient and average distance as we increase p:
End of explanation
"""
plt.plot(x, 11 * [C / N])
plt.plot(x, hcc);
"""
Explanation: Comparing the variation of the clustering coefficient with the reference value from the random model.
In a "small world", we expect a clustering coefficient above this reference value.
End of explanation
"""
plt.plot(x, 11 * [N / (2 * C)])
plt.plot(x, had);
"""
Explanation: Comparing the variation of the average distance with the reference value from the circular model.
In a "small world", we expect an average distance below this reference value.
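The two criteria together can be expressed as a simple predicate (is_small_world is an illustrative helper, and the graph statistics passed to it below are made-up toy numbers):

```python
def is_small_world(cc, ad, cc_random_ref, ad_circular_ref):
    """High clustering (above the random reference C/N) combined with
    short average distance (below the circular reference N/(2C))."""
    return cc > cc_random_ref and ad < ad_circular_ref


N, C = 100, 10
cc_random_ref = C / N          # clustering reference from the random model
ad_circular_ref = N / (2 * C)  # distance reference from the circular model
```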
End of explanation
"""
drvinceknight/gt | nbs/solutions/08-Evolutionary-Game-Theory.ipynb | mit
import sympy as sym
x_1 = sym.symbols("x_1")
sym.solveset(3 * x_1 - 2 * (1 - x_1), x_1)
"""
Explanation: Evolutionary game theory - solutions
Assume the frequency dependent selection model for a population with two types of individuals: $x=(x_1, x_2)$ such that $x_1 + x_2 = 1$. Obtain all the stable distribution for the sytem defined by the following fitness functions:
For all of the functions in question, $x=(0, 1)$ and $x=(1, 0)$ are equilibria. There is a 3rd potential equilibria given by $f_1(x) = f_2(x)$. This is bookwork: https://vknight.org/gt/chapters/11/#Frequency-dependent-selection
$f_1(x)=x_1 - x_2\qquad f_2(x)=x_2 - 2 x_1$
$f_1(x)=f_2(x)\Rightarrow x_1 - x_2 = x_2 - 2x_1 \Rightarrow 3x_1 = 2x_2$, which (using the fact that $x_1 + x_2=1$) gives the single solution $(x_1, x_2)=(2/5, 3/5)$
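A quick numeric check confirms the two fitnesses coincide at this point:

```python
x1 = 2 / 5
x2 = 1 - x1
f1 = x1 - x2        # f_1(x) = x_1 - x_2
f2 = x2 - 2 * x1    # f_2(x) = x_2 - 2 x_1
assert abs(f1 - f2) < 1e-12  # both equal -0.2 at the equilibrium
```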
End of explanation
"""
x = sym.symbols("x", positive=True)
res = sym.solveset(- x ** 2 + 4 * x - sym.S(5) / 2, x)
res
for sol in list(res):
print(sol, float(sol), float(1 - sol))
"""
Explanation: B. $f_1(x)=x_1x_2 - x_2\qquad f_2(x)=x_2 - x_1 + 1/2$
$f_1(x)=f_2(x)\Rightarrow x_1x_2 - x_2 = x_2 - x_1 + 1/2$ setting $x=x_1$ so that $1 - x = x_2$ gives: $x - x ^ 2 - 1 + x = 1 - x - x + 1/2$ which corresponds to:
$$-x ^ 2 + 4 x - 5/2=0$$
This has solution $x=2 \pm \sqrt{6}/2$, thus $(x_1, x_2) = (2 - \sqrt{6}/2, -1 + \sqrt{6}/2)$ is the only set of solutions for which $1 \geq x_1 \geq 0$ and $1\geq x_2 \geq 0$.
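Again, a numeric check confirms $f_1 = f_2$ at $(2 - \sqrt{6}/2,\; -1 + \sqrt{6}/2)$ and that the point is a valid distribution:

```python
import math

x1 = 2 - math.sqrt(6) / 2
x2 = 1 - x1
f1 = x1 * x2 - x2        # f_1(x) = x_1 x_2 - x_2
f2 = x2 - x1 + 1 / 2     # f_2(x) = x_2 - x_1 + 1/2
assert abs(f1 - f2) < 1e-9
assert 0 <= x1 <= 1 and 0 <= x2 <= 1
```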
End of explanation
"""
gregcaporaso/short-read-tax-assignment | ipynb/mock-community/taxonomy-assignment-trimmed-dbs.ipynb | bsd-3-clause
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.framework_functions import (parameter_sweep,
generate_per_method_biom_tables,
move_results_to_repository)
project_dir = expandvars("$HOME/Desktop/projects/tax-credit")
analysis_name= "mock-community"
data_dir = join(project_dir, "data", analysis_name)
reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
"""
Explanation: Data generation: using python to sweep over methods and parameters
In this notebook, we illustrate how to use python to generate and run a list of commands. In this example, we generate a list of QIIME 1.9.0 assign_taxonomy.py commands, though this workflow for command generation is generally very useful for performing parameter sweeps (i.e., exploration of sets of parameters for achieving a specific result for comparative purposes).
Environment preparation
End of explanation
"""
dataset_reference_combinations = [
('mock-3', 'silva_123_v4_trim250'),
('mock-3', 'silva_123_clean_full16S'),
('mock-3', 'silva_123_clean_v4_trim250'),
('mock-3', 'gg_13_8_otus_clean_trim150'),
('mock-3', 'gg_13_8_otus_clean_full16S'),
('mock-9', 'unite_20.11.2016_clean_trim100'),
('mock-9', 'unite_20.11.2016_clean_fullITS'),
]
reference_dbs = {'gg_13_8_otus_clean_trim150': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'gg_13_8_otus_clean_full16S': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'unite_20.11.2016_clean_trim100': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')),
'unite_20.11.2016_clean_fullITS': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')),
'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/rep_set/rep_set_16S_only/99/99_otus_16S/dna-sequences.fasta'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.txt')),
'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean.fasta'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')),
'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean/dna-sequences.fasta'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv'))
}
"""
Explanation: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep. Here we will use a single mock community, but two different versions of the reference database.
End of explanation
"""
method_parameters_combinations = { # probabilistic classifiers
'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0]},
# global alignment classifiers
'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0],
'similarity': [0.9, 0.97, 0.99],
'uclust_max_accepts': [1, 3, 5]},
}
"""
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Assignment Using QIIME 1 or Command-Line Classifiers
Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.
End of explanation
"""
command_template = "source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 7000"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.fna',)
"""
Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to following format:
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
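For instance, an abridged template fills in like this (the paths and parameter values below are made up for illustration; the real values come from parameter_sweep):

```python
command_template = ("source activate qiime1; mkdir -p {0}; "
                    "assign_taxonomy.py -i {1} -o {0} -r {2} -t {3} -m {4} {5}")
cmd = command_template.format(
    "results/mock-3/rdp/0.5",    # {0} output directory
    "mock-3/rep_seqs.fna",       # {1} input data
    "refs/99_otus.fasta",        # {2} reference sequences
    "refs/99_otu_taxonomy.tsv",  # {3} reference taxonomy
    "rdp",                       # {4} method name
    "--confidence 0.5",          # {5} other parameters
)
```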
End of explanation
"""
print(len(commands))
commands[0]
"""
Explanation: As a sanity check, we can look at the first command that was generated and the number of commands generated.
End of explanation
"""
Parallel(n_jobs=1)(delayed(system)(command) for command in commands)
"""
Explanation: Finally, we run our commands.
End of explanation
"""
new_reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
reference_dbs = {'gg_13_8_otus_clean_trim150' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'gg_13_8_otus_clean_full16S' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'unite_20.11.2016_clean_trim100' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'unite_20.11.2016_clean_fullITS' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/rep_set/rep_set_16S_only/99/99_otus_16S_515f-806r_trim250.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')),
'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')),
'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean_515f-806r_trim250.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza'))
}
method_parameters_combinations = { # alignment-based classifiers
'blast+' : {'p-evalue': [0.001],
'p-maxaccepts': [1, 10],
'p-min-id': [0.80, 0.99],
'p-min-consensus': [0.51, 0.99]}
}
command_template = "mkdir -p {0}; qiime feature-classifier blast --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza',)
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
method_parameters_combinations = { # alignment-based classifiers
'vsearch' : {'p-maxaccepts': [1, 10],
'p-min-id': [0.97, 0.99],
'p-min-consensus': [0.51, 0.99]}
}
command_template = "mkdir -p {0}; qiime feature-classifier vsearch --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza',)
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
new_reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
reference_dbs = {'gg_13_8_otus_clean_trim150' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150-classifier.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'gg_13_8_otus_clean_full16S' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean-classifier.qza'),
join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')),
'unite_20.11.2016_clean_trim100' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100-classifier.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'unite_20.11.2016_clean_fullITS' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean-classifier.qza'),
join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_515f-806r_trim250-classifier.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.txt')),
'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean-classifier.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')),
'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean_515f-806r_trim250-classifier.qza'),
join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv'))
}
method_parameters_combinations = {
'q2-nb' : {'p-confidence': [0.0, 0.2, 0.4, 0.6, 0.8]}
}
command_template = "mkdir -p {0}; qiime feature-classifier classify --i-reads {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-classifier {2} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza',)
Parallel(n_jobs=1)(delayed(system)(command) for command in commands)
"""
Explanation: QIIME2 Classifiers
Now let's do it all over again, but with QIIME2 classifiers (which require different input files and command templates). Note that the QIIME2 artifact files required for assignment are not included in tax-credit, but can be generated from any reference dataset using qiime tools import.
End of explanation
"""
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
"""
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
"""
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
for community in dataset_reference_combinations:
method_dirs = glob(join(results_dir, community[0], '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
"""
Explanation: Move result files to repository
Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and method_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
End of explanation
"""
for community in dataset_reference_combinations:
community_dir = join(precomputed_results_dir, community[0])
exp_observations = join(community_dir, '*', 'expected')
new_community_exp_dir = join(community_dir, community[1], 'expected')
!mkdir {new_community_exp_dir}; cp {exp_observations}/* {new_community_exp_dir}
"""
Explanation: Do not forget to copy the expected taxonomy files for this mock community!
End of explanation
"""
|
rrbb014/data_science | fastcampus_dss/2016_05_23/0523_04__누적 분포 함수와 확률 밀도 함수.ipynb | mit | %%tikz
\filldraw [fill=white] (0,0) circle [radius=1cm];
\foreach \angle in {60,30,...,-270} {
\draw[line width=1pt] (\angle:0.9cm) -- (\angle:1cm);
}
\draw (0,0) -- (90:0.8cm);
"""
Explanation: Cumulative Distribution Functions and Probability Density Functions
The cumulative distribution function and the probability density function are the mathematical tools used to define the distribution of a random variable, i.e., its probability distribution.
Describing a Probability Distribution
In the definition of probability, a probability is a number assigned to an event, i.e., a set of samples. For data analysis, we need to describe and communicate exactly how probability has been assigned. A description of how much probability is assigned to which events is called a probability distribution.
Describing a probability distribution is by no means an easy task, because in principle every event must be listed together with the number assigned to it. Random variables make this description easier: an event becomes an interval, and an interval can be specified with just two numbers, its start point and its end point.
[[school_notebook:4bcfe70a64de40ec945639236b0e911d]]
Still, it is inconvenient that two numbers rather than one are needed to define an event, i.e., an interval. Is there a way to define an interval with a single number? One idea is to make the start of every interval the same number, negative infinity ($-\infty$). That is, among all possible intervals, we use only those whose start point is negative infinity.
$${ -\infty \leq X < -1 } $$
$${ -\infty \leq X < 0 } $$
$${ -\infty \leq X < 1 } $$
$${ -\infty \leq X < 2 } $$
$$ \vdots $$
$$ { -\infty \leq X < x } $$
$$ \vdots $$
Of course, these intervals are only a subset of all the events that make up the sigma field. However, using the definitions of the probability space and the sigma field, intervals whose start point is not negative infinity can be generated from them, and the probabilities of those newly generated intervals can also be computed according to the definition of probability.
Cumulative Distribution Function
A probability distribution described in this way is called the cumulative distribution function, abbreviated cdf. A cdf is conventionally written with a capital letter, e.g. $F(x)$, where the independent variable $x$ denotes the end of the interval; the start of the interval is negative infinity ($-\infty$).
The mathematical definition of the cumulative distribution function $F(x)$ of a random variable $X$ is:
$$ F(x) = P({X < x}) = P(X < x)$$
A few examples of cumulative probabilities written this way:
$$ \vdots $$
* $F(-1)$ : the probability that the random variable lies in the interval from $-\infty$ up to (but not including) $-1$, i.e. $P( { -\infty \leq X < -1 })$
* $F(0)$ : the probability that the random variable lies in the interval from $-\infty$ up to (but not including) $0$, i.e. $P( { -\infty \leq X < 0 })$
* $F(1)$ : the probability that the random variable lies in the interval from $-\infty$ up to (but not including) $1$, i.e. $P( { -\infty \leq X < 1 })$
$$ \vdots $$
* $F(10)$ : the probability that the random variable lies in the interval from $-\infty$ up to (but not including) $10$, i.e. $P( { -\infty \leq X < 10 })$
$$ \vdots $$
End of explanation
"""
t = np.linspace(-100, 500, 100)
F = t / 360
F[t < 0] = 0
F[t > 360] = 1
plt.plot(t, F)
plt.ylim(-0.1, 1.1)
plt.xticks([0, 180, 360]);
plt.title("Cumulative Distribution Function");
plt.xlabel("$x$ (deg.)");
plt.ylabel("$F(x)$");
"""
Explanation: Consider the clock-hand probability problem as an example. There the angle runs from 0 to 360 degrees, but it does no harm to take negative infinity as the start point.
$$ F(0) = P({ -\infty {}^{\circ} \leq \theta < 0 {}^{\circ} }) = 0 $$
$$ F(10) = P({ -\infty {}^{\circ} \leq \theta < 10 {}^{\circ} }) = \dfrac{1}{36} $$
$$ F(20) = P({ -\infty {}^{\circ} \leq \theta < 20 {}^{\circ} }) = \dfrac{2}{36} $$
$$ \vdots $$
$$ F(350) = P({ -\infty {}^{\circ} \leq \theta < 350 {}^{\circ} }) = \dfrac{35}{36} $$
$$ F(360) = P({ -\infty {}^{\circ} \leq \theta < 360 {}^{\circ} }) = 1 $$
$$ F(370) = P({ -\infty {}^{\circ} \leq \theta < 370 {}^{\circ} }) = 1 $$
$$ F(380) = P({ -\infty {}^{\circ} \leq \theta < 380 {}^{\circ} }) = 1 $$
$$ \vdots $$
Plotting this with NumPy and matplotlib gives the following graph.
End of explanation
"""
t = np.linspace(-100, 500, 1000)
F = t / 360
F[t < 0] = 0
F[t > 360] = 1
f = np.gradient(F, t) # numerical differentiation with respect to t
plt.plot(t, f)
plt.ylim(-0.0001, f.max()*1.1)
plt.xticks([0, 180, 360]);
plt.title("Probability Density Function");
plt.xlabel("$x$ (deg.)");
plt.ylabel("$f(x)$");
"""
Explanation: The cumulative distribution function (cdf) has the following properties:
$F(-\infty) = 0$
$F(+\infty) = 1$
$F(x) \geq F(y) \;\; \text{ if } \;\; x > y $
Probability Density Function
The cumulative distribution function turns a probability distribution into the convenient form of a function: it expresses mathematically and unambiguously how much probability is distributed over which events.
However, because the events described by the cumulative distribution function are intervals that start at negative infinity and end at the variable $x$, the shape of the distribution is hard to grasp intuitively. In other words, it is hard to tell which values of the random variable occur more often.
To see that, it would be convenient to divide the whole range over which the random variable can occur ($-\infty$ to $\infty$) into intervals of very small width and examine the probability of each. But this would require an additional convention about how wide the intervals should be, which makes it impractical.
The probability density function (pdf) was devised to overcome this drawback by describing only the relative shape of the probability distribution rather than absolute probabilities.
Consider moving to the right along the x-axis of the cumulative distribution graph and watching how its value changes. If no probability is assigned to the interval near a particular value of $x$, the cumulative distribution function does not increase as it passes that interval; that is, its slope is zero. A larger $x$ (moving right along the x-axis) describes the probability of a larger interval (event) that contains the earlier one, and if the newly included interval carries no probability, the probability assigned with or without it is the same.
The slope of the cumulative distribution function is nonzero only when the newly included interval carries nonzero probability. If more probability is assigned there, the cumulative distribution function increases faster across that interval; in other words, its slope is larger. In this way, the size of the slope of the cumulative distribution reveals the relative amount of probability assigned at each position.
Since the mathematical operation that computes a slope is differentiation, the probability density function is defined as the derivative of the cumulative distribution function.
$$ \dfrac{dF(x)}{dx} = f(x) $$
Expressed as an integral, this becomes:
$$ F(x) = \int_{-\infty}^{x} f(u) du $$
(The integration variable is written as $u$ because $x$ is already in use; any symbol would do.)
Keep in mind that the probability density function indicates how high the probability of a particular interval of the random variable is relative to other intervals; its value itself is not a probability.
The probability density function has the following properties:
Integrating from $-\infty$ to $\infty$ gives 1:
$$ \int_{-\infty}^{\infty} f(u)du = 1$$
The probability density function is greater than or equal to zero:
$$ f(x) \geq 0 $$
For the clock-hand problem shown earlier, the probability density function is as follows.
End of explanation
"""
x = np.arange(1,7)
y = np.array([0.0, 0.1, 0.1, 0.2, 0.2, 0.4])
plt.stem(x, y);
plt.xlim(0, 7);
plt.ylim(-0.01, 0.5);
"""
Explanation: Probability Mass Function
A discrete probability distribution has no probability density function; instead it has a probability mass function. The probability mass function (pmf) assigns a probability to each individual value that a discrete random variable can take. For example, throwing a six-sided die yields the discrete values 1 through 6, and such a discrete random variable could have, for instance, the probability mass function shown below. In this case it describes the distribution of an unfair die.
End of explanation
"""
x = np.arange(1,7)
y = np.array([0.0, 0.1, 0.1, 0.2, 0.2, 0.4])
z = np.cumsum(y)
plt.step(x, z);
plt.xlim(0, 7);
plt.ylim(-0.01, 1.1);
"""
Explanation: The probability mass function above describes an abnormal (unfair) die on which the value 1 never comes up and 6 comes up unusually often.
Accumulating these values gives the cumulative distribution function of the discrete random variable.
End of explanation
"""
|
makeyourowntextminingtoolkit/makeyourowntextminingtoolkit | A03_svd_applied_to_slightly_bigger_word_document_matrix.ipynb | gpl-2.0 | #import pandas for conviently labelled arrays
import pandas
# import numpy for SVD function
import numpy
# import matplotlib.pyplot for visualising arrays
import matplotlib.pyplot as plt
"""
Explanation: SVD Applied to a Word-Document Matrix
This notebook applies the SVD to a simple word-document matrix. The aim is to see what the reconstructed reduced dimension matrix looks like.
End of explanation
"""
# create a simple word-document matrix as a pandas dataframe, the content values have been normalised
words = ['wheel', ' seat', ' engine', ' slice', ' oven', ' boil', 'door', 'kitchen', 'roof']
print(words)
documents = ['doc1', 'doc2', 'doc3', 'doc4', 'doc5', 'doc6', 'doc7', 'doc8', 'doc9']
word_doc = pandas.DataFrame([[0.5,0.3333, 0.25, 0, 0, 0, 0, 0, 0],
[0.25, 0.3333, 0, 0, 0, 0, 0, 0.25, 0],
[0.25, 0.3333, 0.75, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0.5, 0.5, 0.6, 0, 0, 0],
[0, 0, 0, 0.3333, 0.1667, 0, 0.5, 0, 0],
[0, 0, 0, 0.1667, 0.3333, 0.4, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0.25, 0.25],
[0, 0, 0, 0, 0, 0, 0.5, 0.25, 0.25],
[0, 0, 0, 0, 0, 0, 0, 0.25, 0.5]], index=words, columns=documents)
# and show it
word_doc
"""
Explanation: A Slightly Bigger Word-Document Matrix
The example word-document matrix is taken from http://makeyourowntextminingtoolkit.blogspot.co.uk/2016/11/so-many-dimensions-and-how-to-reduce.html but expanded to cover a 3rd topic related to a home or house
End of explanation
"""
# create a numpy array from the pandas dataframe
A = word_doc.values
"""
Explanation: Word-Document Matrix is A
End of explanation
"""
# break it down into an SVD
U, s, VT = numpy.linalg.svd(A, full_matrices=False)
S = numpy.diag(s)
# what are U, S and V
print("U =\n", numpy.round(U, decimals=2), "\n")
print("S =\n", numpy.round(S, decimals=2), "\n")
print("V^T =\n", numpy.round(VT, decimals=2), "\n")
"""
Explanation: Now Take the SVD
End of explanation
"""
# rebuild A2 from U.S.V
A2 = numpy.dot(U,numpy.dot(S,VT))
print("A2 =\n", numpy.round(A2, decimals=2))
"""
Explanation: We can see above that the values in the diagonal S matrix are ordered by magnitude. There is a significant difference between the biggest value 1.1 and the smallest 0.05. The halfway value of 0.28 is still much smaller than the largest.
Check U, S and V Do Actually Reconstruct A
End of explanation
"""
# S_reduced is the same as S but with only the top 3 elements kept
S_reduced = numpy.zeros_like(S)
# only keep the top l singular values
l = 3
S_reduced[:l, :l] = S[:l,:l]
# show S_reduced, which has less info than the original S
print("S_reduced =\n", numpy.round(S_reduced, decimals=2))
"""
Explanation: Yes, that worked .. the reconstructed A2 is the same as the original A (within the bounds of small floating point accuracy)
Now Reduce Dimensions, Extract Topics
Here we use only the top 3 values of the S singular value matrix, pretty brutal reduction in dimensions!
Why 3, and not 2?
We'll only plot 2 dimensions for the document cluster view, and later we'll use 3 dimensions for the topic word view
End of explanation
"""
# what is the document matrix now?
S_reduced_VT = numpy.dot(S_reduced, VT)
print("S_reduced_VT = \n", numpy.round(S_reduced_VT, decimals=2))
# plot the array
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("S_reduced_VT")
p.plot(S_reduced_VT[0,],S_reduced_VT[1,],'ro')
plt.show()
"""
Explanation: New View Of Documents
End of explanation
"""
# topics are a linear combination of original words
U_S_reduced = numpy.dot(U, S_reduced)
df = pandas.DataFrame(numpy.round(U_S_reduced, decimals=2), index=words)
# show colour coded so it is easier to see significant word contributions to a topic
df.style.background_gradient(cmap=plt.get_cmap('Blues'), low=0, high=2)
"""
Explanation: The above shows that there are indeed 3 clusters of documents. That matches our expectations as we constructed the example data set that way.
Topics from New View of Words
End of explanation
"""
|
tensorflow/tpu | tools/colab/bert_finetuning_with_cloud_tpus.ipynb | apache-2.0 | # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: <a href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import datetime
import json
import os
import pprint
import random
import string
import sys
import tensorflow as tf
assert 'COLAB_TPU_ADDR' in os.environ, 'ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!'
TPU_ADDRESS = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print('TPU address is', TPU_ADDRESS)
from google.colab import auth
auth.authenticate_user()
with tf.Session(TPU_ADDRESS) as session:
print('TPU devices:')
pprint.pprint(session.list_devices())
# Upload credentials to TPU.
with open('/content/adc.json', 'r') as f:
auth_info = json.load(f)
tf.contrib.cloud.configure_gcs(session, credentials=auth_info)
# Now credentials are set for all future sessions on this TPU.
"""
Explanation: BERT End to End (Fine-tuning + Predicting) in 5 minutes with Cloud TPU
Overview
BERT, or Bidirectional Encoder Representations from Transformers, is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. The academic paper can be found here: https://arxiv.org/abs/1810.04805.
This Colab demonstrates using a free Colab Cloud TPU to fine-tune sentence and sentence-pair classification tasks built on top of pretrained BERT models and
run predictions on the tuned model. The Colab demonstrates loading pretrained BERT models from both TF Hub and checkpoints.
Note: You will need a GCP (Google Cloud Platform) account and a GCS (Google Cloud
Storage) bucket for this Colab to run.
Please follow the Google Cloud TPU quickstart for how to create GCP account and GCS bucket. You have $300 free credit to get started with any GCP product. You can learn more about Cloud TPU at https://cloud.google.com/tpu/docs.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this notebook, you will learn how to train and evaluate a BERT model using TPU.
Instructions
<h3><a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a> Train on TPU</h3>
Create a Cloud Storage bucket for your TensorBoard logs at http://console.cloud.google.com/storage and fill in the BUCKET parameter in the "Parameters" section below.
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All (Watch out: the "Colab-only auth for this notebook and the TPU" cell requires user input). You can also run the cells manually with Shift-ENTER.
Set up your TPU environment
In this section, you perform the following tasks:
Set up a Colab TPU running environment
Verify that you are connected to a TPU device
Upload your credentials to TPU to access your GCS bucket.
End of explanation
"""
import sys
!test -d bert_repo || git clone https://github.com/google-research/bert bert_repo
if not 'bert_repo' in sys.path:
sys.path += ['bert_repo']
# import python modules defined by BERT
import modeling
import optimization
import run_classifier
import run_classifier_with_tfhub
import tokenization
# import tfhub
import tensorflow_hub as hub
"""
Explanation: Prepare and import BERT modules
With your environment configured, you can now prepare and import the BERT modules. The following step clones the source code from GitHub and import the modules from the source. Alternatively, you can install BERT using pip (!pip install bert-tensorflow).
End of explanation
"""
TASK = 'MRPC' #@param {type:"string"}
assert TASK in ('MRPC', 'CoLA'), 'Only (MRPC, CoLA) are demonstrated here.'
# Download glue data.
! test -d download_glue_repo || git clone https://gist.github.com/60c2bdb54d156a41194446737ce03e2e.git download_glue_repo
!python download_glue_repo/download_glue_data.py --data_dir='glue_data' --tasks=$TASK
TASK_DATA_DIR = 'glue_data/' + TASK
print('***** Task data directory: {} *****'.format(TASK_DATA_DIR))
!ls $TASK_DATA_DIR
BUCKET = 'YOUR_BUCKET' #@param {type:"string"}
assert BUCKET, 'Must specify an existing GCS bucket name'
OUTPUT_DIR = 'gs://{}/bert-tfhub/models/{}'.format(BUCKET, TASK)
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
# Available pretrained model checkpoints:
# uncased_L-12_H-768_A-12: uncased BERT base model
# uncased_L-24_H-1024_A-16: uncased BERT large model
# cased_L-12_H-768_A-12: cased BERT base model
BERT_MODEL = 'uncased_L-12_H-768_A-12' #@param {type:"string"}
BERT_MODEL_HUB = 'https://tfhub.dev/google/bert_' + BERT_MODEL + '/1'
"""
Explanation: Prepare for training
This next section of code performs the following tasks:
Specify task and download training data.
Specify BERT pretrained model
Specify GS bucket, create output directory for model checkpoints and eval results.
End of explanation
"""
tokenizer = run_classifier_with_tfhub.create_tokenizer_from_hub_module(BERT_MODEL_HUB)
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
"""
Explanation: Now let's load the tokenizer module from TF Hub and play with it.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
EVAL_BATCH_SIZE = 8
PREDICT_BATCH_SIZE = 8
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
MAX_SEQ_LENGTH = 128
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 1000
SAVE_SUMMARY_STEPS = 500
processors = {
"cola": run_classifier.ColaProcessor,
"mnli": run_classifier.MnliProcessor,
"mrpc": run_classifier.MrpcProcessor,
}
processor = processors[TASK.lower()]()
label_list = processor.get_labels()
# Compute number of train and warmup steps from batch size
train_examples = processor.get_train_examples(TASK_DATA_DIR)
num_train_steps = int(len(train_examples) / TRAIN_BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# Setup TPU related config
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)
NUM_TPU_CORES = 8
ITERATIONS_PER_LOOP = 1000
def get_run_config(output_dir):
return tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
model_dir=output_dir,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=ITERATIONS_PER_LOOP,
num_shards=NUM_TPU_CORES,
per_host_input_for_training=tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2))
"""
Explanation: We also initialize our hyperparameters, prepare the training data, and set up the TPU configuration.
End of explanation
"""
# Force TF Hub writes to the GS bucket we provide.
os.environ['TFHUB_CACHE_DIR'] = OUTPUT_DIR
model_fn = run_classifier_with_tfhub.model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=True,
bert_hub_module_handle=BERT_MODEL_HUB
)
estimator_from_tfhub = tf.contrib.tpu.TPUEstimator(
use_tpu=True,
model_fn=model_fn,
config=get_run_config(OUTPUT_DIR),
train_batch_size=TRAIN_BATCH_SIZE,
eval_batch_size=EVAL_BATCH_SIZE,
predict_batch_size=PREDICT_BATCH_SIZE,
)
"""
Explanation: Fine-tune and Run Predictions on a pretrained BERT Model from TF Hub
This section demonstrates fine-tuning from a pre-trained BERT TF Hub module and running predictions.
End of explanation
"""
# Train the model
def model_train(estimator):
print('MRPC/CoLA on BERT base model normally takes about 2-3 minutes. Please wait...')
# We'll set sequences to be at most 128 tokens long.
train_features = run_classifier.convert_examples_to_features(
train_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
print('***** Started training at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(len(train_examples)))
print(' Batch size = {}'.format(TRAIN_BATCH_SIZE))
tf.logging.info(" Num steps = %d", num_train_steps)
train_input_fn = run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print('***** Finished training at {} *****'.format(datetime.datetime.now()))
model_train(estimator_from_tfhub)
def model_eval(estimator):
# Eval the model.
eval_examples = processor.get_dev_examples(TASK_DATA_DIR)
eval_features = run_classifier.convert_examples_to_features(
eval_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
print('***** Started evaluation at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(len(eval_examples)))
print(' Batch size = {}'.format(EVAL_BATCH_SIZE))
# Eval will be slightly WRONG on the TPU because it will truncate
# the last batch.
eval_steps = int(len(eval_examples) / EVAL_BATCH_SIZE)
eval_input_fn = run_classifier.input_fn_builder(
features=eval_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=True)
result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
print('***** Finished evaluation at {} *****'.format(datetime.datetime.now()))
output_eval_file = os.path.join(OUTPUT_DIR, "eval_results.txt")
with tf.gfile.GFile(output_eval_file, "w") as writer:
print("***** Eval results *****")
for key in sorted(result.keys()):
print(' {} = {}'.format(key, str(result[key])))
writer.write("%s = %s\n" % (key, str(result[key])))
model_eval(estimator_from_tfhub)
def model_predict(estimator):
# Make predictions on a subset of eval examples
prediction_examples = processor.get_dev_examples(TASK_DATA_DIR)[:PREDICT_BATCH_SIZE]
input_features = run_classifier.convert_examples_to_features(prediction_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=True)
predictions = estimator.predict(predict_input_fn)
for example, prediction in zip(prediction_examples, predictions):
print('text_a: %s\ntext_b: %s\nlabel:%s\nprediction:%s\n' % (example.text_a, example.text_b, str(example.label), prediction['probabilities']))
model_predict(estimator_from_tfhub)
"""
Explanation: At this point, you can now fine-tune the model, evaluate it, and run predictions on it.
End of explanation
"""
# Setup task specific model and TPU running config.
BERT_PRETRAINED_DIR = 'gs://cloud-tpu-checkpoints/bert/' + BERT_MODEL
print('***** BERT pretrained directory: {} *****'.format(BERT_PRETRAINED_DIR))
!gsutil ls $BERT_PRETRAINED_DIR
CONFIG_FILE = os.path.join(BERT_PRETRAINED_DIR, 'bert_config.json')
INIT_CHECKPOINT = os.path.join(BERT_PRETRAINED_DIR, 'bert_model.ckpt')
model_fn = run_classifier.model_fn_builder(
bert_config=modeling.BertConfig.from_json_file(CONFIG_FILE),
num_labels=len(label_list),
init_checkpoint=INIT_CHECKPOINT,
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=True,
use_one_hot_embeddings=True
)
OUTPUT_DIR = OUTPUT_DIR.replace('bert-tfhub', 'bert-checkpoints')
tf.gfile.MakeDirs(OUTPUT_DIR)
estimator_from_checkpoints = tf.contrib.tpu.TPUEstimator(
use_tpu=True,
model_fn=model_fn,
config=get_run_config(OUTPUT_DIR),
train_batch_size=TRAIN_BATCH_SIZE,
eval_batch_size=EVAL_BATCH_SIZE,
predict_batch_size=PREDICT_BATCH_SIZE,
)
"""
Explanation: Fine-tune and run predictions on a pre-trained BERT model from checkpoints
Alternatively, you can also load pre-trained BERT models from saved checkpoints.
End of explanation
"""
model_train(estimator_from_checkpoints)
model_eval(estimator_from_checkpoints)
model_predict(estimator_from_checkpoints)
"""
Explanation: Now, you can repeat the training, evaluation, and prediction steps.
End of explanation
"""
|
landlab/landlab | notebooks/tutorials/groundwater/groundwater_flow.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import GroundwaterDupuitPercolator, FlowAccumulator
from landlab.components.uniform_precip import PrecipitationDistribution
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Modeling groundwater flow in a conceptual catchment
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial demonstrates how the GroundwaterDupuitPercolator can be used to model groundwater flow and seepage (groundwater return flow). It is recommended to read the documentation for the component before starting this tutorial to be familiar with the mechanics of the model.
In this tutorial you will:
* Create a raster grid on which to run the model
* Simulate constant recharge and check that the component conserves mass
* Confirm conservation of mass when the recharge rate and timestep are changed
* Simulate recharge from storm events, check conservation of mass, and look at the outflow hydrograph
* Learn how to set fixed gradient boundaries and set values for the hydraulic gradient
Import libraries
End of explanation
"""
boundaries = {"top": "closed", "bottom": "closed", "right": "closed", "left": "closed"}
grid = RasterModelGrid((51, 51), xy_spacing=10.0, bc=boundaries)
grid.status_at_node[1] = grid.BC_NODE_IS_FIXED_VALUE
elev = grid.add_zeros("node", "topographic__elevation")
elev[:] = (0.001 * grid.x_of_node ** 2 + 0.001 * grid.y_of_node ** 2) + 2
base = grid.add_zeros("node", "aquifer_base__elevation")
base[:] = elev - 2
wt = grid.add_zeros("node", "water_table__elevation")
wt[:] = elev
plt.figure()
imshow_grid(grid, "topographic__elevation")
"""
Explanation: Create a RasterModelGrid
Here you will make the grid on which we will run the model. You will create three fields: topographic elevation, aquifer base elevation, and initial water table elevation
End of explanation
"""
K = 0.01 # hydraulic conductivity, (m/s)
R = 0 # 1e-7 # recharge rate, (m/s)
n = 0.2 # porosity, (-)
gdp = GroundwaterDupuitPercolator(
grid, hydraulic_conductivity=K, porosity=n, recharge_rate=R, regularization_f=0.01
)
fa = FlowAccumulator(
grid,
surface="topographic__elevation",
flow_director="FlowDirectorSteepest",
runoff_rate="surface_water__specific_discharge",
)
"""
Explanation: The grid is square with dimensions 500x500m. The surface elevation and aquifer base have the same concave parabolic shape, with thickness 2m between them. The aquifer is initially fully saturated (water table at the surface). Water is only allowed to exit the domain through a single node in the lower left corner. All other boundaries are closed.
Simulate constant groundwater recharge
Now initialize the model components. In addition to the grid, the GroundwaterDupuitPercolator takes four optional arguments: hydraulic conductivity, porosity, recharge rate, and a regularization factor that smooths the transition between subsurface and surface flow as the water table approaches the ground surface. The greater the value, the smoother the transition.
You will also initialize a FlowAccumulator in order to use an included method to calculate the surface water discharge out of the domain. The runoff rate used by the FlowAccumulator is the surface water specific discharge from the groundwater model.
End of explanation
"""
N = 500
dt = 1e2
recharge_flux = np.zeros(N)
gw_flux = np.zeros(N)
sw_flux = np.zeros(N)
storage = np.zeros(N)
s0 = gdp.calc_total_storage()
for i in range(N):
gdp.run_one_step(dt)
fa.run_one_step()
storage[i] = gdp.calc_total_storage()
recharge_flux[i] = gdp.calc_recharge_flux_in()
gw_flux[i] = gdp.calc_gw_flux_out()
sw_flux[i] = gdp.calc_sw_flux_out()
"""
Explanation: Next, run the model forward in time, and track the fluxes leaving the domain.
End of explanation
"""
plt.figure()
imshow_grid(grid, (wt - base) / (elev - base), cmap="Blues")
"""
Explanation: Now visualize some results.
End of explanation
"""
t = np.arange(0, N * dt, dt)
plt.figure(figsize=(8, 6))
plt.plot(
t / 3600,
np.cumsum(gw_flux) * dt + np.cumsum(sw_flux) * dt + storage - s0,
"b-",
linewidth=3,
alpha=0.5,
label="Total Fluxes + Storage",
)
plt.plot(
t / 3600,
np.cumsum(recharge_flux) * dt - recharge_flux[0] * dt,
"k:",
label="recharge flux",
)
plt.plot(t / 3600, np.cumsum(gw_flux) * dt, "b:", label="groundwater flux")
plt.plot(t / 3600, np.cumsum(sw_flux) * dt, "g:", label="surface water flux")
plt.plot(t / 3600, storage - s0, "r:", label="storage")
plt.ylabel("Cumulative Volume $[m^3]$")
plt.xlabel("Time [h]")
plt.legend(frameon=False)
plt.show()
"""
Explanation: The above shows how saturated the aquifer is. Note that it is most saturated at the lowest area of the domain, nearest the outlet.
Now look at the mass balance by plotting cumulative fluxes. The cumulative recharge in should be equal to the cumulative fluxes out (groundwater and surface water) plus the change in storage from the initial condition.
End of explanation
"""
wt0 = wt.copy()
K = 0.01 # hydraulic conductivity, (m/s)
R = np.array([1e-5, 1e-6, 1e-7, 1e-8]) # recharge rate, (m/s)
por = 0.2 # porosity, (-)
f_in = np.zeros(len(R))
f_out = np.zeros(len(R))
for n in range(len(R)):
boundaries = {
"top": "closed",
"bottom": "closed",
"right": "closed",
"left": "closed",
}
grid = RasterModelGrid((51, 51), xy_spacing=10.0, bc=boundaries)
grid.status_at_node[1] = grid.BC_NODE_IS_FIXED_VALUE
elev = grid.add_zeros("node", "topographic__elevation")
elev[:] = (0.001 * grid.x_of_node ** 2 + 0.001 * grid.y_of_node ** 2) + 2
base = grid.add_zeros("node", "aquifer_base__elevation")
base[:] = elev - 2
wt = grid.add_zeros("node", "water_table__elevation")
wt[:] = wt0.copy()
gdp = GroundwaterDupuitPercolator(
grid,
hydraulic_conductivity=K,
porosity=por,
recharge_rate=R[n],
regularization_f=0.01,
courant_coefficient=0.1,
)
fa = FlowAccumulator(
grid,
surface="topographic__elevation",
flow_director="FlowDirectorSteepest",
runoff_rate="surface_water__specific_discharge",
)
N = 250
dt = 1e2
recharge_flux = np.zeros(N)
gw_flux = np.zeros(N)
sw_flux = np.zeros(N)
storage = np.zeros(N)
s0 = gdp.calc_total_storage()
for i in range(N):
gdp.run_one_step(dt)
fa.run_one_step()
recharge_flux[i] = gdp.calc_recharge_flux_in()
gw_flux[i] = gdp.calc_gw_flux_out()
sw_flux[i] = gdp.calc_sw_flux_out()
storage[i] = gdp.calc_total_storage()
f_in[n] = np.sum(recharge_flux) * dt
f_out[n] = np.sum(gw_flux) * dt + np.sum(sw_flux) * dt + storage[-1] - s0
"""
Explanation: The thick blue line (cumulative fluxes plus storage) matches the black cumulative recharge flux line, which indicates that the model has conserved mass. Because the initial domain was fully saturated, the primary feature that shows up in this mass balance is the loss of that initial water. It will be easier to see what is going on here in the second example.
Check conservation of mass with changing recharge
Now check to confirm that mass is conserved with different recharge rates.
End of explanation
"""
x11 = np.linspace(0, max(f_in))
plt.figure()
plt.loglog(x11, x11, "r--", label="1:1")
plt.loglog(f_in, f_out, ".", markersize=10)
plt.legend(frameon=False)
plt.ylabel("flux out + storage $(m^3)$")
plt.xlabel("flux in $(m^3)$")
plt.show()
"""
Explanation: The code above simulates the evolution of the water table under four different recharge rates, and calculates the fluxes across the domain boundaries and the change in storage. It then sums the fluxes to find out the total volume in and out and change in storage. Below we visualize how flow in compares with flow out plus storage change. If mass is conserved they should be the same.
End of explanation
"""
(f_in - f_out) / f_in
"""
Explanation: The trials plot close to the 1:1 line, showing that we are close to mass conservation. Just how close? Calculate the relative error below.
End of explanation
"""
wt0 = wt.copy()
K = 0.01 # hydraulic conductivity, (m/s)
R = 1e-7 # recharge rate, (m/s)
por = 0.2 # porosity, (-)
N_all = np.array([10, 50, 100, 500, 1000]) # number of timesteps
T = 24 * 3600 # total time
dt_all = T / N_all # timestep
gdp.courant_coefficient = 0.2
f_in = np.zeros(len(N_all))
f_out = np.zeros(len(N_all))
for n in range(len(N_all)):
boundaries = {
"top": "closed",
"bottom": "closed",
"right": "closed",
"left": "closed",
}
grid = RasterModelGrid((51, 51), xy_spacing=10.0, bc=boundaries)
grid.status_at_node[1] = grid.BC_NODE_IS_FIXED_VALUE
elev = grid.add_zeros("node", "topographic__elevation")
elev[:] = (0.001 * grid.x_of_node ** 2 + 0.001 * grid.y_of_node ** 2) + 2
base = grid.add_zeros("node", "aquifer_base__elevation")
base[:] = elev - 2
wt = grid.add_zeros("node", "water_table__elevation")
wt[:] = wt0.copy()
gdp = GroundwaterDupuitPercolator(
grid,
hydraulic_conductivity=K,
porosity=por,
recharge_rate=R,
regularization_f=0.01,
courant_coefficient=0.1,
)
fa = FlowAccumulator(
grid,
surface="topographic__elevation",
flow_director="FlowDirectorSteepest",
runoff_rate="surface_water__specific_discharge",
)
N = N_all[n]
dt = dt_all[n]
recharge_flux = np.zeros(N)
gw_flux = np.zeros(N)
sw_flux = np.zeros(N)
storage = np.zeros(N)
s0 = gdp.calc_total_storage()
for i in range(N):
gdp.run_with_adaptive_time_step_solver(dt)
fa.run_one_step()
recharge_flux[i] = gdp.calc_recharge_flux_in()
gw_flux[i] = gdp.calc_gw_flux_out()
sw_flux[i] = gdp.calc_sw_flux_out()
storage[i] = gdp.calc_total_storage()
f_in[n] = np.sum(recharge_flux) * dt
f_out[n] = np.sum(gw_flux) * dt + np.sum(sw_flux) * dt + storage[-1] - s0
"""
Explanation: Check conservation of mass with changing timestep
To check conservation of mass with different timesteps, we will use the method run_with_adaptive_time_step_solver to ensure the model remains stable. This method is the same as run_one_step, except that it subdivides the provided timestep in order to meet a Courant-type stability criterion.
We can set the courant_coefficient either as an argument when we create the component, or by setting the attribute gdp.courant_coefficient. This value indicates how large the maximum allowed timestep is relative to the Courant limit. Values close to 0.1 are recommended for best results.
For efficiency, fluxes are only calculated at the end of each large timestep when using run_with_adaptive_time_step_solver, not during the internally subdivided timesteps. As a result, deviations from mass conservation are possible.
End of explanation
"""
(f_in - f_out) / f_in
"""
Explanation: The code above simulates the evolution of the water table for the same total amount of time, but using four different values for the timestep. Just as before, fluxes and storage are calculated, along with their totals. Again, look at the relative error in mass conservation.
End of explanation
"""
# generate storm timeseries
T = 10 * 24 * 3600 # sec
Tr = 1 * 3600 # sec
Td = 24 * 3600 # sec
dt = 1e3 # sec
p = 1e-3 # m
precip = PrecipitationDistribution(
mean_storm_duration=Tr,
mean_interstorm_duration=Td,
mean_storm_depth=p,
total_t=T,
delta_t=dt,
)
durations = []
intensities = []
precip.seed_generator(seedval=1)
for (
interval_duration,
rainfall_rate_in_interval,
) in precip.yield_storm_interstorm_duration_intensity(subdivide_interstorms=True):
durations.append(interval_duration)
intensities.append(rainfall_rate_in_interval)
N = len(durations)
"""
Explanation: Simulate time-varying recharge
Lastly, simulate time-varying recharge, examine the mass balance, and look at the outflow hydrograph. This will use the same grid and groundwater model instance as above, taking the final condition of the previous model run as the new initial condition here. This time the adaptive timestep solver will be used to make sure the model remains stable.
First, we need a distribution of recharge events. We will use Landlab's precipitation distribution tool to create lists of paired recharge event durations and intensities.
End of explanation
"""
recharge_flux = np.zeros(N)
gw_flux = np.zeros(N)
sw_flux = np.zeros(N)
storage = np.zeros(N)
s0 = gdp.calc_total_storage()
num_substeps = np.zeros(N)
gdp.courant_coefficient = 0.2
for i in range(N):
gdp.recharge = intensities[i] * np.ones_like(gdp.recharge)
gdp.run_with_adaptive_time_step_solver(durations[i])
fa.run_one_step()
num_substeps[i] = gdp.number_of_substeps
recharge_flux[i] = gdp.calc_recharge_flux_in()
gw_flux[i] = gdp.calc_gw_flux_out()
sw_flux[i] = gdp.calc_sw_flux_out()
storage[i] = gdp.calc_total_storage()
"""
Explanation: Next, run the model forward with the adaptive timestep solver.
End of explanation
"""
t = np.cumsum(durations)
plt.figure()
plt.plot(
t / 3600,
np.cumsum(gw_flux * durations) + np.cumsum(sw_flux * durations) + storage - s0,
"b-",
linewidth=3,
alpha=0.5,
label="Total Fluxes + Storage",
)
plt.plot(t / 3600, np.cumsum(recharge_flux * durations), "k:", label="recharge flux")
plt.plot(t / 3600, np.cumsum(gw_flux * durations), "b:", label="groundwater flux")
plt.plot(t / 3600, np.cumsum(sw_flux * durations), "g:", label="surface water flux")
plt.plot(t / 3600, storage - storage[0], "r:", label="storage")
plt.ylabel("Cumulative Volume $[m^3]$")
plt.xlabel("Time [h]")
plt.legend(frameon=False)
plt.show()
"""
Explanation: Again, visualize the mass balance:
End of explanation
"""
plt.figure()
plt.plot(num_substeps, ".")
plt.xlabel("Iteration")
plt.ylabel("Numer of Substeps")
plt.yticks([1, 5, 10, 15, 20])
plt.show()
max(num_substeps)
"""
Explanation: Visualize the number of substeps that the model took for stability:
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(t / (3600 * 24), sw_flux, label="Surface water flux")
ax.plot(t / (3600 * 24), gw_flux, label="Groundwater flux")
ax.set_ylim((0, 0.04))
ax.set_ylabel("Flux out $[m^3/s]$")
ax.set_xlabel("Time [d]")
ax.legend(frameon=False, loc=7)
ax1 = ax.twinx()
ax1.plot(t / (3600 * 24), recharge_flux, "0.6")
ax1.set_ylim((1.2, 0))
ax1.set_ylabel("Recharge flux in $[m^3/s]$")
plt.show()
"""
Explanation: The method has subdivided the timestep up to 18 times in order to meet the stability criterion. This is dependent on a number of factors, including the Courant coefficient, the hydraulic conductivity, and hydraulic gradient.
Now look at the timeseries of recharge in and groundwater and surface water leaving the domain at the open node:
End of explanation
"""
grid = RasterModelGrid((10, 10), xy_spacing=10.0)
grid.set_status_at_node_on_edges(
right=grid.BC_NODE_IS_CLOSED,
top=grid.BC_NODE_IS_CLOSED,
left=grid.BC_NODE_IS_CLOSED,
bottom=grid.BC_NODE_IS_FIXED_GRADIENT,
)
elev = grid.add_zeros("node", "topographic__elevation")
elev[:] = (0.001 * (grid.x_of_node - 100) ** 2 + 0.0002 * grid.y_of_node ** 2) + 10
base = grid.add_zeros("node", "aquifer_base__elevation")
base[:] = elev - 2
wt = grid.add_zeros("node", "water_table__elevation")
wt[:] = elev
gdp = GroundwaterDupuitPercolator(grid)
"""
Explanation: The relationship between maximum flux that can be passed through the subsurface and the occurrence of groundwater seepage is clear from this figure.
Using different boundary conditions
So far, we have used the fixed fixed value "open" boundary condition, and zero flux "closed" boundary condition. Fixed gradient (von Neumann) boundary conditions are also supported by this component. When fixed value is selected, the water table elevation remains fixed on the specified boundary nodes. When fixed gradient is selected, the water table gradient on specified links remains fixed.
Below is an example of setting fixed gradient links on one boundary.
End of explanation
"""
# calculate surface slopes
S = grid.calc_grad_at_link(elev)
# assign hydraulic gradient at fixed links to be the surface slope there
grid.at_link["hydraulic__gradient"][grid.fixed_links] = S[grid.fixed_links]
grid.at_link["hydraulic__gradient"][grid.fixed_links]
"""
Explanation: Say we want to set the bottom boundary gradients to be equal to the slope of the topographic surface. This can be done simply as follows:
End of explanation
"""
|
slowvak/MachineLearningForMedicalImages | notebooks/Module 1.ipynb | mit | %matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pandas as pd
import csv
from pandas.plotting import scatter_matrix  # moved from pandas.tools.plotting in newer pandas
from sklearn import preprocessing
import nibabel as nib
"""
Explanation: Module 1 - Data Load / Display / Normalization
In this module you will learn how to load a nifti image utilizing nibabel and create datasets that can be used with machine learning algorithms. The basic features we will consider are intensity based and originate from multiple acquisition types.
Step 1: Load basic python libraries
End of explanation
"""
CurrentDir= os.getcwd()
# Print current directory
print (CurrentDir)
# Get parent directory
print(os.path.abspath(os.path.join(CurrentDir, os.pardir)))
# Create the file paths. The images are contained in a subfolder called Data.
PostName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'POST.nii.gz') )
PreName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'PRE.nii.gz') )
FLAIRName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'FLAIR.nii.gz') )
GroundTruth= os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'GroundTruth.nii.gz') )
# read Pre in--we assume that all images are same x,y dims
Pre = nib.load(PreName)
# Pre is a class containing the image data among other information
Pre=Pre.get_data()
xdim = np.shape(Pre)[0]
ydim = np.shape(Pre)[1]
zdim = np.shape(Pre)[2]
# Printing the dimensions of an image
print ('Dimensions')
print (xdim,ydim,zdim)
# Normalize to mean
Pre=Pre/np.mean(Pre[np.nonzero(Pre)])
# Post
Post = nib.load(PostName)
Post=Post.get_data()
# Normalize to mean
Post=Post/np.mean(Post[np.nonzero(Post)])
Flair = nib.load(FLAIRName)
Flair=Flair.get_data()
# Normalize FLAIR
Flair=Flair/np.mean(Flair[np.nonzero(Flair)])
print ("Data Loaded")
"""
Explanation: Step 2: Load the three types of images available.
T1w pre-contrast
FLAIR
T1w post-contrast
The goal is to create a 4D image that contains all four 3D volumes we will use in our example
End of explanation
"""
# Load Ground Truth
GroundTrutha = nib.load(GroundTruth)
GroundTruth=GroundTrutha.get_data()
print ("Data Loaded")
"""
Explanation: Create a training set
We assume the following labels.
Enhancing Tumor = 4
Edema = 2
WM and CSF and GM=1
Background (air) = 0
End of explanation
"""
def display_overlay(Image1, Image2):
"""
Function: Overlays Image2 over Image1
Image 1: 2D image
Image 2: 2D Image
Requires numpy, matplotlib
"""
Image1=np.rot90(Image1,3)
Image2=np.rot90(Image2,3)
Image2 = np.ma.masked_where(Image2 == 0, Image2)
plt.imshow(Image1, cmap=plt.cm.gray)
plt.imshow(Image2, cmap=plt.cm.brg, alpha=.7, vmin=.7, vmax=5, interpolation='nearest')
plt.axis('off')
plt.show()
f, (ax1,ax2,ax3,ax4)=plt.subplots(1,4)
ax1.imshow(np.rot90(Post[:, :, 55,],3), cmap=plt.cm.gray)
ax1.axis('off')
ax2.imshow(np.rot90(Flair[:, :, 55,],3), cmap=plt.cm.gray)
ax2.axis('off')
ax3.imshow(np.rot90(Pre[:, :, 55,],3), cmap=plt.cm.gray)
ax3.axis('off')
ax4.imshow(np.rot90(GroundTruth[:, :, 55,],3), cmap=plt.cm.gray)
ax4.axis('off')
plt.show()
display_overlay(Post[:, :, 55,], GroundTruth[:,:,55]==4)
display_overlay(Flair[:, :, 55,], GroundTruth[:,:,55]==2)
display_overlay(Pre[:, :, 55,], GroundTruth[:,:,55]==1)
"""
Explanation: Plot the images
End of explanation
"""
# Create classes
# Tissue = GM + CSF + WM
ClassTissuePost=(Post[np.nonzero(GroundTruth==1)])
ClassTissuePre=(Pre[np.nonzero(GroundTruth==1)])
ClassTissueFlair=(Flair[np.nonzero(GroundTruth==1)])
# Enhancing Tumor
ClassTumorPost=(Post[np.nonzero(GroundTruth==4)])
ClassTumorPre=(Pre[np.nonzero(GroundTruth==4)])
ClassTumorFlair=(Flair[np.nonzero(GroundTruth==4)])
# Edema
ClassEdemaPost=(Post[np.nonzero(GroundTruth==2)])
ClassEdemaPre=(Pre[np.nonzero(GroundTruth==2)])
ClassEdemaFlair=(Flair[np.nonzero(GroundTruth==2)])
# We select only 5000 random points per class for demonstration purposes
IND=np.random.randint(np.shape(ClassTumorPre)[0], size=5000)
ClassTissuePost=ClassTissuePost[IND]
ClassTissuePre=ClassTissuePre[IND]
ClassTissueFlair=ClassTissueFlair[IND]
ClassTumorPost=ClassTumorPost[IND]
ClassTumorPre=ClassTumorPre[IND]
ClassTumorFlair=ClassTumorFlair[IND]
ClassEdemaPost=ClassEdemaPost[IND]
ClassEdemaPre=ClassEdemaPre[IND]
ClassEdemaFlair=ClassEdemaFlair[IND]
print ("Saving the data to a pandas dataframe and subsequently to a csv")
# Create a dictionary containing the classes
datasetcomplete={"ClassTissuePost": ClassTissuePost, "ClassTissuePre": ClassTissuePre, "ClassTissueFlair": ClassTissueFlair, "ClassTumorPost": ClassTumorPost, "ClassTumorPre": ClassTumorPre, "ClassTumorFlair": ClassTumorFlair, "ClassEdemaPost": ClassEdemaPost, "ClassEdemaPre": ClassEdemaPre, "ClassEdemaFlair": ClassEdemaFlair}
datapd=pd.DataFrame.from_dict(datasetcomplete,orient="index")
# print (datapd)
datapd=datapd.transpose()
# datapd=pd.DataFrame(dict([ (k,Series(v)) for k,v in datasetcomplete.iteritems() ]))
datapd.to_csv("DataExample.csv",index=False)
"""
Explanation: Create dataset
End of explanation
"""
# Display Tumor vs NAWM
IND=np.random.randint(1000, size=100)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(ClassTissuePost[IND,], ClassTissuePre[IND,], ClassTissueFlair[IND,])
ax.scatter(ClassTumorPost[IND,], ClassTumorPre[IND,], ClassTumorFlair[IND,], c='r', marker='^')
ax.set_xlabel('post')
ax.set_ylabel('pre')
ax.set_zlabel('FLAIR')
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(ClassTissuePost[IND,], ClassTissuePre[IND,])
ax.scatter(ClassTumorPost[IND,], ClassTumorPre[IND,], c='r', marker='^')
ax.set_xlabel('post')
ax.set_ylabel('pre')
plt.show()
"""
Explanation: Create some scatter plots
End of explanation
"""
# descriptions
print(datapd.describe())
"""
Explanation: Describe the data
End of explanation
"""
|
egentry/lamat-2016-solutions | day2/randomness.ipynb | mit | poisson_samples = np.random.poisson(lam=1.5, size=20)
print(poisson_samples)
gaussian_samples = np.random.normal(loc=-5.0, scale=2.0, size=20)
print(gaussian_samples)
"""
Explanation: Activity description
See: https://docs.google.com/document/d/1COdCXs4K6kAXLcVvYxG3fqS53l2gzbkDvbbTmm8ZF1U/edit?usp=sharing
Example generation of random samples
End of explanation
"""
def add_samples(dictionary_of_samples, key, samples):
""" `samples` must be a list! """
try:
dictionary_of_samples[key] += samples
except KeyError:
# if entry doesn't exist, create a new one
dictionary_of_samples[key] = samples
"""
Explanation: Build dictionary to hold samples
End of explanation
"""
test_dictionary_of_samples = {}
key = "test"
samples = [1,2,3]
add_samples(test_dictionary_of_samples, key, samples)
add_samples(test_dictionary_of_samples, key, samples)
if test_dictionary_of_samples[key] == [1,2,3,1,2,3]:
print("OK")
else:
print("Error: add_samples isn't behaving right")
"""
Explanation: test add_samples
(there are better ways to do testing using libraries like pytest, nose or unittest)
End of explanation
"""
dictionary_of_samples = {}
N_samples = 10
key = ("gaussian", -5.0, 2.0)
gaussian_samples = list(np.random.normal(loc=key[1], scale=key[2], size=N_samples))
add_samples(dictionary_of_samples, key, gaussian_samples)
key = ("gaussian", 10, 1.0)
gaussian_samples = list(np.random.normal(loc=key[1], scale=key[2], size=N_samples))
add_samples(dictionary_of_samples, key, gaussian_samples)
key = ("poisson", 1.5)
poisson_samples = list(np.random.poisson(lam=key[1], size=N_samples))
add_samples(dictionary_of_samples, key, poisson_samples)
key = ("gaussian", 2.0)
poisson_samples = list(np.random.poisson(lam=key[1], size=N_samples))
add_samples(dictionary_of_samples, key, poisson_samples)
dictionary_of_samples
"""
Explanation: Activity solution
End of explanation
"""
from astroML import datasets
data = datasets.fetch_sdss_corrected_spectra()
# columns available within the data structure
data.keys()
def separate_redshifts_of_galaxies(data):
"""
Filters the SDSS data, into two lists of redshifts:
one for star formation dominated galaxies and one for AGN-dominated galaxies
Parameters
----------
data : npz file
Must be the data structure returned from fetch_sdss_corrected_spectra()
Returns
-------
star_formation_dominated_redshifts : list
agn_dominated_redshifts : list
"""
star_formation_dominated = (data["lineindex_cln"] == 4)
agn_dominated = (data["lineindex_cln"] == 5)
star_formation_dominated_redshifts = list(data["z"][star_formation_dominated] )
agn_dominated_redshifts = list(data["z"][agn_dominated] )
return star_formation_dominated_redshifts, agn_dominated_redshifts
star_formation_dominated_redshifts, agn_dominated_redshifts = separate_redshifts_of_galaxies(data)
plt.hist(star_formation_dominated_redshifts,
         density=True, label="Star Forming")  # 'normed' was removed in newer matplotlib
plt.hist(agn_dominated_redshifts,
         density=True, label="AGN")
plt.title("Galaxy redshifts by classification")
plt.xlabel("Redshift")
plt.ylabel("Counts (Normalized)")
plt.legend(loc="best")
"""
Explanation: SDSS extension
End of explanation
"""
|
miklevin/pipulate | examples/LESSON08_Selecting-by-Label-with-loc.ipynb | mit | import pandas as pd
pd.set_option('display.max_columns', 500)
def a1_notation(n):
string = ""
while n > 0:
n, remainder = divmod(n - 1, 26)
string = chr(65 + remainder) + string
return string
# First we create a 30 x 30 DataFrame with both row and column labels.
alist = list(range(1, 31))
A1_list = [a1_notation(x) for x in alist]
A1_list_plus = ["%s-label" % a1_notation(x) for x in alist]
df = pd.DataFrame([alist for aline in alist], columns=A1_list, index=A1_list_plus)
df
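A quick standalone sanity check of the A1-style column naming (the helper is restated here so the snippet runs on its own):

```python
def a1_notation(n):
    """Convert a 1-based column number to spreadsheet letters (A, B, ..., AA, AB, ...)."""
    string = ""
    while n > 0:
        n, remainder = divmod(n - 1, 26)
        string = chr(65 + remainder) + string
    return string

for n, expected in [(1, 'A'), (26, 'Z'), (27, 'AA'), (28, 'AB'), (52, 'AZ'), (53, 'BA')]:
    assert a1_notation(n) == expected
print("a1_notation OK")
```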
# Using a single row-label with .loc() returns the row as a Pandas Series.
df.loc['A-label']
"""
Explanation: Pandas DataFrame.loc()
df.loc is mostly about the df.loc['row'] but
Colon-comma makes your df.loc[:, 'column'] show.
So, the API for .loc() is mostly just this:
df.loc[row-label, col-label]
So for the very common simple task of selecting columns by label, you need to know some special Pythonic magic:
df.loc[:, col-label]
This colon is Python short-hand for a slice from beginning to end... or in other words "all rows". This is the single most confusing part of the Pandas API, which is becoming more and more important because the old ways of selecting rows are deprecated.
End of explanation
"""
# Using a row-label and a column-label in .loc() returns the contents of the cell.
df.loc['A-label', 'AD']
# Using a single row-label INSIDE A LIST in .loc() returns that row as a DataFrame.
df.loc[['A-label']]
# Using a list of row-labels in .loc() returns those specific rows as a DataFrame.
df.loc[['A-label', 'C-label', 'E-label', 'G-label', 'I-label']]
# Using row-labels with slice notation in .loc() returns that range of rows as a DataFrame.
df.loc['A-label':'G-label']
# Dropping the first row-label in slice notation in .loc() indicates starting at top of DataFrame.
df.loc[:'G-label']
# Dropping the last row-label in slice notation in .loc() indicates ending at bottom of DataFrame.
df.loc['W-label':]
# Dropping both row-labels in slice notation in .loc() selects the entire DataFrame (top to bottom).
df.loc[:]
# As with iloc, there is a comma in the interface which separates row arguments from column arguments.
df.loc[:, ]
"""
Explanation: Let's Talk about Rows
End of explanation
"""
# If we use just a single label-name after the comma (like using an integer in iloc) we get a Pandas Series.
df.loc[:, 'A']
# Putting a label-name in a list after the comma returns a DataFrame.
df.loc[:, ['A']]
# This is how we perform the very common task of selecting specific columns by label.
df.loc[:, ['P', 'A', 'N', 'D', 'A', 'S']]
# Label-based slice definitions can be used in place of lists.
df.loc[:, 'P':'U']
# After the comma you can leave off the ending slice value for "to the end".
df.loc[:, 'P':]
# After the comma you can leave off the beginning slice value for "from beginning".
df.loc[:, :'N']
"""
Explanation: Let's Talk about Columns
End of explanation
"""
# This is how we get the intersection of label-based slice definitions for rows and columns.
df.loc[:'N-label', :'N']
# We can use lists of labels to define intersections. If each list has one item, we get a 1-cell DataFrame
df.loc[['N-label'], ['N']]
# If you're going for a single value, there's not much sense in asking for a DataFrame to be returned.
df.loc['N-label', 'N']
# We can use references (variable names) to make list-intersection requests more readable.
rows = ['P-label', 'A-label', 'N-label', 'D-label', 'A-label', 'S-label']
cols = ['P', 'A', 'N', 'D', 'A', 'S']
df.loc[rows, cols]
# We can use list comprehension and the enumerate and modulus functions to list every other column.
[(i, x)[1] for i, x in enumerate(list(df.columns)) if i%2]
# This is how we select every other column by label name.
df.loc[:, [(i, x)[1] for i, x in enumerate(list(df.columns)) if i%2]]
# Addressing nonexistent columns generates a warning and yields columns full of NaN (newer pandas raises a KeyError instead)
df.loc[:,['foo', 'B', 'A', 'R']]
"""
Explanation: Rows & Columns
End of explanation
"""
|
theJollySin/python_for_scientists | classes/12_matplotlib/1_line_plots.ipynb | gpl-3.0 | import numpy
from matplotlib import pyplot
%matplotlib inline
### generate some random data
xdata = numpy.arange(25)
ydata = numpy.random.randn(25)
### initialize the "figure" and "axes" objects
fig, ax = pyplot.subplots()
line_plot = ax.plot(xdata, ydata)
"""
Explanation: Line Plots
Perhaps the most well-known type of plot; it is so generic that in pyplot it is produced by the function simply named plot.
End of explanation
"""
### initializing figure
fig, ax = pyplot.subplots()
### thickness of line
line_plot = ax.plot(xdata, ydata, lw=1)
"""
Explanation: Linewidth
End of explanation
"""
### initializing figure
fig, ax = pyplot.subplots()
### adding dashes of various kinds to line
line_plot = ax.plot(xdata, ydata, ls='-')
#line_plot = ax.plot(xdata, ydata, ls='--')
#line_plot = ax.plot(xdata, ydata, ls=':')
#line_plot = ax.plot(xdata, ydata, ls='-.')
#line_plot = ax.plot(xdata, ydata, dashes=[15,3,8,10])
"""
Explanation: Dashes
End of explanation
"""
### initializing figure
fig, ax = pyplot.subplots()
line_plot = ax.plot(xdata, ydata, color='r')
#line_plot = ax.plot(xdata, ydata, color='purple')
#line_plot = ax.plot(xdata, ydata, color=(0.2, 0.8, 0.2))
#line_plot = ax.plot(xdata, ydata, color='#cccccc')
"""
Explanation: Colors
Python has a variety of ways to choose plotting colors, and using them is pretty straightforward.
End of explanation
"""
|
liganega/Gongsu-DataSci | previous/y2017/W09-numpy-averages/.ipynb_checkpoints/GongSu21_Statistics_Averages-checkpoint.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
from datetime import datetime as dt
from scipy import stats
"""
Explanation: Source note: the material covered here was created with reference to the site below.
https://github.com/rouseguy/intro2stats
Guidance
Treat today's material as an introduction to the pandas module.
If you know how to analyze data in an Excel spreadsheet and compute averages, the content below will not be hard to follow. In other words, basic knowledge of numpy arrays plus basic Excel knowledge is enough to understand the essentials.
If you want a more detailed explanation, it helps to read the site below in advance (up through section 5.2 is sufficient).
That said, we recommend first skimming the content below while comparing it with Excel's features.
http://sinpong.tistory.com/category/Python%20for%20data%20analysis
Computing Averages
Today's main example
Analyze wholesale price data for cannabis traded in the United States and compute the average of the traded wholesale prices.
Mean
Median
Mode
For a more detailed discussion of averages, see the attached lecture notes: GongSu21-Averages.pdf
Main modules used
Below are the modules used as a baseline in statistical analysis.
pandas: a module dedicated to statistical analysis
Built on the numpy module and specialized for statistical analysis.
Supports features that work much like Microsoft Excel
datetime: a module that helps display dates and times appropriately
scipy: a module that supports numerical computation, engineering mathematics, and more
Introduction to Pandas
What is pandas?
A Python module that provides fast, easy data-analysis tools
It builds on numpy.
pandas features
Supports various operations such as sorting data
Powerful indexing and slicing
Time series support
Handling of missing data
Relational operations like those of a SQL database
Note: the features of the pandas module are covered in more detail next time.
For now, it is enough to get a feel for how to use the pandas module.
End of explanation
"""
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
"""
Explanation: Loading and processing the data
The data used today are as follows.
Wholesale cannabis prices and sale dates for the 51 US states: Weed_Price.csv
The figure below shows part of the Weed_Price.csv file, which contains the per-state sales data, as it appears when opened in Excel.
The actual data contain 22,899 rows; the figure below shows only 5 of them.
* Note: line 1 holds the table's column names.
* Column names: State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png", width=600>
</td>
</tr>
</table>
</p>
Loading a csv file
Use the read_csv function from the pandas module
The return value of read_csv is a special type called DataFrame
Think of it as a spreadsheet shaped like the Excel figure above.
Load the csv file mentioned above using pandas' read_csv function.
Note: when loading Weed_Price.csv, a keyword argument named parse_dates is used.
* parse_dates keyword argument: lets read_csv parse dates in various ways when reading.
* Passing [-1] here tells read_csv to parse the last column as dates.
* As the Excel figure above shows, the date notation in the last column needs no further modification.
End of explanation
"""
type(prices_pd)
"""
Explanation: The return value of read_csv has the DataFrame type.
End of explanation
"""
prices_pd.head()
"""
Explanation: The DataFrame type
A more detailed explanation will be added next time. For now, just note that you can consult the site below.
(up through section 5.2 is sufficient)
http://sinpong.tistory.com/category/Python%20for%20data%20analysis
Comparing the DataFrame type with an Excel spreadsheet
Checking the first five rows of the loaded Weed_Price.csv file confirms that they match what we saw in the Excel figure above.
Only the row and column labels differ slightly.
* In Excel the column labels are A, B, C, ..., H, and the source file's column names are pushed down to line 1.
* In Excel the row labels are 1, 2, 3, ....
But read_csv loads the file a bit differently.
* The column labels are taken directly from the source file's column names.
* The row labels are 0, 1, 2, ....
To view the first few rows of a data file, use the head method of the DataFrame type.
With no argument, it shows the first 5 rows.
End of explanation
"""
prices_pd.head(10)
"""
Explanation: If you pass an argument, it shows as many rows as requested.
End of explanation
"""
prices_pd.tail()
"""
Explanation: When a file contains a very large number of rows and you want to inspect the end of it,
use the tail method. Its usage is identical to the head method.
The command below confirms that the Weed_Price.csv file stores 22,899 rows.
End of explanation
"""
prices_pd.dtypes
"""
Explanation: Checking for missing values
The output above shows the symbol NaN in the LowQ column. NaN stands for Not a Number, meaning the value never existed or is missing.
The DataFrame dtypes attribute
The dtypes attribute of the DataFrame type shows the dtype used by each column.
It shows the per-column dtypes of the DataFrame stored in the prices_pd variable, which holds Weed_Price.csv.
Notes:
* The dtype attribute of a numpy array holds a single dtype.
* A column can only hold values of a single dtype.
That is, each column corresponds to a numpy array.
* The object dtype used for the State column means a pointer to where each string is stored.
* Because string lengths cannot be bounded in advance, the strings are stored elsewhere and pointers refer to their locations;
when needed, the pointer information is used to retrieve the stored string.
* The "dtype: object" shown on the last line simply means a dtype for such heterogeneous data.
End of explanation
"""
prices_pd.sort_values(['State', 'date'], inplace=True)
"""
Explanation: Sorting and filling missing values
Sorting
Sort the data by state and then by date.
End of explanation
"""
prices_pd.fillna(method='ffill', inplace=True)
"""
Explanation: Filling missing values
To compute averages, there must be no missing (NaN) data.
Here we fill each gap using the value from the previous row (method='ffill').
Note: we sorted first so that, where possible, a missing value is filled with a price
traded in the same State at a nearby date.
End of explanation
"""
prices_pd.head()
"""
Explanation: As shown below, the beginning of the sorted data contains only Alabama's rows, ordered by date.
End of explanation
"""
prices_pd.tail()
"""
Explanation: As shown below, the end of the sorted data contains only Wyoming's rows, ordered by date.
No missing values remain.
End of explanation
"""
california_pd = prices_pd[prices_pd.State == "California"].copy(True)
"""
Explanation: Analyzing the data: averages
Compute the average wholesale cannabis price for the state of California.
Mean
Mean = the sum of all values divided by the number of values
$X$: a variable representing the values contained in the data
$n$: the number of values contained in the data
$\Sigma\, X$: the sum of all values contained in the data
$$\text{Mean}(\mu) = \frac{\Sigma\, X}{n}$$
First, extract only California's rows using a boolean mask index.
End of explanation
"""
california_pd.head(20)
"""
Explanation: Inspect the first 20 rows traded in California.
End of explanation
"""
ca_sum = california_pd['HighQ'].sum()
ca_sum
"""
Explanation: Compute the sum of the values in the HighQ column.
Note: remember the use of the sum() method.
End of explanation
"""
ca_count = california_pd['HighQ'].count()
ca_count
"""
Explanation: Check the number of values in the HighQ column.
Note: remember the use of the count() method.
End of explanation
"""
# Mean wholesale price of high-quality (HighQ) cannabis traded in California
ca_mean = ca_sum / ca_count
ca_mean
"""
Explanation: Now we can compute the mean of the HighQ cannabis prices traded in California.
End of explanation
"""
ca_count
"""
Explanation: Median
Compute the median of the HighQ cannabis prices traded in California.
Median = the value located exactly in the middle when the data are sorted by size
When the data size n is odd: the value at position $\frac{n+1}{2}$
When the data size n is even: the mean of the values at positions $\frac{n}{2}$ and $\frac{n}{2}+1$
Here the data size is 449, which is odd.
End of explanation
"""
ca_highq_pd = california_pd.sort_values(['HighQ'])
ca_highq_pd.head()
"""
Explanation: The median is therefore the value at position $\frac{\text{ca_count}-1}{2}$.
Note: indexing starts from 0, which pulls the median position forward by one.
End of explanation
"""
# Median wholesale price of HighQ cannabis traded in California
ca_median = ca_highq_pd.HighQ.iloc[int((ca_count-1)/ 2)]
ca_median
"""
Explanation: Use the index-location method iloc.
Note: the iloc method works with positional index numbers.
The index numbers shown in the table above are those assigned when Weed_Price.csv was first loaded.
In ca_highq_pd they remain only for reference; the index passed to iloc starts counting again from 0.
Therefore, using the original reference index, as in the code below, cannot give the correct answer.
End of explanation
"""
# The most frequently traded wholesale price of HighQ cannabis in California
ca_mode = ca_highq_pd.HighQ.value_counts().index[0]
ca_mode
"""
Explanation: Mode
Compute the mode of the HighQ cannabis prices traded in California.
Mode = the most frequently occurring value
Note: remember the use of the value_counts() method.
End of explanation
"""
california_pd.mean()
california_pd.mean().HighQ
california_pd.median()
california_pd.mode()
california_pd.mode().HighQ
california_pd.HighQ.mean()
california_pd.HighQ.median()
california_pd.HighQ.mode()
"""
Explanation: Exercises
Exercise
The mean, median, and mode computed so far are already implemented as methods of the DataFrame and Series types.
Run the code below and check what each line means.
End of explanation
"""
sum = 0
count = 0
for index in np.arange(len(california_pd)):
if california_pd.iloc[index]['date'].year == 2014:
sum += california_pd.iloc[index]['HighQ']
count += 1
sum/count
"""
Explanation: Exercise
Compute the mean wholesale price of HighQ cannabis traded in California in 2013, 2014, and 2015, respectively.
Hint: california_pd.iloc[0]['date'].year
Sample solution 1
The mean price traded in 2014 can be computed as below.
sum variable: holds the total of the wholesale prices traded in 2014.
count variable: holds the number of trades in 2014.
End of explanation
"""
years = np.arange(2013, 2016)
year_starts = [0]
for yr in years:
for index in np.arange(year_starts[-1], len(california_pd)):
if california_pd.iloc[index]['date'].year == yr:
continue
else:
year_starts.append(index)
break
year_starts
"""
Explanation: Sample answer 2
Alternatively, you can obtain index information as shown below and use slicing.
Computing the yearly averages with slicing then follows the same approach as in the main text.
End of explanation
"""
california_pd.iloc[4]
california_pd.iloc[5]
california_pd.iloc[368]
california_pd.iloc[369]
"""
Explanation: The numbers stored in year_starts mean the following:
2013 trades start at row 0.
2014 trades start at row 5.
2015 trades start at row 369.
End of explanation
"""
|
gfeiden/Notebook | Daily/20150821_mass_track_compositions.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
cd /Users/grefe950/evolve/dmestar/trk/
"""
Explanation: Diagnostic Checks on Mass Tracks
End of explanation
"""
def loadTrack(filename):
return np.genfromtxt(filename, usecols=(0, 1, 2, 3, 4, 5))
"""
Explanation: Quick mass track loader
End of explanation
"""
masses = [0.1, 0.5, 1.0, 1.5]
# directory extensions
gs98_dir = 'gs98/p000/a0/amlt1884'
gas07_dir = 'gas07/p000/a0/amlt2202'
agss09_dir = 'agss09/p000/a0/amlt1991'
# file name extensions
gs98_ext = '_GS98_p000_p0_y28_mlt1.884.trk'
gas07_ext = '_GAS07_p000_p0_y26_mlt2.202.trk'
agss09_ext = '_AGSS09_p000_p0_y27_mlt1.991.trk'
colors = {2:'#0094b2', 1:'#B22222', 0:'#56b4ea', 3:'#555555'}
fig, ax = plt.subplots(2, 2, figsize=(12., 12.))
for i in range(len(masses)):
mass = masses[i]
row = i // 2
col = i%2
# set axis properties
axis = ax[row, col]
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
axis.set_xlabel('effective temperature (K)', fontsize=18.)
axis.set_ylabel('log(L/Lo)', fontsize=18.)
axis.invert_xaxis()
# load mass tracks
gs98 = loadTrack('{:s}/m{:04.0f}{:s}'.format(gs98_dir, mass*1000., gs98_ext))
gas07 = loadTrack('{:s}/m{:04.0f}{:s}'.format(gas07_dir, mass*1000., gas07_ext))
agss09 = loadTrack('{:s}/m{:04.0f}{:s}'.format(agss09_dir, mass*1000., agss09_ext))
axis.plot(10**gs98[:,1], gs98[:,3], lw=3, c=colors[3], label='GS98')
axis.plot(10**gas07[:,1], gas07[:,3], dashes=(2.0, 2.0), lw=3, c=colors[1], label='GAS07')
axis.plot(10**agss09[:,1], agss09[:,3], dashes=(20., 5.), lw=3, c=colors[2], label='AGSS09')
axis.legend(loc=2, fontsize=14.)
fig.tight_layout()
colors = {2:'#0094b2', 1:'#B22222', 0:'#56b4ea', 3:'#555555'}
fig, ax = plt.subplots(4, 1, figsize=(8., 16.))
for i in range(len(masses)):
mass = masses[i]
# set axis properties
axis = ax[i]
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
axis.set_ylabel('effective temperature (K)', fontsize=18.)
axis.set_xlabel('age (Gyr)', fontsize=18.)
# load mass tracks
gs98 = loadTrack('{:s}/m{:04.0f}{:s}'.format(gs98_dir, mass*1000., gs98_ext))
gas07 = loadTrack('{:s}/m{:04.0f}{:s}'.format(gas07_dir, mass*1000., gas07_ext))
agss09 = loadTrack('{:s}/m{:04.0f}{:s}'.format(agss09_dir, mass*1000., agss09_ext))
axis.semilogx(gs98[:,0]/1.0e9, 10**gs98[:,1], lw=3, c=colors[3], label='GS98')
axis.semilogx(gas07[:,0]/1.0e9, 10**gas07[:,1], dashes=(2.0, 2.0), lw=3, c=colors[1], label='GAS07')
axis.semilogx(agss09[:,0]/1.0e9, 10**agss09[:,1], dashes=(20., 5.), lw=3, c=colors[2], label='AGSS09')
axis.legend(loc=2, fontsize=14.)
fig.tight_layout()
"""
Explanation: Preliminary definitions, including masses and file extensions.
End of explanation
"""
cd /usr/local/dmestar/data/atm/
gs98_atm = np.genfromtxt('./phx/GS98/t010/Zp0d0.ap0d0_t010.dat')
gs98K_atm = np.genfromtxt('./kur/GS98/t010/kurucz_z+0.00_a+0.00_t02_tau010.sbc')
gas07_atm = np.genfromtxt('./mrc/GAS07/t010/marcs_z+0.00_a+0.00_m1.0_t02_tau010.sbc')
agss09_atm = np.genfromtxt('./phx/AGSS09/t010/Zp0d0.ap0d0_t010.dat')
fig, ax = plt.subplots(3, 1, figsize=(8., 12.), sharex=True)
for i in range(len(ax)):
col = -3*(i + 1) + i%3
axis = ax[i]
axis.grid(True)
axis.set_xlabel('effective temperature (K)', fontsize=18., family='serif')
axis.set_ylabel('temperature at $\\tau = 10$ (K)', fontsize=18., family='serif')
axis.set_xlim(2500., 8000.)
axis.set_ylim(3000., 12000.)
axis.tick_params(which='major', axis='both', length=15., labelsize=14.)
axis.plot(gs98_atm[:,0], gs98_atm[:, col], lw=7, c=colors[3], alpha=0.5, label='GS98')
#axis.plot(gs98K_atm[:,0], gs98K_atm[:, col], 'o', markersize=10., c=colors[0], label='GS98K')
axis.plot(gas07_atm[:,0], gas07_atm[:, col], lw=4, c=colors[1], alpha=0.8, label='GAS07')
axis.plot(agss09_atm[:,0], agss09_atm[:, col], lw=4, c=colors[2], alpha=0.9, label='AGSS09')
axis.legend(loc=2, fontsize=14.)
fig.tight_layout()
"""
Explanation: It is curious that the GAS07 and AGSS09 tracks show opposite relative effects with respect to GS98. We should look at the atmosphere structures at depth to determine whether any intrinsic differences in the atmospheres are causing this opposite behavior.
Atmosphere Thermal Structure
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/tensorboard/scalars_and_keras.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# Load the TensorBoard notebook extension
%load_ext tensorboard
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
"""
Explanation: TensorBoard Scalars: Logging training metrics in Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tensorboard/scalars_and_keras"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on tensorflow.google.cn</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/scalars_and_keras.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/scalars_and_keras.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tensorboard/scalars_and_keras.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Machine learning invariably involves understanding key metrics such as loss, and how they change as training progresses. These metrics can, for example, help you understand whether your model is overfitting, or whether it is unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.
TensorBoard's Scalars Dashboard allows you to visualize these metrics easily using a simple API. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and the TensorFlow Summary API to visualize default and custom scalars.
Setup
End of explanation
"""
data_size = 1000
# 80% of the data is used for training
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create random numbers in the range (-1, 1) as input
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split the data into training and test sets
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:]
"""
Explanation: Set up data for a simple regression
You're now going to use Keras to compute a regression, i.e., find the best line of fit for a dataset. (While using a neural network and gradient descent is overkill for this kind of problem, it does make for a very easy-to-understand example.)
You will use TensorBoard to observe how training and test loss change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.
First, generate 1000 data points roughly along the line y = 0.5x + 2. Split these data points into training and test sets. Your hope is that the neural net learns this relationship between x and y.
End of explanation
"""
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(lr=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss']))
"""
Explanation: Training the model and logging loss
You're now ready to define, train, and evaluate your model.
To log the loss scalar as you train, you'll do the following:
Create the Keras TensorBoard callback
Specify a log directory
Pass the TensorBoard callback to Keras' Model.fit().
TensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is logs/scalars, suffixed by a timestamped subdirectory. The timestamped subdirectory enables you to easily identify and select training runs as you use TensorBoard and iterate on your model.
End of explanation
"""
%tensorboard --logdir logs/scalars
"""
Explanation: Examining loss using TensorBoard
Now, start TensorBoard, specifying the root log directory you used above.
Wait a few seconds for TensorBoard's UI to spin up.
End of explanation
"""
print(model.predict([60, 25, 2]))
# The ideal output would be:
# [[32.0]
# [14.5]
# [ 3.0]]
"""
Explanation: <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_loss.png?raw=1"/>
You may see TensorBoard display the message "No dashboards are active for the current data set". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data. TensorBoard will periodically refresh and show you your scalar metrics. If you're impatient, you can click the refresh arrow at the top right.
As you watch the training progress, note how both training and validation loss rapidly decrease, then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.
Hover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of the graph to view more detail.
Notice the "Runs" selector on the left. A "Run" represents a set of logs from a round of training, in this case the result of Model.fit(). Developers typically accumulate many runs as they experiment and develop their model over time.
Use the "Runs" selector to choose specific runs, or choose from only training or validation. Comparing runs will help you evaluate which version of your code is solving your problem better.
TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then remained stable. That means that the model's metrics are likely very good! Now see how the model actually behaves in real life.
Given the input data (60, 25, 2), the line y = 0.5x + 2 should yield (32, 14.5, 3). Does the model agree?
End of explanation
"""
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
"""
Returns a custom learning rate that decreases as epochs progress.
"""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
"""
Explanation: Not bad!
Logging custom scalars
What if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.
Retrain the regression model and log a custom learning rate. Here's how:
1. Create a file writer, using tf.summary.create_file_writer().
2. Define a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.
3. Inside the learning rate function, use tf.summary.scalar() to log the custom learning rate.
4. Pass the LearningRateScheduler callback to Model.fit().
In general, to log custom scalars, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory, and is implicitly used when you use tf.summary.scalar().
End of explanation
"""
%tensorboard --logdir logs/scalars
"""
Explanation: Examining TensorBoard
End of explanation
"""
print(model.predict([60, 25, 2]))
# The ideal output would be:
# [[32.0]
# [14.5]
# [ 3.0]]
"""
Explanation: <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_custom_lr.png?raw=1"/>
Using the "Runs" selector on the left, notice that you have a <timestamp>/metrics run. Selecting this run displays a "learning rate" graph that allows you to verify the progression of the learning rate during this run.
You can also compare this run's training and validation loss curves against your earlier runs.
What does the model output now?
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/automaton.is_equivalent.ipynb | gpl-3.0 | import vcsn
"""
Explanation: automaton.is_equivalent(aut)
Whether this automaton is equivalent to aut, i.e., whether they accept the same words with the same weights.
Preconditions:
- The join of the weightsets is either $\mathbb{B}$, $\mathbb{Z}$, or a field ($\mathbb{F}_2$, $\mathbb{Q}$, $\mathbb{Q}_\text{mp}$, $\mathbb{R}$).
Algorithm:
- for Boolean automata, check whether is_useless(difference(a1.realtime(), a2.realtime())) and conversely.
- otherwise, check whether is_empty(reduce(union(a1.realtime(), -1 * a2.realtime()))).
See also:
- automaton.is_isomorphic
Examples
End of explanation
"""
B = vcsn.context('lal_char(abc), b')
a1 = B.expression('a').standard()
a2 = B.expression('b').standard()
a1.is_equivalent(a2)
"""
Explanation: Automata with different languages are not equivalent.
End of explanation
"""
Z = vcsn.context('lal_char, z')
a1 = Z.expression('<42>a').standard()
a2 = Z.expression('<51>a').standard()
a1.is_equivalent(a2)
"""
Explanation: Automata that compute different weights are not equivalent.
End of explanation
"""
a = vcsn.context('law_char(abcxy), q').expression('<2>(ab)<3>(c)<5/2>').standard(); a
b = vcsn.context('lal_char(abcXY), z').expression('<5>ab<3>c').standard(); b
a.is_equivalent(b)
"""
Explanation: The types of the automata need not be equal for the automata to be equivalent. In the following example the automaton types are
$$\begin{align}
\{a,b,c,x,y\}^* & \rightarrow \mathbb{Q}\\
\{a,b,c,X,Y\} & \rightarrow \mathbb{Z}\\
\end{align}$$
End of explanation
"""
r = B.expression('[abc]*')
r
std = r.standard()
std
dt = r.derived_term()
dt
std.is_equivalent(dt)
"""
Explanation: Boolean automata
Of course the different means to compute automata from rational expressions (thompson, standard, derived_term...) result in different, but equivalent, automata.
End of explanation
"""
th = r.thompson()
th
th.is_equivalent(std)
"""
Explanation: Labelsets need not to be free. For instance, one can compare the Thompson automaton (which features spontaneous transitions) with the standard automaton:
End of explanation
"""
th.proper(prune = False)
th.proper(prune = False).is_equivalent(std)
"""
Explanation: Of course useless states "do not count" in checking equivalence.
End of explanation
"""
a = Z.expression('<2>ab+<3>ac').automaton()
a
d = a.determinize()
d
d.is_equivalent(a)
"""
Explanation: Weighted automata
In the case of weighted automata, the algorithms checks whether $(a_1 + -1 \times a_2).\mathtt{reduce}().\mathtt{is_empty}()$, so the preconditions of automaton.reduce apply.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-3/cmip6/models/sandbox-2/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, some models parameterise the ice thickness distribution (ITD) rather than representing it explicitly: there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation (rheology) formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What methodology is used for heat diffusion through snow in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/98d9662291626be9c938eee7a8fcc9bd/sensor_noise_level.ipynb | bsd-3-clause | # Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
import mne
data_path = mne.datasets.sample.data_path()
raw_erm = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'ernoise_raw.fif'), preload=True)
"""
Explanation: Show noise levels from empty room data
This shows how to use :meth:mne.io.Raw.plot_psd to examine noise levels
of systems. See :footcite:KhanCohen2013 for an example.
End of explanation
"""
raw_erm.plot_psd(tmax=10., average=True, spatial_colors=False,
dB=False, xscale='log')
"""
Explanation: We can plot the absolute noise levels:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cnrm-cerfacs/cmip6/models/cnrm-esm2-1-hr/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1-hr', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: CNRM-ESM2-1-HR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: conserved property, variable1, variable2, variable3.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, some models parameterise the ice thickness distribution (ITD) rather than representing it explicitly, assuming a distribution and computing fluxes accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation formulation (rheology)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which the basal ocean heat flux is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
bryanwweber/PyKED | docs/rcm-example.ipynb | bsd-3-clause | import cantera as ct
import numpy as np
from pyked import ChemKED
"""
Explanation: RCM modeling with varying reactor volume
This example is available as an ipynb (Jupyter Notebook) file in the main GitHub repository at https://github.com/pr-omethe-us/PyKED/blob/master/docs/rcm-example.ipynb
The ChemKED file that will be used in this example can be found in the
tests directory of the PyKED
repository at https://github.com/pr-omethe-us/PyKED/blob/master/pyked/tests/testfile_rcm.yaml.
Examining that file, we find the first section specifies the information about
the ChemKED file itself:
yaml
file-authors:
- name: Kyle E Niemeyer
ORCID: 0000-0003-4425-7097
file-version: 0
chemked-version: 0.4.0
Then, we find the information regarding the article in the literature from which
this data was taken. In this case, the dataset comes from the work of
Mittal et al.:
yaml
reference:
doi: 10.1002/kin.20180
authors:
- name: Gaurav Mittal
- name: Chih-Jen Sung
ORCID: 0000-0003-2046-8076
- name: Richard A Yetter
journal: International Journal of Chemical Kinetics
year: 2006
volume: 38
pages: 516-529
detail: Fig. 6, open circle
experiment-type: ignition delay
apparatus:
kind: rapid compression machine
institution: Case Western Reserve University
facility: CWRU RCM
Finally, this file contains just a single datapoint, which describes the experimental
ignition delay, initial mixture composition, initial temperature, initial pressure,
compression time, ignition type, and volume history that specifies
how the volume of the reactor varies with time, for simulating the compression
stroke and post-compression processes:
yaml
datapoints:
- temperature:
- 297.4 kelvin
ignition-delay:
- 1.0 ms
pressure:
- 958.0 torr
composition:
kind: mole fraction
species:
- species-name: H2
InChI: 1S/H2/h1H
amount:
- 0.12500
- species-name: O2
InChI: 1S/O2/c1-2
amount:
- 0.06250
- species-name: N2
InChI: 1S/N2/c1-2
amount:
- 0.18125
- species-name: Ar
InChI: 1S/Ar
amount:
- 0.63125
ignition-type:
target: pressure
type: d/dt max
rcm-data:
compression-time:
- 38.0 ms
time-histories:
- type: volume
time:
units: s
column: 0
volume:
units: cm3
column: 1
values:
- [0.00E+000, 5.47669375000E+002]
- [1.00E-003, 5.46608789894E+002]
- [2.00E-003, 5.43427034574E+002]
...
The values for the volume history in the time-histories key are truncated here to save space. One application of the
data stored in this file is to perform a simulation using Cantera to
calculate the ignition delay, including the facility-dependent effects represented in the volume
trace. All information required to perform this simulation is present in the ChemKED file, with the
exception of a chemical kinetic model for H<sub>2</sub>/CO combustion.
In Python, additional functionality can be imported into a script or session by the import
keyword. Cantera, NumPy, and PyKED must be imported into the session so that we can work with the
code. In the case of Cantera and NumPy, we will use many functions from these libraries, so we
assign them abbreviations (ct and np, respectively) for convenience. From PyKED, we
will only be using the ChemKED class, so this is all that is imported:
End of explanation
"""
from urllib.request import urlopen
import yaml
rcm_link = 'https://raw.githubusercontent.com/pr-omethe-us/PyKED/master/pyked/tests/testfile_rcm.yaml'
with urlopen(rcm_link) as response:
testfile_rcm = yaml.safe_load(response.read())
ck = ChemKED(dict_input=testfile_rcm)
dp = ck.datapoints[0]
"""
Explanation: Next, we have to load the ChemKED file and retrieve the first element of the datapoints
list. Although this file only encodes a single experiment, the datapoints attribute will
always be a list (in this case, of length 1). The elements of the
datapoints list are instances of the DataPoint class, which we store in the variable
dp. To load the YAML file from the web, we also use the PyYAML package and the built-in urllib module, passing the loaded dictionary to the dict_input argument of ChemKED.
End of explanation
"""
T_initial = dp.temperature.to('K').magnitude
P_initial = dp.pressure.to('Pa').magnitude
X_initial = dp.get_cantera_mole_fraction()
"""
Explanation: The initial temperature, pressure, and mixture composition can be read from the
instance of the DataPoint class. PyKED uses instances of the Pint Quantity class to
store values with units, while Cantera expects a floating-point value in SI
units as input. Therefore, we use the built-in capabilities of Pint to convert
the units from those specified in the ChemKED file to SI units, and we use the magnitude
attribute of the Quantity class to take only the numerical part. We also retrieve the
initial mixture mole fractions in a format Cantera will understand:
End of explanation
"""
gas = ct.Solution('gri30.xml')
gas.TPX = T_initial, P_initial, X_initial
"""
Explanation: With these properties defined, we have to create the objects in Cantera that represent the physical
state of the system to be studied. In Cantera, the Solution class stores the thermodynamic,
kinetic, and transport data from an input file in the CTI format. After the Solution object
is created, we can set the initial temperature, pressure, and mole fractions using the TPX
attribute of the Solution class. In this example, we will use GRI-3.0 as the chemical kinetic mechanism for H<sub>2</sub>/CO combustion. GRI-3.0 is built into Cantera, so no other input files are needed.
End of explanation
"""
reac = ct.IdealGasReactor(gas)
env = ct.Reservoir(ct.Solution('air.xml'))
"""
Explanation: With the thermodynamic and kinetic data loaded and the initial conditions defined, we need to
install the Solution instance into an IdealGasReactor which implements the equations
for mass, energy, and species conservation. In addition, we create a Reservoir to represent
the environment external to the reaction chamber. The input file used for the environment,
air.xml, is also included with Cantera and represents an average composition of air.
End of explanation
"""
from cansen.profiles import VolumeProfile
exp_time = dp.volume_history.time.magnitude
exp_volume = dp.volume_history.volume.magnitude
keywords = {'vproTime': exp_time, 'vproVol': exp_volume}
ct.Wall(reac, env, velocity=VolumeProfile(keywords));
"""
Explanation: To apply the effect of the volume trace to the IdealGasReactor, a Wall must be
installed between the reactor and environment and assigned a velocity. The Wall allows the
environment to do work on the reactor (or vice versa) and change the reactor's thermodynamic state;
we use a Reservoir for the environment because in Cantera, Reservoirs always have a
constant thermodynamic state and composition. Using a Reservoir accelerates the solution
compared to using two IdealGasReactors, since the composition and state of the environment
are typically not necessary for the solution of autoignition problems. Although we do not show the
details here, a reference implementation of a class that computes the wall velocity given the volume
history of the reactor is available in CanSen, in the
cansen.profiles.VolumeProfile class, which we import here:
End of explanation
"""
netw = ct.ReactorNet([reac])
netw.set_max_time_step(np.min(np.diff(exp_time)))
"""
Explanation: Then, the IdealGasReactor is installed in a ReactorNet. The ReactorNet
implements the connection to the numerical solver (CVODES is
used in Cantera) to solve the energy and species equations. For this example, it is best practice
to set the maximum time step allowed in the solution to be the minimum time difference in the time array from the volume trace:
End of explanation
"""
time = []
temperature = []
pressure = []
volume = []
mass_fractions = []
"""
Explanation: To calculate the ignition delay, we will follow the definition specified in the ChemKED file for
this experiment, where the experimentalists used the maximum of the time derivative of the pressure
to define the ignition delay. To calculate this derivative, we need to store the state variables and the composition on each time step, so we initialize several Python lists to act as storage:
End of explanation
"""
while netw.time < 0.05:
    time.append(netw.time)
    temperature.append(reac.T)
    pressure.append(reac.thermo.P)
    volume.append(reac.volume)
    mass_fractions.append(reac.Y)
    netw.step()
"""
Explanation: Finally, the problem is integrated using the step method of the ReactorNet. The
step method takes one timestep forward on each call, with step size determined by the CVODES
solver (CVODES uses an adaptive time-stepping algorithm). On each step, we add the relevant variables
to their respective lists. The problem is integrated until a user-specified end time, in this case
50 ms, although in principle, the user could end the simulation on any condition
they choose:
End of explanation
"""
%matplotlib notebook
import matplotlib.pyplot as plt
plt.figure()
plt.plot(time, pressure)
plt.ylabel('Pressure [Pa]')
plt.xlabel('Time [s]');
"""
Explanation: At this point, the user would post-process the information in the pressure list to calculate
the derivative by whatever algorithm they choose. We will plot the pressure versus the time of the simulation using the Matplotlib library:
End of explanation
"""
plt.figure()
plt.plot(exp_time, exp_volume/exp_volume[0], label='Experimental volume', linestyle='--')
plt.plot(time, volume, label='Simulated volume')
plt.legend(loc='best')
plt.ylabel('Volume [m^3]')
plt.xlabel('Time [s]');
"""
Explanation: We can also plot the volume trace and compare to the values derived from the ChemKED file.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/05_artandscience/labs/a_handtuning.ipynb | apache-2.0 | import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
"""
Explanation: Hand tuning hyperparameters
Learning Objectives:
* Use the LinearRegressor class in TensorFlow to predict median housing price, at the granularity of city blocks, based on one input feature
* Evaluate the accuracy of a model's predictions using Root Mean Squared Error (RMSE)
* Improve the accuracy of a model by hand-tuning its hyperparameters
The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Using only one input feature -- the number of rooms -- predict house value.
Set Up
In this first cell, we'll load the necessary libraries.
End of explanation
"""
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
"""
Explanation: Next, we'll load our data set.
End of explanation
"""
df.head()
df.describe()
"""
Explanation: Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
End of explanation
"""
df['num_rooms'] = df['total_rooms'] / df['households']
df.describe()
# Split into train and eval
np.random.seed(seed=1) #makes split reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
"""
Explanation: In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). Can we use total_rooms as our input feature? What's going on with the values for that feature?
This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well.
End of explanation
"""
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
  estimator = tf.estimator.LinearRegressor(
    model_dir = output_dir,
    feature_columns = [tf.feature_column.numeric_column('num_rooms')])
  # Add rmse evaluation metric
  def rmse(labels, predictions):
    pred_values = tf.cast(predictions['predictions'], tf.float64)
    return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
  estimator = tf.contrib.estimator.add_metrics(estimator, rmse)
  train_spec = tf.estimator.TrainSpec(
    input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[['num_rooms']],
                                                   y = traindf['median_house_value'],
                                                   num_epochs = None,
                                                   shuffle = True),
    max_steps = num_train_steps)
  eval_spec = tf.estimator.EvalSpec(
    input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[['num_rooms']],
                                                   y = evaldf['median_house_value'],
                                                   num_epochs = 1,
                                                   shuffle = False),
    steps = None,
    start_delay_secs = 1, # start evaluating after N seconds
    throttle_secs = 10,   # evaluate every N seconds
  )
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
"""
Explanation: Build the first model
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use num_rooms as our input feature.
To train our model, we'll use the LinearRegressor estimator. The Estimator takes care of a lot of the plumbing, and exposes a convenient way to interact with data, training, and evaluation.
End of explanation
"""
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
  estimator = tf.estimator.LinearRegressor(
    model_dir = output_dir,
    feature_columns = [tf.feature_column.numeric_column('num_rooms')])
  # Add rmse evaluation metric
  def rmse(labels, predictions):
    pred_values = tf.cast(predictions['predictions'], tf.float64)
    return {'rmse': tf.metrics.root_mean_squared_error(labels * SCALE, pred_values * SCALE)}
  estimator = tf.contrib.estimator.add_metrics(estimator, rmse)
  train_spec = tf.estimator.TrainSpec(
    input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[['num_rooms']],
                                                   y = traindf['median_house_value'] / SCALE,
                                                   num_epochs = None,
                                                   shuffle = True),
    max_steps = num_train_steps)
  eval_spec = tf.estimator.EvalSpec(
    input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[['num_rooms']],
                                                   y = evaldf['median_house_value'] / SCALE,
                                                   num_epochs = 1,
                                                   shuffle = False),
    steps = None,
    start_delay_secs = 1, # start evaluating after N seconds
    throttle_secs = 10,   # evaluate every N seconds
  )
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
"""
Explanation: 1. Scale the output
Let's scale the target values so that the default parameters are more appropriate. Note that the RMSE here is now in 100000s so if you get RMSE=0.9, it really means RMSE=90000.
End of explanation
"""
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
  myopt = tf.train.FtrlOptimizer(learning_rate = 0.2) # example learning rate -- experiment with this
  estimator = tf.estimator.LinearRegressor(
    model_dir = output_dir,
    feature_columns = [tf.feature_column.numeric_column('num_rooms')],
    optimizer = myopt)
  # Add rmse evaluation metric
  def rmse(labels, predictions):
    pred_values = tf.cast(predictions['predictions'], tf.float64)
    return {'rmse': tf.metrics.root_mean_squared_error(labels * SCALE, pred_values * SCALE)}
  estimator = tf.contrib.estimator.add_metrics(estimator, rmse)
  train_spec = tf.estimator.TrainSpec(
    input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[['num_rooms']],
                                                   y = traindf['median_house_value'] / SCALE,
                                                   num_epochs = None,
                                                   batch_size = 512, # example batch size -- experiment with this
                                                   shuffle = True),
    max_steps = num_train_steps)
  eval_spec = tf.estimator.EvalSpec(
    input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[['num_rooms']],
                                                   y = evaldf['median_house_value'] / SCALE,
                                                   num_epochs = 1,
                                                   shuffle = False),
    steps = None,
    start_delay_secs = 1, # start evaluating after N seconds
    throttle_secs = 10,   # evaluate every N seconds
  )
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
"""
Explanation: 2. Change learning rate and batch size
Can you come up with better parameters? Note the default learning_rate is the smaller of 0.2 and 1/sqrt(num_features), and the default batch_size is 128. You can also change num_train_steps to train longer if necessary.
End of explanation
"""
|
hetaodie/hetaodie.github.io | assets/media/uda-ml/supervisedlearning/decision/17/Titanic Solutions-zh.ipynb | mit | # Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Pretty display for notebooks
%matplotlib inline
# Set a random seed
import random
random.seed(42)
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
"""
Explanation: Lab: Titanic Survival Exploration with Decision Trees
Getting Started
In the introductory project, you studied the Titanic survival data and were able to make predictions about passenger survival. In that project, you built a decision tree by hand that, at each stage, chose the feature most correlated with survival. Lucky for us, this is exactly how decision trees work! In this lab, we'll speed this process up considerably by implementing a decision tree in sklearn.
We'll start by loading the dataset and displaying some of its rows.
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
features_raw = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(features_raw.head())
"""
Explanation: Here are the various features each passenger has:
- **Survived**: Outcome of survival (0 = No; 1 = Yes)
- **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- **Name**: Name of passenger
- **Sex**: Sex of the passenger
- **Age**: Age of the passenger (some entries contain `NaN`)
- **SibSp**: Number of siblings and spouses of the passenger aboard
- **Parch**: Number of parents and children of the passenger aboard
- **Ticket**: Ticket number of the passenger
- **Fare**: Fare paid by the passenger
- **Cabin**: Cabin number of the passenger (some entries contain `NaN`)
- **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it in its own separate variable `outcomes`. We will use these outcomes as our prediction targets.
Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`.
End of explanation
"""
features = pd.get_dummies(features_raw)
"""
Explanation: The same sample of the Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `features_raw` (the passenger data) and `outcomes` (the survival outcomes) are now *paired*. That means for any passenger `features_raw.loc[i]`, they have the survival outcome `outcomes[i]`.
## Preprocessing the data
Now let's do some data preprocessing. First, we'll one-hot encode the features.
End of explanation
"""
features = features.fillna(0.0)
display(features.head())
"""
Explanation: And now we'll fill in any blanks with zeroes.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, outcomes, test_size=0.2, random_state=42)
# Import the classifier from sklearn
from sklearn.tree import DecisionTreeClassifier
# TODO: Define the classifier, and fit it to the data
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
"""
Explanation:
## (TODO) Training the model
Now we're ready to train a model in sklearn. First, let's split the data into training and testing sets. Then we'll train the model on the training set.
End of explanation
"""
# Making predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# Calculate the accuracy
from sklearn.metrics import accuracy_score
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)
"""
Explanation: Testing the model
Now let's see how our model does. Let's calculate the accuracy over both the training and the testing set.
End of explanation
"""
# Training the model
model = DecisionTreeClassifier(max_depth=6, min_samples_leaf=6, min_samples_split=10)
model.fit(X_train, y_train)
# Making predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# Calculating accuracies
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)
"""
Explanation: Exercise: Improving the model
So we got a high training accuracy but a lower testing accuracy. We may be overfitting a bit.
Now it's your turn to shine! Train a new model, and try specifying some parameters in order to improve the testing accuracy, such as:
max_depth
min_samples_leaf
min_samples_split
You can use your intuition, trial and error, or even better, Grid Search!
Challenge: Try to get to 85% accuracy on the testing set. If you'd like a hint, take a look at the solutions notebook that follows.
End of explanation
"""
|