repo_name | path | license | content
|---|---|---|---|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-3/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
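# Example with a hypothetical name and address, shown only to illustrate the call:
# DOC.set_author("Jane Doe", "jane.doe@example.org")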
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Dissolved Organic Matter
13. Tracers --> Particulates
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
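# Example of a valid call (illustrative only - use the choice that matches your model):
# DOC.set_value("NPZD")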
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
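# Example of a valid call (illustrative only - note the boolean is passed unquoted):
# DOC.set_value(True)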
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how upper trophic levels are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particulates
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
naveensr89/Scipy-explore | linear_reg.ipynb | gpl-2.0 | %matplotlib inline
%pylab inline
from __future__ import print_function
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import theano
import numpy as np
from theano import tensor as T
from numpy.linalg import inv
"""
Explanation: Sample code to test features of 'NumPy', 'Matplotlib' and 'Scipy'
Importing includes
End of explanation
"""
x = 2
print(x)
y = x**2
print(y)
"""
Explanation: Testing variable assignment and operations in python
End of explanation
"""
# Theano symbolic gradient example
B = T.scalar('E')
R = T.sqr(B)
A = T.grad(R,B)
Z = theano.function([B], A)
# Theano symbolic gradient example - Numeric
a = range(10)
da= range(10)
for idx,x in enumerate(a):
da[idx] = Z(x)
plt.plot(a,da)
plt.xlabel('x')
plt.ylabel('dx')
plt.title('Gradient of $f(x)=x^2$')
plt.show()
"""
Explanation: Extra: Testing Theano capabilities in handling symbolic variables
End of explanation
"""
a = 3
b = 2
N = 100
# y = ax+b
x = np.reshape(range(N),(N,1))
y = a*x + b + 10*np.random.randn(N,1)
#plot
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$')
plt.show()
"""
Explanation: Simple Linear Regression
Let $y = 3x + 2 + 10n$ be the equation of a line, where $n \sim \eta(0,1)$ is a standard normal random variable.
Let $x = 0, 1, \ldots, 99$. We will plot a scatter plot of $y$ vs $x$.
End of explanation
"""
# Linear regression (MSE)
# Augment x with 1
X = np.hstack((np.ones((N,1)),x))
w = np.dot(inv(X.T.dot(X)),X.T.dot(y))
print('a = ',w[1],'b = ',w[0])
plt.scatter(x,y)
plt.plot(x,X.dot(w))
plt.xlabel('x')
plt.ylabel('y')
plt.legend(['Fitted model','Input'])
plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$')
plt.show()
from tempfile import NamedTemporaryFile
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
video = open(f.name, "rb").read()
anim._encoded_video = video.encode("base64")
return VIDEO_TAG.format(anim._encoded_video)
from IPython.display import HTML
def display_animation(anim):
plt.close(anim._fig)
return HTML(anim_to_html(anim))
# Animation: MSE gradient descent
fig1 = plt.figure()
def init():
line.set_data([], [])
return line,
def update_w(i):
global w
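# Gradient-descent step: the gradient of the squared error ||Xw - y||^2 is 2 X^T (Xw - y);
# the global 'a' (reassigned to 1e-7 below) acts as the learning rate here.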
off = 2*a*X.T.dot((X.dot(w)-y))
w = w - off
line.set_data(x,X.dot(w))
return line,
X = np.hstack((np.ones((N,1)),x))
w = np.random.rand(X.shape[1],1)
ax = plt.axes(xlim=(-20, 120), ylim=(-50, 350))
line, = ax.plot([], [], lw=2)
a = 0.0000001
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.legend(['Fitted model','Input'])
plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$')
line_ani = animation.FuncAnimation(fig1, update_w,init_func=init, frames=100, interval=25, blit=True)
#plt.show()
display_animation(line_ani)
"""
Explanation: From linear regression using the model $$p(y_i \mid \mathbf{x}_i) = \eta(y_i \mid \mathbf{w}^T\mathbf{x}_i,\sigma^2)$$
We have $$ \mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} $$
Now we will plot the original points and the line fitted using linear regression.
End of explanation
"""
|
PyDataBH/data-scraping-letrasmus | Raspando dados na Web com Requests e BeautifulSoup.ipynb | mit | # Importando a biblioteca Requests
import requests
url = "https://www.letras.mus.br/john-mayer/420168/"
r = requests.get(url)
type(r)
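# Quick sanity checks on the Response object (status_code and encoding are standard Requests attributes)
print(r.status_code)  # 200 means the page was fetched successfully
print(r.encoding)     # character encoding detected by Requests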
"""
Explanation: John Mayer - Gravity
John Clayton Mayer is an American singer, songwriter, and record producer. Born in Bridgeport, in the state of Connecticut, he studied at the Berklee College of Music. He wrote Grammy Award-winning songs such as Your Body Is A Wonderland. He started out playing rock but gradually moved toward the blues, and has collaborated with renowned blues artists such as B.B. King and Eric Clapton.
<img src="img/johnmayer.jpg">
<iframe src="https://open.spotify.com/embed?uri=spotify:user:spotify:playlist:37i9dQZF1DWYrlaUEYy4Vg&view=coverart" width="300" height="380" frameborder="0" allowtransparency="true"></iframe>
End of explanation
"""
# Extract the HTML from the Response object and print it
html = r.text
#print(html)
"""
Explanation: Examining the call to requests.get(), we get back a Response object. For more information and a short Quickstart for this library, see the Requests documentation.
End of explanation
"""
# Import BeautifulSoup from bs4
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html5lib")
type(soup)
"""
Explanation: Great, we now have the full content of the page containing John Mayer's song Gravity. The next step is to parse it, i.e. separate the lyrics from the rest of the page. To do this we will use another library called BeautifulSoup.
End of explanation
"""
soup.title
"""
Explanation: Now it is time to extract relevant information from the page we just fetched, such as the page title.
End of explanation
"""
soup.title.string
"""
Explanation: As we can see, this returns the whole title element, but it is also possible to get just the title string of the page.
End of explanation
"""
# Find all hyperlinks in our soup and show only the first 10
soup.findAll('a')[:10]
"""
Explanation: We can also get a list of all the hyperlinks on the page, which in HTML are declared with the < a > tag.
End of explanation
"""
# Extract the text from our soup and print it
text = soup.get_text()
#print(text)
"""
Explanation: Very well, what we need to do now is extract the song lyrics, which is our original goal. For that, BeautifulSoup provides a method called .get_text().
End of explanation
"""
article_tag = soup.find('article')
type(article_tag)
for br_tag in article_tag.find_all('br'):
br_tag.extract()
text = article_tag.get_text(separator=u' ', strip=True)
print(text)
"""
Explanation: As we can see, this method returns all of the page's content, including JavaScript and CSS code. That is not quite what we want: the lyrics are in this soup, but our spoon seems too big to eat it with ;). Inspecting the page that contains the lyrics gives us a good clue: the < article > tag appears to hold the entire lyric content. So let's use the .find() method, passing the string 'article' as a parameter.
End of explanation
"""
# Import RegexpTokenizer from nltk.tokenize
from nltk.tokenize import RegexpTokenizer
# Create the tokenizer object
tokenizer = RegexpTokenizer('\w+')
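# The pattern '\w+' keeps runs of letters, digits and underscores, so punctuation is dropped during tokenization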
# Create tokens from the Gravity lyrics text
tokens = tokenizer.tokenize(text)
tokens[:8]
# Initialize an empty list that will hold the lowercase words
words = []
# Loop through list tokens and make lower case
for word in tokens:
words.append(word.lower())
# Print several items from list as sanity check
words[:8]
"""
Explanation: Part 2 - Tokenization
Finally, let's start preparing our corpus for later NLP analysis. One of the first steps in this process is tokenization. To do that, we will import the RegexpTokenizer module from the NLTK (Natural Language Toolkit) library. For more details about the library, see its documentation. There is also a book you can read here - http://www.nltk.org/book/
End of explanation
"""
|
ewanbarr/anansi | docs/Molonglo_coords.ipynb | apache-2.0 | import numpy as np
import ephem as e
from scipy.optimize import minimize
import matplotlib.pyplot as plt
np.set_printoptions(precision=5,suppress =True)
"""
Explanation: Molonglo coordinate transforms
Useful coordinate transforms for the Molonglo radio telescope
End of explanation
"""
def rotation_matrix(angle, d):
directions = {
"x":[1.,0.,0.],
"y":[0.,1.,0.],
"z":[0.,0.,1.]
}
direction = np.array(directions[d])
sina = np.sin(angle)
cosa = np.cos(angle)
# rotation matrix around unit vector
R = np.diag([cosa, cosa, cosa])
R += np.outer(direction, direction) * (1.0 - cosa)
direction *= sina
R += np.array([[ 0.0, -direction[2], direction[1]],
[ direction[2], 0.0, -direction[0]],
[-direction[1], direction[0], 0.0]])
return R
def reflection_matrix(d):
m = {
"x":[[-1.,0.,0.],[0., 1.,0.],[0.,0., 1.]],
"y":[[1., 0.,0.],[0.,-1.,0.],[0.,0., 1.]],
"z":[[1., 0.,0.],[0., 1.,0.],[1.,0.,-1.]]
}
return np.array(m[d])
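# Illustrative sanity check (not in the original notebook): a 90-degree rotation about z
# should map the x unit vector onto the y unit vector.
print np.dot(rotation_matrix(np.pi/2, "z"), np.array([1., 0., 0.]))  # ~ [0, 1, 0]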
"""
Explanation: Below we define the rotation and reflection matrices
End of explanation
"""
def pos_vector(a,b):
return np.array([[np.cos(b)*np.cos(a)],
[np.cos(b)*np.sin(a)],
[np.sin(b)]])
def pos_from_vector(vec):
a,b,c = vec
a_ = np.arctan2(b,a)
c_ = np.arcsin(c)
return a_,c_
"""
Explanation: Define position vectors
End of explanation
"""
def transform(a,b,R,inverse=True):
P = pos_vector(a,b)
if inverse:
R = R.T
V = np.dot(R,P).ravel()
a,b = pos_from_vector(V)
a = 0 if np.isnan(a) else a
b = 0 if np.isnan(a) else b
return a,b
"""
Explanation: Generic transform
End of explanation
"""
def hadec_to_nsew(ha,dec):
ew = np.arcsin((0.9999940546 * np.cos(dec) * np.sin(ha))
- (0.0029798011806 * np.cos(dec) * np.cos(ha))
+ (0.002015514993 * np.sin(dec)))
ns = np.arcsin(((-0.0000237558704 * np.cos(dec) * np.sin(ha))
+ (0.578881847 * np.cos(dec) * np.cos(ha))
+ (0.8154114339 * np.sin(dec)))
/ np.cos(ew))
return ns,ew
"""
Explanation: Reference conversion formula from Duncan's old TCC
End of explanation
"""
# There should be a slope and tilt conversion to get accurate change
#skew = 4.363323129985824e-05
#slope = 0.0034602076124567475
#skew = 0.00004
#slope = 0.00346
skew = 0.01297 # <- this is the skew I get if I optimize for the same results as duncan's system
slope= 0.00343
def telescope_to_nsew_matrix(skew,slope):
R = rotation_matrix(skew,"z")
R = np.dot(R,rotation_matrix(slope,"y"))
return R
def nsew_to_azel_matrix(skew,slope):
pre_R = telescope_to_nsew_matrix(skew,slope)
x_rot = rotation_matrix(-np.pi/2,"x")
y_rot = rotation_matrix(np.pi/2,"y")
R = np.dot(x_rot,y_rot)
R = np.dot(pre_R,R)
R_bar = reflection_matrix("x")
R = np.dot(R,R_bar)
return R
def nsew_to_azel(ns, ew):
az,el = transform(ns,ew,nsew_to_azel_matrix(skew,slope))
return az,el
print nsew_to_azel(0,np.pi/2) # should be -pi/2 and 0
print nsew_to_azel(-np.pi/2,0)# should be -pi and 0
print nsew_to_azel(0.0,.5) # should be pi/2 and something near pi/2
print nsew_to_azel(-.5,.5) # less than pi/2 and less than pi/2
print nsew_to_azel(.5,-.5)
print nsew_to_azel(.5,.5)
"""
Explanation: New conversion formula using rotation matrices
What do we think we should have:
\begin{equation}
\begin{bmatrix}
\cos(\rm EW)\cos(\rm NS) \
\cos(\rm EW)\sin(\rm NS) \
\sin(\rm EW)
\end{bmatrix}
=
\mathbf{R}
\begin{bmatrix}
\cos(\delta)\cos(\rm HA) \
\cos(\delta)\sin(\rm HA) \
\sin(\delta)
\end{bmatrix}
\end{equation}
Where $\mathbf{R}$ is a composite rotation matrix.
We need a rotation about the axis of the array plus an orthogonal rotation w.r.t. the array centre. Note that the NS convention is flipped, so HA and NS go clockwise and anti-clockwise respectively when viewed from the north pole in both coordinate systems.
\begin{equation}
\mathbf{R}_x
=
\begin{bmatrix}
1 & 0 & 0 \
0 & \cos(\theta) & -\sin(\theta) \
0 & \sin(\theta) & \cos(\theta)
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R}_y
=
\begin{bmatrix}
\cos(\phi) & 0 & \sin(\phi) \
0 & 1 & 0 \
-\sin(\phi) & 0 & \cos(\phi)
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R}_z
=
\begin{bmatrix}
\cos(\eta) & -\sin(\eta) & 0\
\sin(\eta) & \cos(\eta) & 0\
0 & 0 & 1
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R} = \mathbf{R}_x \mathbf{R}_y \mathbf{R}_z
\end{equation}
Here I think $\theta$ is a $3\pi/2$ rotation to put the telescope pole (west) at the telescope zenith and $\phi$ is also $\pi/2$ to rotate the telescope meridian (which is lengthwise on the array, what we traditionally think of as the meridian is actually the equator of the telescope) into the position of $Az=0$.
However rotation of NS and HA are opposite, so a reflection is needed. For example reflection around a plane in along which the $z$ axis lies:
\begin{equation}
\mathbf{\bar{R}}_z
=
\begin{bmatrix}
1 & 0 & 0\
0 & 1 & 0\
0 & 0 & -1
\end{bmatrix}
\end{equation}
Conversion to azimuth and elevations should therefore require $\theta=-\pi/2$ and $\phi=\pi/2$ with a reflection about $x$.
Taking into account the EW skew and slope of the telescope:
\begin{equation}
\begin{bmatrix}
\cos(\rm EW)\cos(\rm NS) \
\cos(\rm EW)\sin(\rm NS) \
\sin(\rm EW)
\end{bmatrix}
=
\begin{bmatrix}
\cos(\alpha) & -\sin(\alpha) & 0\
\sin(\alpha) & \cos(\alpha) & 0\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos(\beta) & 0 & \sin(\beta) \
0 & 1 & 0 \
-\sin(\beta) & 0 & \cos(\beta)
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \
0 & 0 & 1 \
0 & -1 & 0
\end{bmatrix}
\begin{bmatrix}
0 & 0 & -1 \
0 & 1 & 0 \
1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
-1 & 0 & 0\
0 & 1 & 0\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos(\delta)\cos(\rm HA) \
\cos(\delta)\sin(\rm HA) \
\sin(\delta)
\end{bmatrix}
\end{equation}
So the correction matrix taking telescope coordinates to NS/EW is
\begin{equation}
\begin{bmatrix}
\cos(\alpha)\cos(\beta) & -\sin(\alpha) & \cos(\alpha)\sin(\beta) \
\sin(\alpha)\cos(\beta) & \cos(\alpha) & \sin(\alpha)\sin(\beta) \
-\sin(\beta) & 0 & \cos(\beta)
\end{bmatrix}
\end{equation}
and the matrix taking telescope coordinates to Az/El is
\begin{equation}
\begin{bmatrix}
\sin(\alpha) & -\cos(\alpha)\sin(\beta) & -\cos(\alpha)\cos(\beta) \
\cos(\alpha) & -\sin(\alpha)\sin(\beta) & -\sin(\alpha)\cos(\beta) \
-\cos(\beta) & 0 & \sin(\beta)
\end{bmatrix}
\end{equation}
End of explanation
"""
def azel_to_nsew(az, el):
ns,ew = transform(az,el,nsew_to_azel_matrix(skew,slope).T)
return ns,ew
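# Round-trip sanity check (illustrative): NS/EW -> Az/El -> NS/EW should recover the input,
# since the combined transform matrix is orthogonal.
ns0, ew0 = 0.3, -0.2
print "Original:", (ns0, ew0), "Recovered:", azel_to_nsew(*nsew_to_azel(ns0, ew0))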
"""
Explanation: The inverse of this is:
End of explanation
"""
mol_lat = -0.6043881274183919 # in radians
def azel_to_hadec_matrix(lat):
rot_y = rotation_matrix(np.pi/2-lat,"y")
rot_z = rotation_matrix(np.pi,"z")
R = np.dot(rot_y,rot_z)
return R
def azel_to_hadec(az,el,lat):
ha,dec = transform(az,el,azel_to_hadec_matrix(lat))
return ha,dec
def nsew_to_hadec(ns,ew,lat,skew=skew,slope=slope):
R = np.dot(nsew_to_azel_matrix(skew,slope),azel_to_hadec_matrix(lat))
ha,dec = transform(ns,ew,R)
return ha,dec
ns,ew = 0.8,0.8
az,el = nsew_to_azel(ns,ew)
print "AzEl:",az,el
ha,dec = azel_to_hadec(az,el,mol_lat)
print "HADec:",ha,dec
ha,dec = nsew_to_hadec(ns,ew,mol_lat)
print "HADec2:",ha,dec
# This is Duncan's version
ns_,ew_ = hadec_to_nsew(ha,dec)
print "NSEW Duncan:",ns_,ew_
print "NS offset:",ns_-ns," EW offset:",ew_-ew
def test(ns,ew,skew,slope):
ha,dec = nsew_to_hadec(ns,ew,mol_lat,skew,slope)
ns_,ew_ = hadec_to_nsew(ha,dec)
no,eo = ns-ns_,ew-ew_
no = 0 if np.isnan(no) else no
eo = 0 if np.isnan(eo) else eo
return no,eo
ns = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)
ew = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)
def test2(a):
skew,slope = a
out_ns = np.empty([10,10])
out_ew = np.empty([10,10])
for ii,n in enumerate(ns):
for jj,k in enumerate(ew):
a,b = test(n,k,skew,slope)
out_ns[ii,jj] = a
out_ew[ii,jj] = b
a = abs(out_ns).sum()#abs(np.median(out_ns))
b = abs(out_ew).sum()#abs(np.median(out_ew))
print a,b
print max(a,b)
return max(a,b)
#minimize(test2,[skew,slope])
# Plotting out the conversion error as a function of HA and Dec.
# Colour scale shows the absolute difference between the original system and the new system
ns = np.linspace(-np.pi/2,np.pi/2,10)
ew = np.linspace(-np.pi/2,np.pi/2,10)
out_ns = np.empty([10,10])
out_ew = np.empty([10,10])
for ii,n in enumerate(ns):
for jj,k in enumerate(ew):
print jj
a,b = test(n,k,skew,slope)
out_ns[ii,jj] = a
out_ew[ii,jj] = b
plt.figure()
plt.subplot(121)
plt.imshow(abs(out_ns),aspect="auto")
plt.colorbar()
plt.subplot(122)
plt.imshow(abs(out_ew),aspect="auto")
plt.colorbar()
plt.show()
from mpl_toolkits.mplot3d import Axes3D
from itertools import product, combinations
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_aspect("equal")
#draw sphere
u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
x=np.cos(u)*np.sin(v)
y=np.sin(u)*np.sin(v)
z=np.cos(v)
ax.plot_wireframe(x, y, z, color="r",lw=1)
R = rotation_matrix(np.pi/2,"x")
pos_v = np.array([[x],[y],[z]])
p = pos_v.T
for i in p:
for j in i:
j[0] = np.dot(R,j[0])
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
a = Arrow3D([0,1],[0,0.1],[0,.10], mutation_scale=20, lw=1, arrowstyle="-|>", color="k")
ax.add_artist(a)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
x=p.T[0,0]
y=p.T[1,0]
z=p.T[2,0]
ax.plot_wireframe(x, y, z, color="b",lw=1)
plt.show()
"""
Explanation: Extending this to HA Dec
End of explanation
"""
|
blua/deep-learning | embeddings/Skip-Grams-Solution.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and back again, from integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
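# Illustrative check of the subsampling formula (assumes the cell above has run):
# the most frequent word should have a drop probability close to 1.
most_common_int = word_counts.most_common(1)[0][0]
print(int_to_vocab[most_common_int], freqs[most_common_int], p_drop[most_common_int])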
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
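# Quick illustrative call on a toy list of word ids; the window size is drawn at random, so output varies.
print(get_target(list(range(10)), idx=5, window_size=2))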
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
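# Illustrative peek at the first batch generated from a toy list of word ids.
demo_x, demo_y = next(get_batches(list(range(20)), batch_size=10, window_size=2))
print(demo_x[:10])
print(demo_y[:10])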
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
    # pick 8 samples each from the ranges (0,100) and (1000,1100); lower ids imply more frequent words
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
weikang9009/pysal | notebooks/explore/spaghetti/Spaghetti_Pointpatterns_Empirical.ipynb | bsd-3-clause | import os
last_modified = None
if os.name == "posix":
last_modified = !stat -f\
"# This notebook was last updated: %Sm"\
Spaghetti_Pointpatterns_Empirical.ipynb
elif os.name == "nt":
last_modified = !for %a in (Spaghetti_Pointpatterns_Empirical.ipynb)\
do echo # This notebook was last updated: %~ta
if last_modified:
get_ipython().set_next_input(last_modified[-1])
# This notebook was last updated: Dec 9 14:23:58 2018
"""
Explanation: $SPA$tial $G$rap$H$s: n$ET$works, $T$opology, & $I$nference
Tutorial for pysal.spaghetti: Working with point patterns: empirical observations
James D. Gaboardi [jgaboardi@fsu.edu]
Instantiating a pysal.spaghetti.Network
Allocating observations to a network
snapping
Visualizing original and snapped locations
visualization with geopandas and matplotlib
End of explanation
"""
from pysal.explore import spaghetti as spgh
from pysal.lib import examples
import geopandas as gpd
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
from shapely.geometry import Point, LineString
%matplotlib inline
__author__ = "James Gaboardi <jgaboardi@gmail.com>"
"""
Explanation:
End of explanation
"""
ntw = spgh.Network(in_data=examples.get_path('streets.shp'))
"""
Explanation: 1. Instantiating a pysal.spaghetti.Network
Instantiate the network from .shp file
End of explanation
"""
# Crimes with attributes
ntw.snapobservations(examples.get_path('crimes.shp'),
'crimes',
attribute=True)
# Schools without attributes
ntw.snapobservations(examples.get_path('schools.shp'),
'schools',
attribute=False)
"""
Explanation: 2. Allocating observations to a network
Snap point patterns to the network
End of explanation
"""
schools_df = spgh.element_as_gdf(ntw,
pp_name='schools',
snapped=False)
snapped_schools_df = spgh.element_as_gdf(ntw,
pp_name='schools',
snapped=True)
"""
Explanation: 3. Visualizing original and snapped locations
True and snapped school locations
End of explanation
"""
crimes_df = spgh.element_as_gdf(ntw,
pp_name='crimes',
snapped=False)
snapped_crimes_df = spgh.element_as_gdf(ntw,
pp_name='crimes',
snapped=True)
"""
Explanation: True and snapped crime locations
End of explanation
"""
# network vertices and arcs
vertices_df, arcs_df = spgh.element_as_gdf(ntw,
vertices=True,
arcs=True)
"""
Explanation: Create geopandas.GeoDataFrame objects of the vertices and arcs
End of explanation
"""
# legend patches
arcs = mlines.Line2D([], [], color='k', label='Network Arcs', alpha=.5)
vtxs = mlines.Line2D([], [], color='k', linewidth=0, markersize=2.5,
marker='o', label='Network Vertices', alpha=1)
schl = mlines.Line2D([], [], color='k', linewidth=0, markersize=25,
marker='X', label='School Locations', alpha=1)
snp_schl = mlines.Line2D([], [], color='k', linewidth=0, markersize=12,
marker='o', label='Snapped Schools', alpha=1)
crme = mlines.Line2D([], [], color='r', linewidth=0, markersize=7,
marker='x', label='Crime Locations', alpha=.75)
snp_crme = mlines.Line2D([], [], color='r', linewidth=0, markersize=3,
marker='o', label='Snapped Crimes', alpha=.75)
patches = [arcs, vtxs, schl, snp_schl, crme, snp_crme]
# plot figure
base = arcs_df.plot(color='k', alpha=.25, figsize=(12,12), zorder=0)
vertices_df.plot(ax=base, color='k', markersize=5, alpha=1)
crimes_df.plot(ax=base, color='r', marker='x',
markersize=50, alpha=.5, zorder=1)
snapped_crimes_df.plot(ax=base, color='r',
markersize=20, alpha=.5, zorder=1)
schools_df.plot(ax=base, cmap='tab20', column='id', marker='X',
markersize=500, alpha=.5, zorder=2)
snapped_schools_df.plot(ax=base,cmap='tab20', column='id',
markersize=200, alpha=.5, zorder=2)
# add legend
plt.legend(handles=patches, fancybox=True, framealpha=0.8,
scatterpoints=1, fontsize="xx-large", bbox_to_anchor=(1.04, .6))
"""
Explanation: Plotting geopandas.GeoDataFrame objects
End of explanation
"""
|
Sinar/sinar.myreps | docs/Malaysian MP Statistics.ipynb | agpl-3.0 | import requests
import json
#Dewan Rakyat MP Posts in Sinar Malaysia Popit Database
posts = []
for page in range(1,10):
dewan_rakyat_request = requests.get('http://sinar-malaysia.popit.mysociety.org/api/v0.1/search/posts?q=organization_id:53633b5a19ee29270d8a9ecf'+'&page='+str(page))
for post in (json.loads(dewan_rakyat_request.content)['result']):
posts.append(post)
"""
Explanation: Malaysian MP Statistics
A live notebook of working examples of using Sinar's Popit API and database of Malaysian MPs.
TODO
Detailed information of Persons should probably be appended to post memberships
this would allow us to show post (Seat) information, not just details of person.
Refactor functions here into common library
Issues
Posts should check for role, e.g. Member of Parliament. The Speaker is a post in Dewan Rakyat,
but is not an MP.
Author
Feel free to do pull requests, or contact me on issues with the data.
Khairil Yusof khairil.yusof@sinarproject.org
End of explanation
"""
import datetime
from dateutil import parser
current = datetime.date(2013,5,5)
def has_end_date(member):
if member.has_key('end_date') and member['end_date'] == '':
return False
elif not member.has_key('end_date'):
return False
else:
return True
def current_MP(member):
#Legislative tag term here would simply
if (parser.parse(member['start_date'])).date() > current:
if not has_end_date(member):
return True
else:
return False
def person(person_id):
#Load up information of persons from Popit database
req = requests.get('https://sinar-malaysia.popit.mysociety.org/api/v0.1/persons/' + person_id)
return json.loads(req.content)['result']
def age(str):
#calculate age based on date strings stored in Popit
born = parser.parse(str)
today = datetime.date.today()
age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
return int(age)
#Current MPs will not have end dates, and have terms after 2013-05-05
MP_ids = []
for post in posts:
for member in post['memberships']:
if current_MP(member):
MP_ids.append(member['person_id'])
#Pull down the data of current MPs from Popit Database add calculate age if there is birthdate
MPs = []
for id in MP_ids:
MPs.append(person(id))
for MP in MPs:
if MP.has_key('birth_date'):
if MP['birth_date']:
#add current age in addition to the values in Popit
MP['age'] = age(MP['birth_date'])
"""
Explanation: Now we will load up information on the MPs holding these posts
End of explanation
"""
WomenMPs = []
for MP in MPs:
if MP.has_key('gender') and MP['gender'] == 'Female':
WomenMPs.append(MP)
print "Number of Women MPs " + str(len(WomenMPs))
for MP in WomenMPs:
print MP['name']
"""
Explanation: Women MPs
End of explanation
"""
import numpy
#list of ages
ages = []
for MP in MPs:
if MP.has_key('age'):
ages.append(int(MP['age']))
print numpy.median(ages)
print numpy.max(ages)
print numpy.min(ages)
"""
Explanation: Age of MPs
End of explanation
"""
import pandas
pandas.DataFrame(MPs)
df = pandas.DataFrame(MPs)
print df['age'].median()
print df['age'].max()
print df['age'].min()
"""
Explanation: Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
If you're learning Python to work with data, it's worth getting used to this library, as it provides pretty much everything you will need when working with data, from importing and cleaning messy data to exporting it, and it also copes well with very large data sets.
A lot of the earlier work, such as cleaning, getting unique values etc. could be done easily with built-in functions of pandas as a DataFrame.
End of explanation
"""
MP_source = {'name':df['name'],'birth_date':df['birth_date'],'age':df['age']}
MP_Names = pandas.DataFrame(MP_source)
MP_Names.sort('age')
%matplotlib inline
grouped = MP_Names.groupby('age')
grouped.age.count().plot(kind='bar',figsize=(15,15))
"""
Explanation: We could have dropped duplicates from bad data:
df.drop_duplicates('id')
Parse and set birth_date column as datetime to calculate age without parsing it manually:
df['birth_date']= pandas.to_datetime(df['birth_date'])
Best of all after cleaning up the data, we can easily export it to CSV format where it is more easily usable by normal users in spreadsheets or plotting charts.
End of explanation
"""
import pandas
data = { "age": [], "birth_date": []}
data_index = { "age": [], "birth_date": []}
for entry in MPs:
if entry.has_key('age'):
data["age"].append(entry["age"])
data_index["age"].append(entry["name"])
data["birth_date"].append(entry["birth_date"])
data_index["birth_date"].append(entry["name"])
final_data = { "age": pandas.Series(data["age"], index=data_index["age"]),
"birth_date": pandas.Series(data["birth_date"], index=data_index["birth_date"])
}
mp_age_df = pandas.DataFrame(final_data)
mp_age_df.sort("age")
mp_age_df["age"].plot(kind="hist",figsize=(15,15))
"""
Explanation: The previous example tried to massage Python data structures into a Pandas DataFrame, which works but isn't very pretty.
Ng Swee Meng sweester@sinarproject.org has contributed a proper way of building up data structures for Pandas DataFrames in the following example:
End of explanation
"""
|
mikheyev/phage-lab | src/Raw data.ipynb | mit | !ls -lh ../data/reads
"""
Explanation: What do the data look like?
Jupyter IPython notebooks, such as this one, allow you to run both Python code and, using 'magics' also shell commands. In this tutorial we'll use both, since we will be interfacing with a variety of software, as well as processing data.
First, let's look around in the directory using standard Linux commands. We can execute a shell command by preceding it with an exclamation mark.
End of explanation
"""
!gunzip -c ../data/reads/mutant1_OIST-2015-03-28.fq.gz | head -8
"""
Explanation: We see that there are five files: four of these are mutants, and one is the original reference sample.
We will take a look inside one of the files and look at the distribution of read statistics.
The reads are in text files, which have been compressed using gzip, a common practice for storing raw data. You can look inside by decompressing a file, piping the output to a program called head, which will stop after a few lines. You don't want to print the contents of the entire file to screen, since it will likely crash IPython.
End of explanation
"""
!fastqc ../data/reads/mutant1_OIST-2015-03-28.fq.gz
from IPython.display import IFrame
IFrame('../data/reads/mutant1_OIST-2015-03-28_fastqc.html', width=1000, height=1000)
"""
Explanation: Each read in the fastq file format has four lines: one with a unique read name, one containing the sequence of bases, one with a +, and one containing quality scores. The quality scores correspond to the sequencer's confidence in making the base call.
It is good practice to examine the quality of your data before you proceed with the analysis. We'll use a popular tool called FastQC to do some exploratory analysis.
End of explanation
"""
import gzip
from Bio import SeqIO
with gzip.open("../data/reads/mutant1_OIST-2015-03-28.fq.gz", 'rt') as infile: # open and decompress input file
for rec in SeqIO.parse(infile, "fastq"): # start looping over all records
print(rec) #print record contents
break # stop looping, we only want to see one record
"""
Explanation: Key statistics
Basic Statistics. Reports the number of sequences and basic details.
Per base sequence quality. The distribution of sequence quality scores over the length of the read.
The quality scale is logarithmic. Notice that the quality degrades rapidly over the length of the read. This is a key characteristic of Illumina data, and a product of their sequencing chemistry, which limits the upper read length to about 300 bp.
We can explore the contents of read files programmatically using a Python library called Biopython. This allows us to automate many tedious tasks.
End of explanation
"""
print(dir(rec)) # print the methods and attributes associated with the record
"""
Explanation: You can see the methods associated with each object, such as rec, using the dir command.
End of explanation
"""
rec.reverse_complement()
"""
Explanation: For example, we can reverse complement the sequence:
End of explanation
"""
|
uwoseis/zephyr | notebooks/Compare Solutions Homogeneous - 3D.ipynb | mit | import sys
sys.path.append('../')
import numpy as np
from zephyr.backend import MiniZephyr25D, SparseKaiserSource, AnalyticalHelmholtz
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
systemConfig = {
'dx': 1., # m
'dz': 1., # m
'c': 2500., # m/s
'rho': 1., # kg/m^3
'nx': 100, # count
'nz': 200, # count
'freq': 2e2, # Hz
'nky': 160,
'3D': True,
}
nx = systemConfig['nx']
nz = systemConfig['nz']
dx = systemConfig['dx']
dz = systemConfig['dz']
MZ = MiniZephyr25D(systemConfig)
AH = AnalyticalHelmholtz(systemConfig)
SKS = SparseKaiserSource(systemConfig)
xs, zs = 25, 25
sloc = np.array([xs, zs]).reshape((1,2))
q = SKS(sloc)
uMZ = MZ*q
uAH = AH(sloc)
clip = 0.01
plotopts = {
'vmin': -np.pi,
'vmax': np.pi,
'extent': [0., dx * nx, dz * nz, 0.],
'cmap': cm.bwr,
}
fig = plt.figure()
ax1 = fig.add_subplot(1,4,1)
plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts)
plt.title('AH Phase')
ax2 = fig.add_subplot(1,4,2)
plt.imshow(np.angle(uMZ.reshape((nz, nx))), **plotopts)
plt.title('MZ Phase')
plotopts.update({
'vmin': -clip,
'vmax': clip,
})
ax3 = fig.add_subplot(1,4,3)
plt.imshow(uAH.reshape((nz, nx)).real, **plotopts)
plt.title('AH Real')
ax4 = fig.add_subplot(1,4,4)
plt.imshow(uMZ.reshape((nz, nx)).real, **plotopts)
plt.title('MZ Real')
fig.tight_layout()
"""
Explanation: Compare Solutions - Homogeneous 3D
Brendan Smithyman | November 2015
This notebook shows comparisons between the responses of the different solvers.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(1,1,1, aspect=1000)
plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz')
plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr')
plt.legend(loc=1)
plt.title('Real part of response through xs=%d'%xs)
"""
Explanation: Error plots for MiniZephyr vs. the AnalyticalHelmholtz response
Response of the field (showing where the numerical case does not match the analytical case):
Source region
PML regions
End of explanation
"""
uMZr = uMZ.reshape((nz, nx))
uAHr = uAH.reshape((nz, nx))
plotopts.update({
'cmap': cm.jet,
'vmin': 0.,
'vmax': 50.,
})
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
plotopts.update({'vmax': 10.})
ax2 = fig.add_subplot(1,2,2)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
fig.tight_layout()
"""
Explanation: Relative error of the MiniZephyr solution (in %)
End of explanation
"""
|
SJSlavin/phys202-2015-work | assignments/assignment05/InteractEx01.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 01
Import
End of explanation
"""
def print_sum(a, b):
print(a + b)
"""
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
"""
# YOUR CODE HERE
interact(print_sum, a=(-10.0, 10.0, 0.1), b=(-8, 8, 2));
assert True # leave this for grading the print_sum exercise
"""
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider the interval [-8, 8] with step sizes of 2.
End of explanation
"""
def print_string(s, length=False):
print(s)
if length == True:
print(len(s))
"""
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
"""
# YOUR CODE HERE
interact(print_string, s="Hello World!", length=True);
assert True # leave this for grading the print_string exercise
"""
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation
"""
|
scientific-visualization-2016/ClassMaterials | Week-03/04-plotting-seal-data.ipynb | cc0-1.0 | import os
import pandas as pd
import numpy as np
df = pd.read_csv("seal-behav.csv", parse_dates=[1])
df.set_index("timestamp",inplace=True)
df.head(5)
"""
Explanation: <img src='https://www.rc.colorado.edu/sites/all/themes/research/logo.png' style="height:75px">
Plotting the Seal Data on a map
Depends on the previous pandas tutorial.
File seal-behav.csv
End of explanation
"""
wd = df.pivot( columns="individual") #row, column, values (optional)
f104 = df.ix[df["individual"] == "F104"]
f104.head()
"""
Explanation: Selecting one seal
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
"""
Explanation: Plotting the seal path
Several steps:
1. Create a map centered around the region
2. Draw coastlines
3. Draw countries
4. Fill oceans and coastline
5. Draw the observations of the seal on the map
End of explanation
"""
f104.dtypes
lons = f104["longitude"].values
lons = lons.astype(np.float)
lats = f104["latitude"].values
lons_c=np.average(lons)
lats_c=np.average(lats)
print (lons_c, lats_c)
#
map = Basemap(projection='ortho', lat_0=lats_c,lon_0=lons_c)
fig=plt.figure(figsize=(12,9))
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='blue')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
"""
Explanation: Drawing an empty map of the region
End of explanation
"""
#
map = Basemap(projection='ortho', lat_0=lats_c,lon_0=lons_c)
fig=plt.figure(figsize=(12,9))
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='blue')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
# Seal F104
x, y = map(lons,lats)
map.scatter(x,y,color='r',label='f104')
plt.legend()
"""
Explanation: Plotting seal observations
End of explanation
"""
#
map = Basemap(width=200000,height=100000,projection='lcc', resolution='h',
lat_0=lats_c,lon_0=lons_c)
fig=plt.figure(figsize=(12,9))
ax = fig.add_axes([0.05,0.05,0.9,0.85])
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='blue')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
# create a grid
# draw lat/lon grid lines every 2 degrees.
map.drawmeridians(np.arange(0,360,2), labels=[False, True, True, False])
map.drawparallels(np.arange(-90,90,1), labels=[True, False, False, True])
# Seal f104
x, y = map(lons,lats)
map.scatter(x,y,color='b',label='f104')
plt.legend()
?map
"""
Explanation: Plot all zoomed in
End of explanation
"""
|
ImAlexisSaez/deep-learning-specialization-coursera | course_1/week_4/assignment_1/building_your_deep_neural_network_step_by_step_v2.ipynb | mit | import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
"""
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
"""
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
"""
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.00865408 -0.02301539]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\
m & n & o \
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\
d & e & f \
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \
t \
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
"""
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
"""
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
"""
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
"""
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1 / m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL)))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
"""
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.17007265 0.2524272 ]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} \left(y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)\right) \tag{7}$$
End of explanation
"""
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1 / m * np.dot(dZ, A_prev.T)
db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l] (i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
"""
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
"""
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L - 1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
"""
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
"""
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
#print ("W3 = "+ str(parameters["W3"]))
#print ("b3 = "+ str(parameters["b3"]))
"""
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/1af5a35cbb809b9480120842884536c5/plot_brainstorm_auditory.ipynb | bsd-3-clause | # Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
"""
Explanation: Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1] and the associated
brainstorm site <https://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7 s and 1.7 s, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker
<http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300>__.
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
"""
use_precomputed = True
"""
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch, change this to False. With
use_precomputed = False, the running time of this script can be several
minutes even on a fast computer.
End of explanation
"""
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
"""
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
"""
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)
"""
Explanation: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in the memory.
End of explanation
"""
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick(['meg', 'stim', 'misc', 'eog', 'ecg'])
"""
Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
"""
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.set_annotations(annotations)
del onsets, durations, descriptions
"""
Explanation: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
"""
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
baseline=(None, None),
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
"""
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
"""
raw.plot(block=True)
"""
Explanation: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument (a short sketch follows this explanation). As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
"""
if not use_precomputed:
raw.plot_psd(tmax=np.inf, picks='meg')
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks='meg')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the
original 60 Hz artifact and its harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
End of explanation
"""
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
"""
Explanation: We also low-pass filter the data at 100 Hz to remove the high-frequency components.
End of explanation
"""
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
"""
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
"""
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
"""
Explanation: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
End of explanation
"""
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
"""
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=['meg', 'eog'],
baseline=(None, 0), reject=reject, preload=False,
proj=True)
"""
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
"""
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs
"""
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer the same as in the original
epochs collection. Investigation of the event timings reveals that the first
epoch from the second run corresponds to index 182.
End of explanation
"""
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
"""
Explanation: The averages for each conditions are computed.
End of explanation
"""
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we low-pass filter the data at 40 Hz, which will remove all
line artifacts (and high-frequency information). Normally this would be done
to raw data (with :func:mne.io.Raw.filter), but to reduce memory
consumption of this tutorial, we do it at evoked stage. (At the raw stage,
you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
"""
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
"""
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
End of explanation
"""
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
"""
Explanation: Show activations as topography figures.
End of explanation
"""
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
"""
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
"""
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
"""
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
"""
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
"""
Explanation: The transformation is read from a file:
End of explanation
"""
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
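# Regularization for the minimum-norm inverse: lambda2 = 1 / SNR**2, with
# SNR = 3 a common default for evoked (averaged) data.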
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
"""
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
bem-model, :func:mne.bem.make_watershed_bem.
End of explanation
"""
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
"""
Explanation: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
"""
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
"""
Explanation: Deviant condition.
End of explanation
"""
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
"""
Explanation: Difference.
End of explanation
"""
|
shakhova/BananaML | 1st_1.10.17/introduction_to_ipython.ipynb | gpl-3.0 | ! echo 'hello, world!'
!echo $t
%%bash
mkdir test_directory
cd test_directory/
ls -a
# remove the directory if it is no longer needed
! rm -r test_directory
"""
Explanation: text
Header
TeX syntax is used to edit the formula below
$$ c = \sqrt{a^2 + b^2}$$
End of explanation
"""
%%cmd
mkdir test_directory
cd test_directory
dir
"""
Explanation: Below are the equivalent commands for Windows users:
End of explanation
"""
%%cmd
rmdir test_directory
%lsmagic
%pylab inline
y = range(11)
y
plot(y)
"""
Explanation: Removing the directory if it is no longer needed (Windows)
End of explanation
"""
|
hasecbinusr/pysal | pysal/contrib/clusterpy/clusterpy.ipynb | bsd-3-clause | mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))
mexico.fieldNames
w = ps.open(ps.examples.get_path('mexico.gal')).read()
w.n
cp.addRook2Layer(ps.examples.get_path('mexico.gal'), mexico)
mexico.Wrook
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
mexico.fieldNames
mexico.getVars('pcgdp1940')
# mexico example all together
csvfile = ps.examples.get_path('mexico.csv')
galfile = ps.examples.get_path('mexico.gal')
mexico = cp.importCsvData(csvfile)
cp.addRook2Layer(galfile, mexico)
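# 'arisel' (ARiSeL) partitions the states into 5 contiguous regions that are
# homogeneous in pcgdp1940; inits sets the number of initial feasible solutions
# tried and dissolve=0 keeps the original areas undissolved (argument semantics
# as understood from the clusterpy documentation).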
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
mexico.region2areas.index(2)
mexico.Wrook[0]
mexico.getVars('State')
regions = np.array(mexico.region2areas)
regions
Counter(regions)
"""
Explanation: Attribute data from a csv file and a W from a gal file
End of explanation
"""
mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))
w = ps.open(ps.examples.get_path('mexico.gal')).read()
cp.addW2Layer(w, mexico)
mexico.Wrook
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
"""
Explanation: Attribute data from a csv file and an external W object
End of explanation
"""
usf = ps.examples.get_path('us48.shp')
us = cp.loadArcData(usf.split(".")[0])
us.Wqueen
us.fieldNames
uscsv = ps.examples.get_path("usjoin.csv")
f = ps.open(uscsv)
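# usjoin.csv holds per-capita income for the lower 48 states; transposing gives
# one row per state and one column per year (1929-2009).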
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T
pci
usy = cp.Layer()
cp.addQueen2Layer(ps.examples.get_path('states48.gal'), usy)
names = ["Y_%d"%v for v in range(1929,2010)]
cp.addArray2Layer(pci, usy, names)
names
usy.fieldNames
usy.getVars('Y_1929')
usy.Wrook
usy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)
#mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
us = cp.Layer()
cp.addQueen2Layer(ps.examples.get_path('states48.gal'), us)
uscsv = ps.examples.get_path("usjoin.csv")
f = ps.open(uscsv)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T
names = ["Y_%d"%v for v in range(1929,2010)]
cp.addArray2Layer(pci, us, names)
usy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)
us_alpha = cp.importCsvData(ps.examples.get_path('usjoin.csv'))
alpha_fips = us_alpha.getVars('STATE_FIPS')
alpha_fips
dbf = ps.open(ps.examples.get_path('us48.dbf'))
dbf.header
state_fips = dbf.by_col('STATE_FIPS')
names = dbf.by_col('STATE_NAME')
names
state_fips = map(int, state_fips)
state_fips
# the csv file has the states ordered alphabetically, but this isn't the case for the order in the shapefile so we have to reorder before any choropleths are drawn
alpha_fips = [i[0] for i in alpha_fips.values()]
reorder = [ alpha_fips.index(s) for s in state_fips]
regions = usy.region2areas
regions
from pysal.contrib.viz import mapping as maps
shp = ps.examples.get_path('us48.shp')
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values')
usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values')
names = ["Y_%d"%i for i in range(1929, 2010)]
#usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)
usy.cluster('arisel', names, 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='All Years')
ps.version
usy.cluster('arisel', names[:40], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1929-68')
usy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1969-2009')
usy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)
usy.dataOperation("CONSTANT = 1")
usy.Wrook = usy.Wqueen
usy.cluster('maxpTabu', ['Y_1929', 'Y_1929'], threshold=1000, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')
Counter(regions)
usy.getVars('Y_1929')
usy.Wrook
usy.cluster('maxpTabu', ['Y_1929', 'CONSTANT'], threshold=8, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')
regions
Counter(regions)
vars = names
vars.append('CONSTANT')
vars
usy.cluster('maxpTabu', vars, threshold=8, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929-2009')
Counter(regions)
south = cp.loadArcData(ps.examples.get_path("south.shp"))
south.fieldNames
# uncomment if you have some time ;->
#south.cluster('arisel', ['HR70'], 20, wType='queen', inits=10, dissolve=0)
#regions = south.region2areas
shp = ps.examples.get_path('south.shp')
#maps.plot_choropleth(shp, np.array(regions), 'unique_values')
south.dataOperation("CONSTANT = 1")
south.cluster('maxpTabu', ['HR70', 'CONSTANT'], threshold=70, dissolve=0)
regions = south.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions, 'unique_values', title='maxp HR70 threshold=70')
Counter(regions)
"""
Explanation: Shapefile and mapping results with PySAL Viz
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/addons/tutorials/networks_seq2seq_nmt.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow-addons==0.11.2
import tensorflow as tf
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
"""
Explanation: TensorFlow Addons Networks : Sequence-to-Sequence Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/networks_seq2seq_nmt"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/networks_seq2seq_nmt.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/networks_seq2seq_nmt.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/networks_seq2seq_nmt.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Overview
This notebook gives a brief introduction to the architecture of sequence-to-sequence models. It broadly covers the four basic topics needed for neural machine translation:
Data cleaning
Data preparation
Neural translation model with attention
Final translation with tf.addons.seq2seq.BasicDecoder and tf.addons.seq2seq.BeamSearchDecoder
The basic idea behind such models, however, is simply the encoder-decoder architecture. These networks are commonly used for a variety of tasks such as text summarization, machine translation, and image captioning. This tutorial gives a hands-on understanding of the concepts, explaining the technical jargon where necessary, and focuses on neural machine translation (NMT), the very first testbed for sequence-to-sequence models.
Setup
End of explanation
"""
def download_nmt():
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
return path_to_file
"""
Explanation: Data cleaning and preparation
We will use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the following format:
May I borrow this book? ¿Puedo tomar prestado este libro?
A variety of languages are available, but we will use the English-Spanish dataset. After downloading the dataset, here are the steps we follow to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a vocabulary with a word index (mapping from word to id) and a reverse word index (mapping from id to word).
Pad each sentence to a maximum length (since the inputs to the recurrent encoder must all be padded to that maximum length).
End of explanation
"""
class NMTDataset:
def __init__(self, problem_type='en-spa'):
self.problem_type = 'en-spa'
self.inp_lang_tokenizer = None
self.targ_lang_tokenizer = None
def unicode_to_ascii(self, s):
return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn')
## Step 1 and Step 2
def preprocess_sentence(self, w):
w = self.unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.strip()
# adding a start and an end token to the sentence
# so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
def create_dataset(self, path, num_examples):
# path : path to spa-eng.txt file
# num_examples : Limit the total number of training examples for faster training (set num_examples = len(lines) to use full data)
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[self.preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
# Step 3 and Step 4
def tokenize(self, lang):
# lang = list of sentences in a language
# print(len(lang), "example sentence: {}".format(lang[0]))
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='', oov_token='<OOV>')
lang_tokenizer.fit_on_texts(lang)
## tf.keras.preprocessing.text.Tokenizer.texts_to_sequences converts string (w1, w2, w3, ......, wn)
## to a list of corresponding integer ids of words (id_w1, id_w2, id_w3, ...., id_wn)
tensor = lang_tokenizer.texts_to_sequences(lang)
## tf.keras.preprocessing.sequence.pad_sequences takes argument a list of integer id sequences
## and pads the sequences to match the longest sequences in the given input
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor, padding='post')
return tensor, lang_tokenizer
def load_dataset(self, path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = self.create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = self.tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = self.tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
def call(self, num_examples, BUFFER_SIZE, BATCH_SIZE):
file_path = download_nmt()
input_tensor, target_tensor, self.inp_lang_tokenizer, self.targ_lang_tokenizer = self.load_dataset(file_path, num_examples)
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
train_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train))
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
val_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_val, target_tensor_val))
val_dataset = val_dataset.batch(BATCH_SIZE, drop_remainder=True)
return train_dataset, val_dataset, self.inp_lang_tokenizer, self.targ_lang_tokenizer
BUFFER_SIZE = 32000
BATCH_SIZE = 64
# Let's limit the #training examples for faster training
num_examples = 30000
dataset_creator = NMTDataset('en-spa')
train_dataset, val_dataset, inp_lang, targ_lang = dataset_creator.call(num_examples, BUFFER_SIZE, BATCH_SIZE)
example_input_batch, example_target_batch = next(iter(train_dataset))
example_input_batch.shape, example_target_batch.shape
"""
Explanation: Define an NMTDataset class with the functions needed to carry out steps 1 to 4 above
call() returns:
train_dataset and val_dataset: tf.data.Dataset objects
inp_lang_tokenizer and targ_lang_tokenizer: tf.keras.preprocessing.text.Tokenizer objects
End of explanation
"""
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
max_length_input = example_input_batch.shape[1]
max_length_output = example_target_batch.shape[1]
embedding_dim = 256
units = 1024
steps_per_epoch = num_examples//BATCH_SIZE
print("max_length_english, max_length_spanish, vocab_size_english, vocab_size_spanish")
max_length_input, max_length_output, vocab_inp_size, vocab_tar_size
#####
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
##-------- LSTM layer in Encoder ------- ##
self.lstm_layer = tf.keras.layers.LSTM(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, h, c = self.lstm_layer(x, initial_state = hidden)
return output, h, c
def initialize_hidden_state(self):
return [tf.zeros((self.batch_sz, self.enc_units)), tf.zeros((self.batch_sz, self.enc_units))]
## Test Encoder Stack
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_h, sample_c = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder h vector shape: (batch size, units) {}'.format(sample_h.shape))
print ('Encoder c vector shape: (batch size, units) {}'.format(sample_c.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz, attention_type='luong'):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.attention_type = attention_type
# Embedding Layer
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
#Final Dense layer on which softmax will be applied
self.fc = tf.keras.layers.Dense(vocab_size)
# Define the fundamental cell for decoder recurrent structure
self.decoder_rnn_cell = tf.keras.layers.LSTMCell(self.dec_units)
# Sampler
self.sampler = tfa.seq2seq.sampler.TrainingSampler()
# Create attention mechanism with memory = None
self.attention_mechanism = self.build_attention_mechanism(self.dec_units,
None, self.batch_sz*[max_length_input], self.attention_type)
# Wrap attention mechanism with the fundamental rnn cell of decoder
self.rnn_cell = self.build_rnn_cell(batch_sz)
# Define the decoder with respect to fundamental rnn cell
self.decoder = tfa.seq2seq.BasicDecoder(self.rnn_cell, sampler=self.sampler, output_layer=self.fc)
def build_rnn_cell(self, batch_sz):
rnn_cell = tfa.seq2seq.AttentionWrapper(self.decoder_rnn_cell,
self.attention_mechanism, attention_layer_size=self.dec_units)
return rnn_cell
def build_attention_mechanism(self, dec_units, memory, memory_sequence_length, attention_type='luong'):
# ------------- #
# attention_type: which sort of attention (Bahdanau, Luong)
# dec_units: final dimension of attention outputs
# memory: encoder hidden states of shape (batch_size, max_length_input, enc_units)
# memory_sequence_length: 1d array of shape (batch_size) with every element set to max_length_input (for masking purpose)
if(attention_type=='bahdanau'):
return tfa.seq2seq.BahdanauAttention(units=dec_units, memory=memory, memory_sequence_length=memory_sequence_length)
else:
return tfa.seq2seq.LuongAttention(units=dec_units, memory=memory, memory_sequence_length=memory_sequence_length)
def build_initial_state(self, batch_sz, encoder_state, Dtype):
decoder_initial_state = self.rnn_cell.get_initial_state(batch_size=batch_sz, dtype=Dtype)
decoder_initial_state = decoder_initial_state.clone(cell_state=encoder_state)
return decoder_initial_state
def call(self, inputs, initial_state):
x = self.embedding(inputs)
outputs, _, _ = self.decoder(x, initial_state=initial_state, sequence_length=self.batch_sz*[max_length_output-1])
return outputs
# Test decoder stack
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE, 'luong')
sample_x = tf.random.uniform((BATCH_SIZE, max_length_output))
decoder.attention_mechanism.setup_memory(sample_output)
initial_state = decoder.build_initial_state(BATCH_SIZE, [sample_h, sample_c], tf.float32)
sample_decoder_outputs = decoder(sample_x, initial_state)
print("Decoder Outputs Shape: ", sample_decoder_outputs.rnn_output.shape)
"""
Explanation: Some important parameters
End of explanation
"""
optimizer = tf.keras.optimizers.Adam()
def loss_function(real, pred):
# real shape = (BATCH_SIZE, max_length_output)
# pred shape = (BATCH_SIZE, max_length_output, tar_vocab_size )
cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
loss = cross_entropy(y_true=real, y_pred=pred)
mask = tf.logical_not(tf.math.equal(real,0)) #output 0 for y=0 else output 1
mask = tf.cast(mask, dtype=loss.dtype)
loss = mask* loss
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Define the optimizer and the loss function
End of explanation
"""
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
"""
Explanation: Checkpoints (object-based saving)
End of explanation
"""
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_h, enc_c = encoder(inp, enc_hidden)
dec_input = targ[ : , :-1 ] # Ignore <end> token
real = targ[ : , 1: ] # ignore <start> token
# Set the AttentionMechanism object with encoder_outputs
decoder.attention_mechanism.setup_memory(enc_output)
# Create AttentionWrapperState as initial_state for decoder
decoder_initial_state = decoder.build_initial_state(BATCH_SIZE, [enc_h, enc_c], tf.float32)
pred = decoder(dec_input, decoder_initial_state)
logits = pred.rnn_output
loss = loss_function(real, logits)
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
"""
Explanation: One train_step operation
End of explanation
"""
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
# print(enc_hidden[0].shape, enc_hidden[1].shape)
for (batch, (inp, targ)) in enumerate(train_dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
"""
Explanation: Train the model
End of explanation
"""
def evaluate_sentence(sentence):
sentence = dataset_creator.preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_input,
padding='post')
inputs = tf.convert_to_tensor(inputs)
inference_batch_size = inputs.shape[0]
result = ''
enc_start_state = [tf.zeros((inference_batch_size, units)), tf.zeros((inference_batch_size,units))]
enc_out, enc_h, enc_c = encoder(inputs, enc_start_state)
dec_h = enc_h
dec_c = enc_c
start_tokens = tf.fill([inference_batch_size], targ_lang.word_index['<start>'])
end_token = targ_lang.word_index['<end>']
greedy_sampler = tfa.seq2seq.GreedyEmbeddingSampler()
# Instantiate BasicDecoder object
decoder_instance = tfa.seq2seq.BasicDecoder(cell=decoder.rnn_cell, sampler=greedy_sampler, output_layer=decoder.fc)
# Setup Memory in decoder stack
decoder.attention_mechanism.setup_memory(enc_out)
# set decoder_initial_state
decoder_initial_state = decoder.build_initial_state(inference_batch_size, [enc_h, enc_c], tf.float32)
### Since the BasicDecoder wraps around the Decoder's rnn cell only, you have to ensure that the inputs to the BasicDecoder
### decoding step are the outputs of the embedding layer. tfa.seq2seq.GreedyEmbeddingSampler() takes care of this.
### You only need to get the weights of the embedding layer, which can be done with decoder.embedding.variables[0], and pass this matrix to the BasicDecoder's call() function
decoder_embedding_matrix = decoder.embedding.variables[0]
outputs, _, _ = decoder_instance(decoder_embedding_matrix, start_tokens = start_tokens, end_token= end_token, initial_state=decoder_initial_state)
return outputs.sample_id.numpy()
def translate(sentence):
result = evaluate_sentence(sentence)
print(result)
result = targ_lang.sequences_to_texts(result)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
"""
Explanation: Use tf-addons BasicDecoder for decoding
End of explanation
"""
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# wrong translation
translate(u'trata de averiguarlo.')
"""
Explanation: Restore the latest checkpoint and test
End of explanation
"""
def beam_evaluate_sentence(sentence, beam_width=3):
sentence = dataset_creator.preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_input,
padding='post')
inputs = tf.convert_to_tensor(inputs)
inference_batch_size = inputs.shape[0]
result = ''
enc_start_state = [tf.zeros((inference_batch_size, units)), tf.zeros((inference_batch_size,units))]
enc_out, enc_h, enc_c = encoder(inputs, enc_start_state)
dec_h = enc_h
dec_c = enc_c
start_tokens = tf.fill([inference_batch_size], targ_lang.word_index['<start>'])
end_token = targ_lang.word_index['<end>']
# From official documentation
# NOTE If you are using the BeamSearchDecoder with a cell wrapped in AttentionWrapper, then you must ensure that:
# The encoder output has been tiled to beam_width via tfa.seq2seq.tile_batch (NOT tf.tile).
# The batch_size argument passed to the get_initial_state method of this wrapper is equal to true_batch_size * beam_width.
# The initial state created with get_initial_state above contains a cell_state value containing properly tiled final state from the encoder.
enc_out = tfa.seq2seq.tile_batch(enc_out, multiplier=beam_width)
decoder.attention_mechanism.setup_memory(enc_out)
print("beam_with * [batch_size, max_length_input, rnn_units] : 3 * [1, 16, 1024]] :", enc_out.shape)
# set decoder_initial_state which is an AttentionWrapperState considering beam_width
hidden_state = tfa.seq2seq.tile_batch([enc_h, enc_c], multiplier=beam_width)
decoder_initial_state = decoder.rnn_cell.get_initial_state(batch_size=beam_width*inference_batch_size, dtype=tf.float32)
decoder_initial_state = decoder_initial_state.clone(cell_state=hidden_state)
# Instantiate BeamSearchDecoder
decoder_instance = tfa.seq2seq.BeamSearchDecoder(decoder.rnn_cell,beam_width=beam_width, output_layer=decoder.fc)
decoder_embedding_matrix = decoder.embedding.variables[0]
# The BeamSearchDecoder object's call() function takes care of everything.
outputs, final_state, sequence_lengths = decoder_instance(decoder_embedding_matrix, start_tokens=start_tokens, end_token=end_token, initial_state=decoder_initial_state)
# outputs is a tfa.seq2seq.FinalBeamSearchDecoderOutput object.
# The final beam predictions are stored in outputs.predicted_ids
# outputs.beam_search_decoder_output is a tfa.seq2seq.BeamSearchDecoderOutput object which keeps track of beam_scores and parent_ids while performing each beam decoding step
# final_state is a tfa.seq2seq.BeamSearchDecoderState object.
# sequence_lengths has shape [inference_batch_size, beam_width] and gives the maximum length of the beams that are generated
# outputs.predicted_ids.shape = (inference_batch_size, time_step_outputs, beam_width)
# outputs.beam_search_decoder_output.scores.shape = (inference_batch_size, time_step_outputs, beam_width)
# Convert the shape of outputs and beam_scores to (inference_batch_size, beam_width, time_step_outputs)
final_outputs = tf.transpose(outputs.predicted_ids, perm=(0,2,1))
beam_scores = tf.transpose(outputs.beam_search_decoder_output.scores, perm=(0,2,1))
return final_outputs.numpy(), beam_scores.numpy()
def beam_translate(sentence):
result, beam_scores = beam_evaluate_sentence(sentence)
print(result.shape, beam_scores.shape)
for beam, score in zip(result, beam_scores):
print(beam.shape, score.shape)
output = targ_lang.sequences_to_texts(beam)
output = [a[:a.index('<end>')] for a in output]
beam_score = [a.sum() for a in score]
print('Input: %s' % (sentence))
for i in range(len(output)):
print('{} Predicted translation: {} {}'.format(i+1, output[i], beam_score[i]))
beam_translate(u'hace mucho frio aqui.')
beam_translate(u'¿todavia estan en casa?')
"""
Explanation: Use tf-addons BeamSearchDecoder
End of explanation
"""
|
parrt/msan692 | notes/datastructures.ipynb | mit | s = {1,3,2,9}
"""
Explanation: Data structures
For a refresher on object-oriented programming, see Object-oriented programming.
A simple set implementation
Sets in Python can be specified with set notation:
End of explanation
"""
s = set()
s.add(1)
s.add(3)
"""
Explanation: Or by creating a set object, assigning it to a variable, and then manually adding elements:
End of explanation
"""
class MySet:
def __init__(self):
self.elements = []
def add(self, x):
if x not in self.elements:
self.elements.append(x)
s = MySet()
s.add(3) # same as MySet.add(s,3)
s.add(3)
s.add(2)
s.add('cat')
s.elements
from lolviz import *
objviz(s)
"""
Explanation: We can build our own set object implementation by creating a class definition:
End of explanation
"""
class MySet:
def __init__(self):
self.elements = []
def add(self, x):
if x not in self.elements:
self.elements.append(x)
def hasmember(self, x):
return x in self.elements
s = MySet()
s.add(3) # same as MySet.add(s,3)
s.add(3)
s.add(2)
s.add('cat')
s.hasmember(3), s.hasmember(99)
"""
Explanation: Question: How expensive is it to add an element to a set with this implementation?
Exercise
Add a method called hasmember() that returns true or false according to whether parameter x is a member of the set.
End of explanation
"""
class LLNode:
def __init__(self, value, next=None):
self.value = value
self.next = next
head = LLNode('tombu')
callsviz(varnames='head')
head = LLNode('parrt', head)
callsviz(varnames='head')
head = LLNode("xue", head)
callsviz(varnames='head')
"""
Explanation: Linked lists -- the gateway drug
We've studied arrays/lists that are built into Python but they are not always the best kind of list to use. Sometimes, we are inserting and deleting things from the head or middle of the list. If we do this in lists made up of contiguous cells in memory, we have to move a lot of cells around to make room for a new element or to close a hole made by a deletion. Most importantly, linked lists are the degenerate form of a general object graph. So, it makes sense to start with the simple versions and move up to general graphs.
Linked lists allow us to efficiently insert and remove things anywhere we want, at the cost of more memory.
A linked list associates a next pointer with each value. We call these things nodes and here's a simple implementation for node objects:
End of explanation
"""
p = head
while p is not None:
print(p.value)
p = p.next
"""
Explanation: Walk list
To walk a list, we use the notion of a cursor, which we can think of as a finger that moves along a data structure from node to node. We initialize the cursor to point to the first node of the list, the head, and then walk the cursor through the list via the next fields:
End of explanation
"""
class LLNode:
def __init__(self, value, next=None):
self.value = value
self.next = next
def exists(self, x):
p = self # start looking at this node
while p is not None:
if x==p.value:
return True
p = p.next
return False
head = LLNode('tombu')
head = LLNode('parrt', head)
head = LLNode("xue", head)
head.exists('parrt'), head.exists('part')
"""
Explanation: Question: How fast can we walk the linked list?
Exercise
Modify the walking code so that it lives in a method of LLNode called exists(self, x) that looks for a node with value x starting at self. If we test with head.exists('parrt') then self would be our global head variable. Have the function return true if x exists in the list, else return false. You can test it with:
python
head = LLNode('tombu')
head = LLNode('parrt', head)
head = LLNode("xue", head)
head.exists('parrt'), head.exists('part')
End of explanation
"""
x = LLNode('mary')
callviz(varnames=['head','x'])
"""
Explanation: Insertion at head
If we want to insert an element at the front of a linked list, we create a node to hold the value and set its next pointer to point to the old head. Then we have the head variable point at the new node. Here is the sequence.
Create new node
End of explanation
"""
x.next = head
callviz(varnames=['head','x'])
"""
Explanation: Make next field of new node point to head
End of explanation
"""
head = x
callviz(varnames=['head','x'])
"""
Explanation: Make head point at new node
End of explanation
"""
# to delete xue, make previous node skip over xue
xue = head.next
callviz(varnames=['head','x','xue'])
head.next = xue.next
callviz(varnames=['head','x'])
"""
Explanation: Deletion of node
End of explanation
"""
head.next = xue.next
callviz(varnames=['head','x','xue'])
"""
Explanation: Notice that xue still points at the node but we are going to ignore that variable from now on. Moving from the head of the list, we still cannot see the node with 'xue' in it.
End of explanation
"""
before_tombu = head.next
callviz(varnames=['head','x','before_tombu'])
before_tombu.next = None
callviz(varnames=['head','x','before_tombu'])
"""
Explanation: Exercise
Get a pointer to the node with value tombu and then delete it from the list using the same technique we just saw.
End of explanation
"""
class Tree:
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
root = Tree('parrt')
root.left = Tree('mary')
root.right = Tree('april')
treeviz(root)
root = Tree('parrt', Tree('mary'), Tree('april'))
treeviz(root)
root = Tree('parrt')
mary = Tree('mary')
april = Tree('april')
jim = Tree('jim')
sri = Tree('sri')
mike = Tree('mike')
root.left = mary
root.right = april
mary.left = jim
mary.right = mike
april.right = sri
treeviz(root)
"""
Explanation: Binary trees
The tree data structure is one of the most important in computer science and is extremely common in data science as well. Decision trees, which form the core of gradient boosting machines and random forests (machine learning algorithms), are naturally represented as trees in memory. When we process HTML and XML files, those are generally represented by trees. For example:
<img align="right" src="figures/xml-tree.png" width="200"></td>
xml
<bookstore>
<book category="cooking">
<title lang="en">Everyday Italian</title>
<author>Giada De Laurentiis</author>
<year>2005</year>
<price>30.00</price>
</book>
<book category="web">
<title lang="en">Learning XML</title>
<author>Erik T. Ray</author>
<year>2003</year>
<price>39.95</price>
</book>
</bookstore>
We're going to look at a simple kind of tree that has at most two children: a binary tree. A node that has no children is called a leaf and non-leaves are called internal nodes.
In general, trees with $n$ nodes have $n-1$ edges. Each node has a single incoming edge and the root has none.
Nodes have parents and children and siblings (at the same level).
Sometimes nodes have links back to their parents for programming convenience reasons. That would make it a graph not a tree but we still consider it a tree.
End of explanation
"""
class NTree:
def __init__(self, value):
self.value = value
self.children = []
def addchild(self, child):
if isinstance(child, NTree):
self.children.append(child)
root2 = NTree('parrt')
mary = NTree('mary')
april = NTree('april')
jim = NTree('jim')
sri = NTree('sri')
mike = NTree('mike')
root2.addchild(mary)
root2.addchild(jim)
root2.addchild(sri)
sri.addchild(mike)
sri.addchild(april)
objviz(root2)
"""
Explanation: Exercise
Create a class definition for NTree that allows arbitrary numbers of children. (Use a list for field children rather than left and right.) The constructor should init an empty children list. Test your code using:
```python
from lolviz import objviz
root2 = NTree('parrt')
mary = NTree('mary')
april = NTree('april')
jim = NTree('jim')
sri = NTree('sri')
mike = NTree('mike')
root2.addchild(mary)
root2.addchild(jim)
root2.addchild(sri)
sri.addchild(mike)
sri.addchild(april)
objviz(root2)
```
Solution
End of explanation
"""
treeviz(root)
def walk(t):
"Depth-first walk of binary tree"
if t is None: return
# if t.left is None: callsviz(varnames=['t']).view()
print(t.value) # "visit" or process this node
walk(t.left) # walk into the left door
walk(t.right) # after visiting all those, enter right door
walk(root)
"""
Explanation: Walking trees
Walking a tree is a matter of moving a cursor like we did with the linked lists above. The goal is to visit each node in the tree. We start out by having the cursor point at the root of the tree and then walk downwards until we hit leaves, and then we come back up and try other alternatives.
A good physical analogy: imagine a person (cursor) from HR needing to speak (visit) each person in a company starting with the president/CEO. Here's a sample org chart:
<img src="figures/orgchart.png" width="200">
The general visit algorithm starting at node p is meet with p then visit each direct report. Then visit all of their direct reports, one level of the tree at a time. The node visitation sequence would be A,B,C,F,H,J,... This is a breadth-first search of the tree and easy to describe but a bit more work to implement that a depth-first search. Depth first means visiting a person then visit their first direct report and that person's direct report etc... until you reach a leaf node. Then back up a level and move to next direct report. That visitation sequence is A,B,C,D,E,F,G,H,I,J,K,L.
If you'd like to start at node B, not A, what is the procedure? The same, of course. So visiting A means, say, printing A then visiting B. Visiting B means visiting C, and when that completes, visiting F, etc... The key is that the procedure for visiting a node is exactly the same regardless of which node you start with. This is generally true for any self-similar data structure like a tree.
Another easy way to think about binary tree visitation in particular is positioning yourself in a room with a bunch of doors as choices. Each door leads to other rooms, which might also have doors leading to other rooms. We can think of a room as a node and doors as pointers to other nodes. Each room is identical and has 0, 1, or 2 doors (for a binary tree). At the root node we might see two choices and, to explore all nodes, we can visit each door in turn. Let's go left:
<img src="figures/left-door.png" width="100">
After exploring all possible rooms by taking the left door, we come all the way back out to the root room and try the next alternative on the right:
<img src="figures/right-door.png" width="100">
Algorithmically what were doing in each room is
procedure visit room:
if left door exists, visit rooms accessible through left door
if right door exists, visit rooms accessible through right door
Or in code notation:
python
def visit(room):
if room.left: visit(room.left)
if room.right: visit(room.right)
This mechanism works from any room. Imagine waking up and finding yourself in a room with two doors. You have no idea whether you are at the root or somewhere in the middle of a labyrinth (maze) of rooms.
This approach is called backtracking.
Let's code this up but make a regular function not a method of the tree class to keep things simple. Let's look at that tree again:
End of explanation
"""
def fact(n):
print(f"fact({n})")
if n==0: return 1
return n * fact(n-1)
fact(10)
"""
Explanation: That is a recursive function, meaning that walk calls itself. It's really no different than the recurrence relations we use in mathematics, such as the gradient descent recurrence:
$x_{t+1} = x_t - \eta f'(x_t)$
Variable $x$ is a function of previous incarnations of itself.
End of explanation
"""
def walk(t):
if t is None: return
walk(t.left)
walk(t.right)
print(t.value) # process after visiting children
walk(root)
"""
Explanation: Don't let the recursion scare you, just pretend that you are calling a different function or that you are calling the same function except that it is known to be correct. We call that the "recursive leap of faith." (See Fundamentals of Recursion, although that one uses C++ not Python.)
As the old joke goes: "To truly understand recursion, you must first understand recursion."
The order in which we reach (enter/exit) each node during the search is always the same for a given search strategy, such as depth first search. Here is a visualization from Wikipedia:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/Sorted_binary_tree_preorder.svg/440px-Sorted_binary_tree_preorder.svg.png" width="250">
We always try to go as deep as possible before exploring siblings.
Now, notice the black dots on the traversal. That signifies processing or "visiting" a node and in this case is done before visiting the children. When we process a node and then its children, we call that a preorder traversal. If we process a node after walking the children, we call it a post-order traversal:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Sorted_binary_tree_postorder.svg/440px-Sorted_binary_tree_postorder.svg.png" width="250">
In code, that just means moving the processing step to after the walk of the children:
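For example, here is a minimal sketch of both orders side by side, assuming the same value, left, and right node fields used in this notebook:
```python
def walk_preorder(t):
    if t is None: return
    print(t.value)            # process the node first...
    walk_preorder(t.left)     # ...then walk its children
    walk_preorder(t.right)

def walk_postorder(t):
    if t is None: return
    walk_postorder(t.left)    # walk the children first...
    walk_postorder(t.right)
    print(t.value)            # ...then process the node
```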
End of explanation
"""
class Tree:
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
def walk(t:Tree) -> int:
if t is None: return 0
return t.value + walk(t.left) + walk(t.right)
a = Tree(3)
b = Tree(5)
c = Tree(10)
d = Tree(9)
e = Tree(4)
f = Tree(1)
a.left = b
a.right = c
b.left = d
b.right = e
e.right = f
treeviz(a)
print(walk(a), walk(b), walk(c))
"""
Explanation: In both cases we are performing a depth-first walk of the tree, which means that we are immediately seeking the leaves rather than siblings. A depth first walk scans down all of the left child fields of the nodes until it hits a leaf and then goes back up a level, looking for children at that level.
In contrast, a breadth-first walk processes all children before looking at grandchildren. This is a less common walk but, for our tree, would be the sequence parrt, mary, april, jim, mike, sri. In a sense, breadth first processes one level of the tree at a time:
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d1/Sorted_binary_tree_breadth-first_traversal.svg/440px-Sorted_binary_tree_breadth-first_traversal.svg.png" width="250">
Exercise
Alter the depth-first recursive tree walk above to sum the values in a binary tree. Have walk() return the sum of a node's value and all of its children's values. Test with:
```python
a = Tree(3)
b = Tree(5)
c = Tree(10)
d = Tree(9)
e = Tree(4)
f = Tree(1)
a.left = b
a.right = c
b.left = d
b.right = e
e.right = f
treeviz(a)
print(walk(a), walk(b), walk(c))
```
End of explanation
"""
import graphviz as gv
gv.Source("""
digraph G {
node [shape=box penwidth="0.6" margin="0.0" fontname="Helvetica" fontsize=10]
edge [arrowsize=.4 penwidth="0.6"]
rankdir=LR;
ranksep=.25;
cat->dog
dog->cat
dog->horse
dog->zebra
horse->zebra
zebra->llama
}
""")
"""
Explanation: Graphs
Trees are actually a subset of the class of directed, acyclic graphs. If we remove the acyclic restriction and the restriction that nodes have a single incoming edge, we get a general, directed graph. These are also extremely common in computer science and are used to represent graphs of users in a social network, locations on a map, or a graph of webpages, which is how Google does page ranking.
graphviz
You might find it useful to display graphs visually and graphviz is an excellent way to do that. Here's an example
End of explanation
"""
class GNode:
def __init__(self, value):
self.value = value
self.edges = [] # outgoing edges
def connect(self, other):
self.edges.append(other)
cat = GNode('cat')
dog = GNode('dog')
horse = GNode('horse')
zebra = GNode('zebra')
llama = GNode('llama')
cat.connect(dog)
dog.connect(cat)
dog.connect(horse)
dog.connect(zebra)
horse.connect(zebra)
zebra.connect(llama)
objviz(cat)
"""
Explanation: Once again, it's very convenient to represent a node in this graph as an object, which means we need a class definition:
End of explanation
"""
def walk(g, visited):
"Depth-first walk of a graph"
if g is None or g in visited: return
visited.add(g) # mark as visited
print(g.value) # process before visiting outgoing edges
for node in g.edges:
walk(node, visited) # walk all outgoing edge targets
walk(cat, set())
"""
Explanation: Walking graphs
Walking a graph (depth-first) is just like walking a tree in that we use backtracking to try all possible branches out of every node until we have reached all reachable nodes. When we run into a dead end, we back up to the most recently available unvisited path and try that. That's how you get from the entrance to the exit of a maze.
<img src="figures/maze.jpg" width="300">
The only difference between walking a tree and walking a graph is that we have to watch out for cycles when walking a graph, so that we don't get stuck in an infinite loop. We leave a trail of breadcrumbs or candies or string to help us keep track of where we have visited and where we have not. If we run into our trail, we have hit a cycle and must also backtrack to avoid an infinite loop. This is a depth first search.
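As an aside, the same depth-first idea can be written without recursion by keeping an explicit stack of nodes still to visit. Here is a minimal sketch, assuming the GNode objects (with value and edges fields) used in this notebook:
```python
def walk_iterative(start):
    "Depth-first walk of a graph using an explicit stack instead of recursion"
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()            # most recently discovered node first
        if node is None or node in visited:
            continue
        visited.add(node)             # breadcrumb: never revisit a node
        print(node.value)
        stack.extend(node.edges)      # push the outgoing edge targets
```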
Here's a nice visualization website for graph walking.
<a href="http://algoanim.ide.sk/index.php?page=showanim&id=47)"><img src="figures/graph-dfs-icon.png" width="300"></a>
In code, here is how we perform a depth-first search on a graph:
End of explanation
"""
walk(llama, set())
walk(horse, set())
"""
Explanation: Where we start the walk of the graph matters:
End of explanation
"""
import numpy as np
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def distance(self, other):
return np.sqrt( (self.x - other.x)**2 + (self.y - other.y)**2 )
def __add__(self,other):
x = self.x + other.x
y = self.y + other.y
return Point(x,y)
def __str__(self):
return f"({self.x},{self.y})"
p = Point(3,4)
q = Point(5,6)
print(p, q)
print(p + q) # calls p.__add__(q) or Point.__add__(p,q)
print(Point.__add__(p,q))
"""
Explanation: Operator overloading
(Note: We overload operators but override methods in a subclass definition)
Python allows class definitions to implement functions that are called when standard operator symbols such as + and / are applied to objects of that type. This is extremely useful for mathematical libraries such as numpy, but is often abused. Note that you could redefine subtraction to be multiplication when someone used the - sign. (Yikes!)
Here's an extension to Point that supports + for Point addition:
End of explanation
"""
import numpy as np
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def distance(self, other):
return np.sqrt( (self.x - other.x)**2 + (self.y - other.y)**2 )
def __add__(self,other):
x = self.x + other.x
y = self.y + other.y
return Point(x,y)
def __sub__(self,other):
x = self.x - other.x
y = self.y - other.y
return Point(x,y)
def __str__(self):
return f"({self.x},{self.y})"
p = Point(5,4)
q = Point(1,5)
print(p, q)
print(p - q)
"""
Explanation: Exercise
Add a method to implement the - subtraction operator for Point so that the following code works:
python
p = Point(5,4)
q = Point(1,5)
print(p, q)
print(p - q)
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/automaton.shuffle.ipynb | gpl-3.0 | import vcsn
"""
Explanation: automaton.shuffle(a1, ...)
The (accessible part of the) shuffle product of automata.
Preconditions:
- all the labelsets are letterized
See also:
- automaton.conjunction
- automaton.infiltration
- expression.shuffle
Examples
End of explanation
"""
std = lambda exp: vcsn.B.expression(exp).standard()
a = std('abc')
a
a.shuffle(std('xyz'))
"""
Explanation: Boolean Automata
The shuffle product of automata computes the shuffling of their languages: all the possible interleavings.
End of explanation
"""
c = vcsn.context('lal_char, seriesset<lal_char, z>')
std = lambda exp: c.expression(exp).standard()
std('<A>a<B>b').shuffle(std('<X>x<Y>y'))
"""
Explanation: Weighted automata
In the case of weighted automata, weights are "kept" with the letters.
End of explanation
"""
x = std('<x>a')
y = std('<y>a')
z = std('<z>a')
x.shuffle(y, z)
x.shuffle(y).shuffle(z)
"""
Explanation: Associativity
This operator is associative, and it is actually implemented as a variadic operator; a.shuffle(b, c) is not exactly the same as a.shuffle(b).shuffle(c): they are the same automata, but the former is labeled with 3-tuples, not 2-tuples.
End of explanation
"""
|
blockstack/packaging | imported/future/docs/notebooks/.ipynb_checkpoints/Writing Python 2-3 compatible code-checkpoint.ipynb | gpl-3.0 | # Python 2 only:
print 'Hello'
# Python 2 and 3:
print('Hello')
"""
Explanation: Cheat Sheet: Writing Python 2-3 compatible code
Copyright (c): 2013-2015 Python Charmers Pty Ltd, Australia.
Author: Ed Schofield.
Licence: Creative Commons Attribution.
A PDF version is here: http://python-future.org/compatible_idioms.pdf
This notebook shows you idioms for writing future-proof code that is compatible with both versions of Python: 2 and 3. It accompanies Ed Schofield's talk at PyCon AU 2014, "Writing 2/3 compatible code". (The video is here: http://www.youtube.com/watch?v=KOqk8j11aAI&t=10m14s.)
Minimum versions:
Python 2: 2.6+
Python 3: 3.3+
Setup
The imports below refer to these pip-installable packages on PyPI:
import future # pip install future
import builtins # pip install future
import past # pip install future
import six # pip install six
The following scripts are also pip-installable:
futurize # pip install future
pasteurize # pip install future
See http://python-future.org and https://pythonhosted.org/six/ for more information.
Essential syntax differences
print
End of explanation
"""
# Python 2 only:
print 'Hello', 'Guido'
# Python 2 and 3:
from __future__ import print_function # (at top of module)
print('Hello', 'Guido')
# Python 2 only:
print >> sys.stderr, 'Hello'
# Python 2 and 3:
from __future__ import print_function
print('Hello', file=sys.stderr)
# Python 2 only:
print 'Hello',
# Python 2 and 3:
from __future__ import print_function
print('Hello', end='')
"""
Explanation: To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple:
End of explanation
"""
# Python 2 only:
raise ValueError, "dodgy value"
# Python 2 and 3:
raise ValueError("dodgy value")
"""
Explanation: Raising exceptions
End of explanation
"""
# Python 2 only:
traceback = sys.exc_info()[2]
raise ValueError, "dodgy value", traceback
# Python 3 only:
raise ValueError("dodgy value").with_traceback()
# Python 2 and 3: option 1
from six import reraise as raise_
# or
from future.utils import raise_
traceback = sys.exc_info()[2]
raise_(ValueError, "dodgy value", traceback)
# Python 2 and 3: option 2
from future.utils import raise_with_traceback
raise_with_traceback(ValueError("dodgy value"))
"""
Explanation: Raising exceptions with a traceback:
End of explanation
"""
# Setup:
class DatabaseError(Exception):
pass
# Python 3 only
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise DatabaseError('failed to open') from exc
# Python 2 and 3:
from future.utils import raise_from
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise_from(DatabaseError('failed to open'), exc)
# Testing the above:
try:
fd = FileDatabase('non_existent_file.txt')
except Exception as e:
assert isinstance(e.__cause__, IOError) # FileNotFoundError on Py3.3+ inherits from IOError
"""
Explanation: Exception chaining (PEP 3134):
End of explanation
"""
# Python 2 only:
try:
...
except ValueError, e:
...
# Python 2 and 3:
try:
...
except ValueError as e:
...
"""
Explanation: Catching exceptions
End of explanation
"""
# Python 2 only:
assert 2 / 3 == 0
# Python 2 and 3:
assert 2 // 3 == 0
"""
Explanation: Division
Integer division (rounding down):
End of explanation
"""
# Python 3 only:
assert 3 / 2 == 1.5
# Python 2 and 3:
from __future__ import division # (at top of module)
assert 3 / 2 == 1.5
"""
Explanation: "True division" (float division):
End of explanation
"""
# Python 2 only:
a = b / c # with any types
# Python 2 and 3:
from past.utils import old_div
a = old_div(b, c) # always same as / on Py2
"""
Explanation: "Old division" (i.e. compatible with Py2 behaviour):
End of explanation
"""
# Python 2 only
k = 9223372036854775808L
# Python 2 and 3:
k = 9223372036854775808
# Python 2 only
bigint = 1L
# Python 2 and 3
from builtins import int
bigint = int(1)
"""
Explanation: Long integers
Short integers are gone in Python 3 and long has become int (without the trailing L in the repr).
End of explanation
"""
# Python 2 only:
if isinstance(x, (int, long)):
...
# Python 3 only:
if isinstance(x, int):
...
# Python 2 and 3: option 1
from builtins import int # subclass of long on Py2
if isinstance(x, int): # matches both int and long on Py2
...
# Python 2 and 3: option 2
from past.builtins import long
if isinstance(x, (int, long)):
...
"""
Explanation: To test whether a value is an integer (of any kind):
End of explanation
"""
0644 # Python 2 only
0o644 # Python 2 and 3
"""
Explanation: Octal constants
End of explanation
"""
`x` # Python 2 only
repr(x) # Python 2 and 3
"""
Explanation: Backtick repr
End of explanation
"""
class BaseForm(object):
pass
class FormType(type):
pass
# Python 2 only:
class Form(BaseForm):
__metaclass__ = FormType
pass
# Python 3 only:
class Form(BaseForm, metaclass=FormType):
pass
# Python 2 and 3:
from six import with_metaclass
# or
from future.utils import with_metaclass
class Form(with_metaclass(FormType, BaseForm)):
pass
"""
Explanation: Metaclasses
End of explanation
"""
# Python 2 only
s1 = 'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
# Python 2 and 3
s1 = u'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
"""
Explanation: Strings and bytes
Unicode (text) string literals
If you are upgrading an existing Python 2 codebase, it may be preferable to mark up all string literals as unicode explicitly with u prefixes:
End of explanation
"""
# Python 2 and 3
from __future__ import unicode_literals # at top of module
s1 = 'The Zen of Python'
s2 = 'きたないのよりきれいな方がいい\n'
"""
Explanation: The futurize and python-modernize tools do not currently offer an option to do this automatically.
If you are writing code for a new project or new codebase, you can use this idiom to make all string literals in a module unicode strings:
End of explanation
"""
# Python 2 only
s = 'This must be a byte-string'
# Python 2 and 3
s = b'This must be a byte-string'
"""
Explanation: See http://python-future.org/unicode_literals.html for more discussion on which style to use.
Byte-string literals
End of explanation
"""
# Python 2 only:
for bytechar in 'byte-string with high-bit chars like \xf9':
...
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
bytechar = bytes([myint])
# Python 2 and 3:
from builtins import bytes
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
bytechar = bytes([myint])
"""
Explanation: To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1:
End of explanation
"""
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1')
# Python 2 and 3:
from builtins import bytes, chr
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1') # forces returning a byte str
"""
Explanation: As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string:
End of explanation
"""
# Python 2 only:
a = u'abc'
b = 'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 1
from past.builtins import basestring # pip install future
a = u'abc'
b = b'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 2: refactor the code to avoid considering
# byte-strings as strings.
from builtins import str
a = u'abc'
b = b'def'
c = b.decode()
assert isinstance(a, str) and isinstance(c, str)
# ...
"""
Explanation: basestring
End of explanation
"""
# Python 2 only:
templates = [u"blog/blog_post_detail_%s.html" % unicode(slug)]
# Python 2 and 3: alternative 1
from builtins import str
templates = [u"blog/blog_post_detail_%s.html" % str(slug)]
# Python 2 and 3: alternative 2
from builtins import str as text
templates = [u"blog/blog_post_detail_%s.html" % text(slug)]
"""
Explanation: unicode
End of explanation
"""
# Python 2 only:
from StringIO import StringIO
# or:
from cStringIO import StringIO
# Python 2 and 3:
from io import BytesIO # for handling byte strings
from io import StringIO # for handling unicode strings
"""
Explanation: StringIO
End of explanation
"""
# Python 2 only:
import submodule2
# Python 2 and 3:
from . import submodule2
# Python 2 and 3:
# To make Py2 code safer (more like Py3) by preventing
# implicit relative imports, you can also add this to the top:
from __future__ import absolute_import
"""
Explanation: Imports relative to a package
Suppose the package is:
mypackage/
__init__.py
submodule1.py
submodule2.py
and the code below is in submodule1.py:
End of explanation
"""
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
"""
Explanation: Dictionaries
End of explanation
"""
# Python 2 only:
for key in heights.iterkeys():
...
# Python 2 and 3:
for key in heights:
...
"""
Explanation: Iterating through dict keys/values/items
Iterable dict keys:
End of explanation
"""
# Python 2 only:
for value in heights.itervalues():
...
# Idiomatic Python 3
for value in heights.values(): # extra memory overhead on Py2
...
# Python 2 and 3: option 1
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
for value in heights.values(): # efficient on Py2 and Py3
...
# Python 2 and 3: option 2
from builtins import itervalues
# or
from six import itervalues
for value in itervalues(heights):
...
"""
Explanation: Iterable dict values:
End of explanation
"""
# Python 2 only:
for (key, value) in heights.iteritems():
...
# Python 2 and 3: option 1
for (key, value) in heights.items(): # inefficient on Py2
...
# Python 2 and 3: option 2
from future.utils import viewitems
for (key, value) in viewitems(heights): # also behaves like a set
...
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
for (key, value) in iteritems(heights):
...
"""
Explanation: Iterable dict items:
End of explanation
"""
# Python 2 only:
keylist = heights.keys()
assert isinstance(keylist, list)
# Python 2 and 3:
keylist = list(heights)
assert isinstance(keylist, list)
"""
Explanation: dict keys/values/items as a list
dict keys as a list:
End of explanation
"""
# Python 2 only:
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
valuelist = heights.values()
assert isinstance(valuelist, list)
# Python 2 and 3: option 1
valuelist = list(heights.values()) # inefficient on Py2
# Python 2 and 3: option 2
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
valuelist = list(heights.values())
# Python 2 and 3: option 3
from future.utils import listvalues
valuelist = listvalues(heights)
# Python 2 and 3: option 4
from future.utils import itervalues
# or
from six import itervalues
valuelist = list(itervalues(heights))
"""
Explanation: dict values as a list:
End of explanation
"""
# Python 2 and 3: option 1
itemlist = list(heights.items()) # inefficient on Py2
# Python 2 and 3: option 2
from future.utils import listitems
itemlist = listitems(heights)
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
itemlist = list(iteritems(heights))
"""
Explanation: dict items as a list:
End of explanation
"""
# Python 2 only
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def next(self): # Py2-style
return self._iter.next().upper()
def __iter__(self):
return self
itr = Upper('hello')
assert itr.next() == 'H' # Py2-style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 1
from builtins import object
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H' # compatible style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 2
from future.utils import implements_iterator
@implements_iterator
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H'
assert list(itr) == list('ELLO')
"""
Explanation: Custom class behaviour
Custom iterators
End of explanation
"""
# Python 2 only:
class MyClass(object):
def __unicode__(self):
return u'Unicode string: \u5b54\u5b50'
def __str__(self):
return unicode(self).encode('utf-8')
a = MyClass()
print(a) # prints encoded string
# Python 2 and 3:
from future.utils import python_2_unicode_compatible
@python_2_unicode_compatible
class MyClass(object):
def __str__(self):
return u'Unicode string: \u5b54\u5b50'
a = MyClass()
print(a) # prints string encoded as utf-8 on Py2
"""
Explanation: Custom __str__ methods
End of explanation
"""
# Python 2 only:
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __nonzero__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
# Python 2 and 3:
from builtins import object
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __bool__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
"""
Explanation: Custom __nonzero__ vs __bool__ method:
End of explanation
"""
# Python 2 only:
for i in xrange(10**8):
...
# Python 2 and 3: forward-compatible
from builtins import range
for i in range(10**8):
...
# Python 2 and 3: backward-compatible
from past.builtins import xrange
for i in xrange(10**8):
...
"""
Explanation: Lists versus iterators
xrange
End of explanation
"""
# Python 2 only
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 1
mylist = list(range(5)) # copies memory on Py2
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 2
from builtins import range
mylist = list(range(5))
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: option 3
from future.utils import lrange
mylist = lrange(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: backward compatible
from past.builtins import range
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
"""
Explanation: range
End of explanation
"""
# Python 2 only:
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 1
# Idiomatic Py3, but inefficient on Py2
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 2
from builtins import map
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 3
try:
from itertools import imap as map
except ImportError:
pass
mynewlist = list(map(f, myoldlist)) # inefficient on Py2
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 4
from future.utils import lmap
mynewlist = lmap(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 5
from past.builtins import map
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
"""
Explanation: map
End of explanation
"""
# Python 2 only:
from itertools import imap
myiter = imap(func, myoldlist)
assert iter(myiter) is myiter  # it's an iterator, not a list
# Python 3 only:
myiter = map(func, myoldlist)
assert iter(myiter) is myiter  # it's an iterator, not a list
# Python 2 and 3: option 1
from builtins import map
myiter = map(func, myoldlist)
assert iter(myiter) is myiter  # it's an iterator, not a list
# Python 2 and 3: option 2
try:
from itertools import imap as map
except ImportError:
pass
myiter = map(func, myoldlist)
assert iter(myiter) is myiter  # it's an iterator, not a list
"""
Explanation: imap
End of explanation
"""
# Python 2 only
f = open('myfile.txt')
data = f.read() # as a byte string
text = data.decode('utf-8')
# Python 2 and 3: alternative 1
from io import open
f = open('myfile.txt', 'rb')
data = f.read() # as bytes
text = data.decode('utf-8') # unicode, not bytes
# Python 2 and 3: alternative 2
from io import open
f = open('myfile.txt', encoding='utf-8')
text = f.read() # unicode, not bytes
"""
Explanation: zip, izip
As above with zip and itertools.izip.
filter, ifilter
As above with filter and itertools.ifilter too.
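For example, the same import pattern shown for map should carry over; this is a sketch that assumes the future package's builtins module also provides these names (as it does for map above):
```python
# Python 2 and 3: iterator-returning zip and filter
from builtins import zip, filter

pairs = zip(['a', 'b'], [1, 2])
evens = filter(lambda x: x % 2 == 0, [1, 2, 3, 4])
assert list(pairs) == [('a', 1), ('b', 2)]
assert list(evens) == [2, 4]
```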
Other builtins
File IO with open()
End of explanation
"""
# Python 2 only:
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
# Python 2 and 3:
from functools import reduce
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
"""
Explanation: reduce()
End of explanation
"""
# Python 2 only:
name = raw_input('What is your name? ')
assert isinstance(name, str) # native str
# Python 2 and 3:
from builtins import input
name = input('What is your name? ')
assert isinstance(name, str) # native str on Py2 and Py3
"""
Explanation: raw_input()
End of explanation
"""
# Python 2 only:
input("Type something safe please: ")
# Python 2 and 3
from builtins import input
eval(input("Type something safe please: "))
"""
Explanation: input()
End of explanation
"""
# Python 2 only:
f = file(pathname)
# Python 2 and 3:
f = open(pathname)
# But preferably, use this:
from io import open
f = open(pathname, 'rb') # if f.read() should return bytes
# or
f = open(pathname, 'rt') # if f.read() should return unicode text
"""
Explanation: Warning: using either of these is unsafe with untrusted input.
file()
End of explanation
"""
# Python 2 only:
execfile('myfile.py')
# Python 2 and 3: alternative 1
from past.builtins import execfile
execfile('myfile.py')
# Python 2 and 3: alternative 2
exec(compile(open('myfile.py').read(), 'myfile.py', 'exec'))
# This can sometimes cause this:
# SyntaxError: function ... uses import * and bare exec ...
# See https://github.com/PythonCharmers/python-future/issues/37
"""
Explanation: execfile()
End of explanation
"""
# Python 2 only:
assert unichr(8364) == '€'
# Python 3 only:
assert chr(8364) == '€'
# Python 2 and 3:
from builtins import chr
assert chr(8364) == '€'
"""
Explanation: unichr()
End of explanation
"""
# Python 2 only:
intern('mystring')
# Python 3 only:
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 1
from past.builtins import intern
intern('mystring')
# Python 2 and 3: alternative 2
from six.moves import intern
intern('mystring')
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 2
try:
from sys import intern
except ImportError:
pass
intern('mystring')
"""
Explanation: intern()
End of explanation
"""
args = ('a', 'b')
kwargs = {'kwarg1': True}
# Python 2 only:
apply(f, args, kwargs)
# Python 2 and 3: alternative 1
f(*args, **kwargs)
# Python 2 and 3: alternative 2
from past.builtins import apply
apply(f, args, kwargs)
"""
Explanation: apply()
End of explanation
"""
# Python 2 only:
assert chr(64) == b'@'
assert chr(200) == b'\xc8'
# Python 3 only: option 1
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 2 and 3: option 1
from builtins import chr
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 3 only: option 2
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
# Python 2 and 3: option 2
from builtins import bytes
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
"""
Explanation: chr()
End of explanation
"""
# Python 2 only:
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 1
from past.builtins import cmp
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 2
cmp = lambda x, y: (x > y) - (x < y)
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
"""
Explanation: cmp()
End of explanation
"""
# Python 2 only:
reload(mymodule)
# Python 2 and 3
from imp import reload
reload(mymodule)
"""
Explanation: reload()
End of explanation
"""
# Python 2 only
import anydbm
import whichdb
import dbm
import dumbdbm
import gdbm
# Python 2 and 3: alternative 1
from future import standard_library
standard_library.install_aliases()
import dbm
import dbm.ndbm
import dbm.dumb
import dbm.gnu
# Python 2 and 3: alternative 2
from future.moves import dbm
from future.moves.dbm import dumb
from future.moves.dbm import ndbm
from future.moves.dbm import gnu
# Python 2 and 3: alternative 3
from six.moves import dbm_gnu
# (others not supported)
"""
Explanation: Standard library
dbm modules
End of explanation
"""
# Python 2 only
from commands import getoutput, getstatusoutput
# Python 2 and 3
from future import standard_library
standard_library.install_aliases()
from subprocess import getoutput, getstatusoutput
"""
Explanation: commands / subprocess modules
End of explanation
"""
# Python 2.7 and above
from subprocess import check_output
# Python 2.6 and above: alternative 1
from future.moves.subprocess import check_output
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from subprocess import check_output
"""
Explanation: subprocess.check_output()
End of explanation
"""
# Python 2.7 and above
from collections import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 1
from future.backports import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from collections import Counter, OrderedDict, ChainMap
"""
Explanation: collections: Counter, OrderedDict, ChainMap
End of explanation
"""
# Python 2 only
from StringIO import StringIO
from cStringIO import StringIO
# Python 2 and 3
from io import BytesIO
# and refactor StringIO() calls to BytesIO() if passing byte-strings
"""
Explanation: StringIO module
End of explanation
"""
# Python 2 only:
import httplib
import Cookie
import cookielib
import BaseHTTPServer
import SimpleHTTPServer
import CGIHTTPServer
# Python 2 and 3 (after ``pip install future``):
import http.client
import http.cookies
import http.cookiejar
import http.server
"""
Explanation: http module
End of explanation
"""
# Python 2 only:
import DocXMLRPCServer
import SimpleXMLRPCServer
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.server
# Python 2 only:
import xmlrpclib
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.client
"""
Explanation: xmlrpc module
End of explanation
"""
# Python 2 and 3:
from cgi import escape
# Safer (Python 2 and 3, after ``pip install future``):
from html import escape
# Python 2 only:
from htmlentitydefs import codepoint2name, entitydefs, name2codepoint
# Python 2 and 3 (after ``pip install future``):
from html.entities import codepoint2name, entitydefs, name2codepoint
"""
Explanation: html escaping and entities
End of explanation
"""
# Python 2 only:
from HTMLParser import HTMLParser
# Python 2 and 3 (after ``pip install future``)
from html.parser import HTMLParser
# Python 2 and 3 (alternative 2):
from future.moves.html.parser import HTMLParser
"""
Explanation: html parsing
End of explanation
"""
# Python 2 only:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
# Python 3 only:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: easiest option
from future.standard_library import install_aliases
install_aliases()
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 2
from future.standard_library import hooks
with hooks():
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 3
from future.moves.urllib.parse import urlparse, urlencode
from future.moves.urllib.request import urlopen, Request
from future.moves.urllib.error import HTTPError
# or
from six.moves.urllib.parse import urlparse, urlencode
from six.moves.urllib.request import urlopen
from six.moves.urllib.error import HTTPError
# Python 2 and 3: alternative 4
try:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
except ImportError:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
"""
Explanation: urllib module
urllib is the hardest module to use from Python 2/3 compatible code. You may like to use Requests (http://python-requests.org) instead.
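For example, a minimal sketch of a fetch with Requests, which presents one API on both Python 2 and 3:
```python
# Python 2 and 3:
import requests   # pip install requests

response = requests.get('http://example.com')
print(response.status_code)   # e.g. 200
text = response.text          # unicode text on Py2 and Py3
data = response.content      # raw bytes
```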
End of explanation
"""
# Python 2 only:
import Tkinter
import Dialog
import FileDialog
import ScrolledText
import SimpleDialog
import Tix
import Tkconstants
import Tkdnd
import tkColorChooser
import tkCommonDialog
import tkFileDialog
import tkFont
import tkMessageBox
import tkSimpleDialog
# Python 2 and 3 (after ``pip install future``):
import tkinter
import tkinter.dialog
import tkinter.filedialog
import tkinter.scrolledtext
import tkinter.simpledialog
import tkinter.tix
import tkinter.constants
import tkinter.dnd
import tkinter.colorchooser
import tkinter.commondialog
import tkinter.filedialog
import tkinter.font
import tkinter.messagebox
import tkinter.simpledialog
import tkinter.ttk
"""
Explanation: Tkinter
End of explanation
"""
# Python 2 only:
import SocketServer
# Python 2 and 3 (after ``pip install future``):
import socketserver
"""
Explanation: socketserver
End of explanation
"""
# Python 2 only:
import copy_reg
# Python 2 and 3 (after ``pip install future``):
import copyreg
"""
Explanation: copy_reg, copyreg
End of explanation
"""
# Python 2 only:
from ConfigParser import ConfigParser
# Python 2 and 3 (after ``pip install future``):
from configparser import ConfigParser
"""
Explanation: configparser
End of explanation
"""
# Python 2 only:
from Queue import Queue
# Python 2 and 3 (after ``pip install future``):
from queue import Queue
"""
Explanation: queue
End of explanation
"""
# Python 2 only:
from repr import aRepr, repr
# Python 2 and 3 (after ``pip install future``):
from reprlib import aRepr, repr
"""
Explanation: repr, reprlib
End of explanation
"""
# Python 2 only:
from UserDict import UserDict
from UserList import UserList
from UserString import UserString
# Python 3 only:
from collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 1
from future.moves.collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 2
from six.moves import UserDict, UserList, UserString
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from collections import UserDict, UserList, UserString
"""
Explanation: UserDict, UserList, UserString
End of explanation
"""
# Python 2 only:
from itertools import ifilterfalse, izip_longest
# Python 3 only:
from itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 1
from future.moves.itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 2
from six.moves import filterfalse, zip_longest
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from itertools import filterfalse, zip_longest
"""
Explanation: itertools: filterfalse, zip_longest
End of explanation
"""
|
njtwomey/ADS | 01_data_ingress/02_dicts.ipynb | mit | from __future__ import print_function
import json
def print_dict(dd):
print(json.dumps(dd, indent=2))
"""
Explanation: Define simple printing functions
End of explanation
"""
d1 = dict()
d2 = {}
print_dict(d1)
print_dict(d2)
"""
Explanation: Constructing and allocating dictionaries
The syntax for dictionaries is that {} indicates an empty dictionary
End of explanation
"""
d3 = {
'one': 1,
'two': 2
}
print_dict(d3)
d4 = dict(one=1, two=2)
print_dict(d4)
"""
Explanation: There are multiple ways to construct a dictionary when the key/value pairs are known beforehand. The following two snippets are equivalent.
End of explanation
"""
keys = ['one', 'two', 'three']
values = [1, 2, 3]
d5 = {key: value for key, value in zip(keys, values)}
print_dict(d5)
"""
Explanation: Often ordered lists of keys and values are available, and it is desirable to create a dictionary from these lists. There are a number of ways to do this, including:
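One option worth noting: the dict constructor accepts any iterable of key/value pairs, so zip can be used directly. A small sketch reusing the keys and values lists from this notebook:
```python
d6 = dict(zip(keys, values))   # equivalent to the comprehension version
print_dict(d6)
```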
End of explanation
"""
d1['key_1'] = 1
d1['key_2'] = False
print_dict(d1)
"""
Explanation: Adding new data to the dict
End of explanation
"""
d1['list_key'] = [1, 2, 3]
print_dict(d1)
d1['dict_key'] = {'one': 1, 'two': 2}
print_dict(d1)
del d1['key_1']
print_dict(d1)
"""
Explanation: Dictionaries are a dynamic data type, and any object can be used as a value type, including integers, floats, lists, and other dicts, for example:
End of explanation
"""
print(d1.keys())
for item in d1:
print(item)
d1['dict_key']['one']
"""
Explanation: Accessing the data
It is always possible to get access to the key/value pairs that are contained in the dictionary, and the following
functions help with this:
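A related convenience is dict.get(), which returns a default value (None unless you pass one) instead of raising a KeyError when a key is missing. A quick sketch using the d1 dictionary built earlier:
```python
print(d1.get('list_key'))          # [1, 2, 3] -- the key exists
print(d1.get('missing'))           # None -- no KeyError
print(d1.get('missing', 'n/a'))    # 'n/a' -- explicit default
```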
End of explanation
"""
for key, value in d1.items():
print(key, value)
for key, value in d1.iteritems(): # Only in Python2 (.items() returns an iterator in Python3)
print(key, value)
print(d1.keys())
print(d1.values())
"""
Explanation: Iterating over key/values
The following two cells are nearly equivalent.
In order to understand how they differ, it will be helpful to confer with Python documentation on iterators and generators
http://anandology.com/python-practice-book/iterators.html
End of explanation
"""
def dict_only(key_value):
return type(key_value[1]) is dict
print('All dictionary elements:')
print(list(filter(dict_only, d1.items())))
print('Same as above, but with inline function (lambda):')
print(list(filter(lambda key_value: type(key_value[1]) is dict, d1.items())))
"""
Explanation: Filtering and mapping dictionaries
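A dict comprehension gives an equivalent, arguably clearer, way to keep only the entries whose values are dictionaries. A small sketch using d1 from earlier:
```python
only_dicts = {key: value for key, value in d1.items() if isinstance(value, dict)}
print_dict(only_dicts)
```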
End of explanation
"""
|
tensorflow/docs | site/en/r1/tutorials/keras/save_and_restore_models.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
!pip install h5py pyyaml
"""
Explanation: Save and restore models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/save_and_restore_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/keras/save_and_restore_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
Model progress can be saved during—and after—training. This means a model can resume where it left off and avoid long training times. Saving also means you can share your model and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:
code to create the model, and
the trained weights, or parameters, for the model
Sharing this data helps others understand how the model works and try it themselves with new data.
Caution: Be careful with untrusted code—TensorFlow models are code. See Using TensorFlow Securely for details.
Options
There are different ways to save TensorFlow models—depending on the API you're using. This guide uses tf.keras, a high-level API to build and train models in TensorFlow. For other approaches, see the TensorFlow Save and Restore guide or Saving in eager.
Setup
Installs and imports
Install and import TensorFlow and dependencies:
End of explanation
"""
import os
import tensorflow.compat.v1 as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
"""
Explanation: Get an example dataset
We'll use the MNIST dataset to train our model to demonstrate saving weights. To speed up these demonstration runs, only use the first 1000 examples:
End of explanation
"""
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.keras.activations.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.keras.activations.softmax)
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create a basic model instance
model = create_model()
model.summary()
"""
Explanation: Define a model
Let's build a simple model we'll use to demonstrate saving and loading weights.
End of explanation
"""
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
callbacks = [cp_callback]) # pass callback to training
# This may generate warnings related to saving the state of the optimizer.
# These warnings (and similar warnings throughout this notebook)
# are in place to discourage outdated usage, and can be ignored.
"""
Explanation: Save checkpoints during training
The primary use case is to automatically save checkpoints during and at the end of training. This way you can use a trained model without having to retrain it, or pick up training where you left off—in case the training process was interrupted.
tf.keras.callbacks.ModelCheckpoint is a callback that performs this task. The callback takes a couple of arguments to configure checkpointing.
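For instance, here is a sketch of some commonly used arguments (the file path is just an example; the argument names follow the Keras ModelCheckpoint API):
```python
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    'training_best/cp.ckpt',    # where to write the checkpoint
    monitor='val_loss',         # quantity to track
    save_best_only=True,        # only overwrite when val_loss improves
    save_weights_only=True,     # save weights only, not the full model
    verbose=1)
```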
Checkpoint callback usage
Train the model and pass it the ModelCheckpoint callback:
End of explanation
"""
!ls {checkpoint_dir}
"""
Explanation: This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
End of explanation
"""
model = create_model()
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
"""
Explanation: Create a new, untrained model. When restoring a model from only weights, you must have a model with the same architecture as the original model. Since it's the same model architecture, we can share weights even though it is a different instance of the model.
Now rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy):
End of explanation
"""
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
"""
Explanation: Then load the weights from the checkpoint, and re-evaluate:
End of explanation
"""
# include the epoch in the file name. (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
# Save weights, every 5-epochs.
period=5)
model = create_model()
model.save_weights(checkpoint_path.format(epoch=0))
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
"""
Explanation: Checkpoint callback options
The callback provides several options to give the resulting checkpoints unique names, and adjust the checkpointing frequency.
Train a new model, and save uniquely named checkpoints once every 5-epochs:
End of explanation
"""
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
"""
Explanation: Now, look at the resulting checkpoints and choose the latest one:
End of explanation
"""
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
"""
Explanation: Note: the default tensorflow format only saves the 5 most recent checkpoints.
To test, reset the model and load the latest checkpoint:
End of explanation
"""
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
"""
Explanation: What are these files?
The above code stores the weights to a collection of checkpoint-formatted files that contain only the trained weights in a binary format. Checkpoints contain:
* One or more shards that contain your model's weights.
* An index file that indicates which weights are stored in which shard.
If you are only training a model on a single machine, you'll have one shard with the suffix: .data-00000-of-00001
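If you want to peek inside a checkpoint, tf.train.list_variables reports the variable names and shapes it stores. A quick sketch, reusing checkpoint_dir from above:
```python
latest = tf.train.latest_checkpoint(checkpoint_dir)
for name, shape in tf.train.list_variables(latest):
    print(name, shape)
```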
Manually save weights
Above you saw how to load the weights into a model.
Manually saving the weights is just as simple, use the Model.save_weights method.
End of explanation
"""
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
"""
Explanation: Save the entire model
The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration (depending on the setup). This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
Saving a fully-functional model is very useful—you can load them in TensorFlow.js (HDF5, Saved Model) and then train and run them in web browsers, or convert them to run on mobile devices using TensorFlow Lite (HDF5, Saved Model)
As an HDF5 file
Keras provides a basic save format using the HDF5 standard. For our purposes, the saved model can be treated as a single binary blob.
End of explanation
"""
# Recreate the exact same model, including weights and optimizer.
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
"""
Explanation: Now recreate the model from that file:
End of explanation
"""
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
"""
Explanation: Check its accuracy:
End of explanation
"""
model = create_model()
model.fit(train_images, train_labels, epochs=5)
"""
Explanation: This technique saves everything:
The weight values
The model's configuration(architecture)
The optimizer configuration
Keras saves models by inspecting the architecture. Currently, it is not able to save TensorFlow optimizers (from tf.train). When using those you will need to re-compile the model after loading, and you will lose the state of the optimizer.
As a saved_model
Caution: This method of saving a tf.keras model is experimental and may change in future versions.
Build a fresh model:
End of explanation
"""
import time
saved_model_path = "./saved_models/"+str(int(time.time()))
model.save(saved_model_path, save_format='tf')
"""
Explanation: Create a saved_model:
End of explanation
"""
!ls {saved_model_path}
"""
Explanation: Have a look in the directory:
End of explanation
"""
new_model = tf.keras.models.load_model(saved_model_path)
new_model
"""
Explanation: Load the saved model.
End of explanation
"""
# The model has to be compiled before evaluating.
# This step is not required if the saved model is only being deployed.
new_model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
# Evaluate the restored model.
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
"""
Explanation: Run the restored model.
End of explanation
"""
|
quantopian/research_public | notebooks/data/quandl.cboe_vxfxi/notebook.ipynb | apache-2.0 | # For use in Quantopian Research, exploring interactively
from quantopian.interactive.data.quandl import cboe_vxfxi as dataset
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
"""
Explanation: CBOE VXFXI Index
In this notebook, we'll take a look at the CBOE VXFXI Index dataset, available on the Quantopian Store. This dataset spans 16 Mar 2011 through the current day. This data has a daily frequency. CBOE VXFXI is the China ETF Volatility Index, which reflects the implied volatility of the FXI ETF.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research is just not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
"""
# Plotting this DataFrame
df = odo(dataset, pd.DataFrame)
df.head(5)
# So we can plot it, we'll set the index as the `asof_date`
df['asof_date'] = pd.to_datetime(df['asof_date'])
df = df.set_index(['asof_date'])
df.head(5)
import matplotlib.pyplot as plt
df['open_'].plot(label=str(dataset))
plt.ylabel(str(dataset))
plt.legend()
plt.title("Graphing %s since %s" % (str(dataset), min(df.index)))
"""
Explanation: Let's go over the columns:
- open: open price for vxfxi
- high: daily high for vxfxi
- low: daily low for vxfxi
- close: close price for vxfxi
- asof_date: the timeframe to which this data applies
- timestamp: this is our timestamp on when we registered the data.
We've done much of the data processing for you. Fields like timestamp are standardized across all our Store Datasets, so the datasets are easy to combine.
We can select columns and rows with ease. Below, we'll do a simple plot.
End of explanation
"""
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
from quantopian.pipeline.data.quandl import cboe_vxfxi
"""
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.quandl import cboe_vxfxi
Then in intialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(cboe_vxfxi.open_.latest, 'open')
Pipeline usage is very similar between the backtester and Research so let's go over how to import this data through pipeline and view its outputs.
End of explanation
"""
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
_print_fields(cboe_vxfxi)
print "---------------------------------------------------\n"
"""
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
"""
pipe = Pipeline()
pipe.add(cboe_vxfxi.open_.latest, 'open_vxfxi')
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & cboe_vxfxi.open_.latest.notnan())
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
"""
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
"""
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms via the pipeline API
from quantopian.pipeline.data.quandl import cboe_vxfxi
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add the datasets available
pipe.add(cboe_vxfxi.open_.latest, 'vxfxi_open')
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
"""
Explanation: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need.
Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation
"""
|
metpy/MetPy | v0.4/_downloads/Advanced_Sounding.ipynb | bsd-3-clause | from datetime import datetime
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
from metpy.io import get_upper_air_data
from metpy.io.upperair import UseSampleData
from metpy.plots import SkewT
from metpy.units import concatenate
with UseSampleData(): # Only needed to use our local sample data
# Download and parse the data
dataset = get_upper_air_data(datetime(1999, 5, 4, 0), 'OUN')
p = dataset.variables['pressure'][:]
T = dataset.variables['temperature'][:]
Td = dataset.variables['dewpoint'][:]
u = dataset.variables['u_wind'][:]
v = dataset.variables['v_wind'][:]
"""
Explanation: Advanced Sounding
Plot a sounding using MetPy with more advanced features.
Beyond just plotting data, this uses calculations from metpy.calc to find the lifted
condensation level (LCL) and the profile of a surface-based parcel. The area between the
ambient profile and the parcel profile is colored as well.
End of explanation
"""
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Calculate LCL height and plot as black dot
l = mpcalc.lcl(p[0], T[0], Td[0])
lcl_temp = mpcalc.dry_lapse(concatenate((p[0], l)), T[0])[-1].to('degC')
skew.plot(l, lcl_temp, 'ko', markerfacecolor='black')
# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)
# Example of coloring area between profiles
greater = T >= prof
skew.ax.fill_betweenx(p, T, prof, where=greater, facecolor='blue', alpha=0.4)
skew.ax.fill_betweenx(p, T, prof, where=~greater, facecolor='red', alpha=0.4)
# An example of a slanted line at constant T -- in this case the 0
# isotherm
l = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
"""
Explanation: Create a new figure. The dimensions here give a good aspect ratio
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/pair_dice.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
import thinkplot
from thinkbayes2 import Pmf, Suite
from fractions import Fraction
"""
Explanation: The pair of dice problem
Copyright 2018 Allen Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
class BayesTable(pd.DataFrame):
def __init__(self, hypo, prior=1, **options):
columns = ['hypo', 'prior', 'likelihood', 'unnorm', 'posterior']
super().__init__(columns=columns, **options)
self.hypo = hypo
self.prior = prior
def mult(self):
self.unnorm = self.prior * self.likelihood
def norm(self):
nc = np.sum(self.unnorm)
self.posterior = self.unnorm / nc
return nc
def update(self):
self.mult()
return self.norm()
def reset(self):
return BayesTable(self.hypo, self.posterior)
"""
Explanation: The BayesTable class
Here's the class that represents a Bayesian table.
End of explanation
"""
sides = [4, 6, 8, 12]
hypo = []
for die1 in sides:
for die2 in sides:
if die2 > die1:
hypo.append((die1, die2))
hypo
"""
Explanation: The pair of dice problem
Suppose I have a box that contains one each of 4-sided, 6-sided, 8-sided, and 12-sided dice. I choose two dice at random and roll them without letting you see the die or the outcome. I report that the sum of the dice is 3.
1) What is the posterior probability that I rolled each possible pair of the dice?
2) If I roll the same dice again, what is the probability that the sum of the dice is 11?
Solution
I'll start by making a list of possible pairs of dice.
End of explanation
"""
table = BayesTable(hypo)
"""
Explanation: Here's a BayesTable that represents the hypotheses.
End of explanation
"""
for i, row in table.iterrows():
n1, n2 = row.hypo
table.loc[i, 'likelihood'] = 2 / n1 / n2
table
"""
Explanation: Since we didn't specify prior probabilities, the default value is equal priors for all hypotheses. They don't have to be normalized, because we have to normalize the posteriors anyway.
Now we can specify the likelihoods: if the first die has n1 sides and the second die has n2 sides, the probability of getting a sum of 3 is
2 / n1 / n2
The factor of 2 is there because there are two ways the sum can be 3, either the first die is 1 and the second is 2, or the other way around.
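For instance, with the 4-sided and 6-sided dice there are 4 * 6 = 24 equally likely outcomes, and only (1, 2) and (2, 1) sum to 3, so the likelihood is 2/24 = 1/12. A quick brute-force check of that count (just a sketch, not part of the original analysis):
sum(1 for a in range(1, 5) for b in range(1, 7) if a + b == 3)  # evaluates to 2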
So the likelihoods are:
End of explanation
"""
table.update()
table
"""
Explanation: Now we can use update to compute the posterior probabilities:
End of explanation
"""
n1, n2 = 4, 6
d1 = Pmf(range(1, n1+1))
d2 = Pmf(range(1, n2+1))
total = d1 + d2
thinkplot.Hist(total)
"""
Explanation: Part two
The second part of the problem asks for the (posterior predictive) probability of getting a total of 11 if we roll the same dice again.
For this, it will be useful to write a more general function that computes the probability of getting a total, k, given n1 and n2.
Here's an example with the 4 and 6 sided dice:
End of explanation
"""
def prob_total(k, n1, n2):
d1 = Pmf(range(1, n1+1))
d2 = Pmf(range(1, n2+1))
total = d1 + d2
return total[k]
"""
Explanation: And here's the general function:
End of explanation
"""
for i, row in table.iterrows():
n1, n2 = row.hypo
p = prob_total(3, n1, n2)
print(n1, n2, p, p == row.likelihood)
"""
Explanation: To check the results, I'll compare them to the likelihoods in the previous table:
End of explanation
"""
total = 0
for i, row in table.iterrows():
n1, n2 = row.hypo
p = prob_total(11, n1, n2)
total += row.posterior * p
total
"""
Explanation: Now we can answer the second part of the question using the law of total probability. The chance of getting 11 on the second roll is the
$\sum_{n1, n2} P(n1, n2 ~|~ D) \cdot P(11 ~|~ n1, n2)$
The first term is the posterior probability, which we can read from the table; the second term is prob_total(11, n1, n2).
Here's how we compute the total probability:
End of explanation
"""
table2 = table.reset()
for i, row in table2.iterrows():
n1, n2 = row.hypo
table2.loc[i, 'likelihood'] = prob_total(11, n1, n2)
table2
table2.update()
table2
"""
Explanation: This calculation is similar to the first step of the update, so we can also compute it by
1) Creating a new table with the posteriors from table.
2) Adding the likelihood of getting a total of 11 on the next roll.
3) Computing the normalizing constant.
End of explanation
"""
dice = {}
for n in sides:
dice[n] = Pmf(range(1, n+1))
"""
Explanation: Using a Suite
We can solve this problem more concisely, and more efficiently, using a Suite.
First, I'll create a Pmf object for each die.
End of explanation
"""
pairs = {}
for n1 in sides:
for n2 in sides:
if n2 > n1:
pairs[n1, n2] = dice[n1] + dice[n2]
"""
Explanation: And a Pmf object for the sum of each pair of dice.
End of explanation
"""
class Dice(Suite):
def Likelihood(self, data, hypo):
"""Likelihood of the data given the hypothesis.
data: total of two dice
hypo: pair of sides
return: probability
"""
return pairs[hypo][data]
"""
Explanation: Here's a Dice class that implements Likelihood by looking up the data, k, in the Pmf that corresponds to hypo:
End of explanation
"""
suite = Dice(pairs.keys())
suite.Print()
"""
Explanation: Here's the prior:
End of explanation
"""
suite.Update(3)
suite.Print()
"""
Explanation: And the posterior:
End of explanation
"""
suite.Update(11)
"""
Explanation: And the posterior probability of getting 11 on the next roll.
End of explanation
"""
|
diegocavalca/Studies | deep-learnining-specialization/2. improving deep neural networks/resources/Gradient Checking.ipynb | cc0-1.0 | # Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
"""
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
"""
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
"""
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
End of explanation
"""
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
"""
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
End of explanation
"""
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
    # Compute gradapprox using the right-hand side of formula (1). epsilon is small enough, so you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
"""
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
End of explanation
"""
def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
"""
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
"""
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
"""
Explanation: Now, run backward propagation.
End of explanation
"""
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because the function you have to call outputs two values, but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 1e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
"""
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$)).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
End of explanation
"""
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
"""
Explanation: Expected output:
<table>
<tr>
<td> ** There is a mistake in the backward propagation!** </td>
<td> difference = 0.285093156781 </td>
</tr>
</table>
It seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
Note
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout.
Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
<font color='blue'>
What you should remember from this notebook:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
Correct the backward propagation and rerun the gradient check:
End of explanation
"""
|
ewulczyn/talk_page_abuse | src/data_generation/crowdflower_analysis/src/Crowdflower Analysis (Experiment v. 1).ipynb | apache-2.0 | %matplotlib inline
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', 1000)
# Download data from google drive (Respect Eng / Wiki Collab): wikipdia data/v2_annotated
blocked_dat = pd.read_csv('../data/annotated_1k_no_admin_blocked_user_post_sample.csv')
random_dat = pd.read_csv('../data/annotated_1k_no_admin_post_sample.csv')
# Removing irrelevant differing columns
del blocked_dat['na_gold']
del random_dat['unnamed_0']
blocked_dat['dat_type'] = 'blocked'
random_dat['dat_type'] = 'random'
# Do both arrays now have the same columns?
(random_dat.columns == blocked_dat.columns).all()
dat = pd.concat([blocked_dat, random_dat])
# Remove test questions
dat = dat[dat['_golden'] == False]
# Replace missing data with 'False'
dat = dat.replace(np.nan, False, regex=True)
# Reshape the data for later analysis
def create_column_of_counts_from_nums(df, col):
return df.apply(lambda x: int(col) == x)
aggressive_columns = ['1', '2', '3', '4', '5', '6', '7']
for col in aggressive_columns:
dat[col] = create_column_of_counts_from_nums(dat['how_aggressive_or_friendly_is_the_tone_of_this_comment'], col)
blocked_columns = ['0','1']
for col in blocked_columns:
dat['blocked_'+col] = create_column_of_counts_from_nums(dat['is_harassment_or_attack'], col)
blocked_columns = ['blocked_0','blocked_1']
# Group the data
agg_dict = dict.fromkeys(aggressive_columns, 'sum')
agg_dict.update(dict.fromkeys(blocked_columns, 'sum'))
agg_dict.update({'clean_diff': 'first', 'is_harassment_or_attack': 'mean',
'how_aggressive_or_friendly_is_the_tone_of_this_comment': 'mean', 'na': 'mean'})
grouped_dat = dat.groupby(['dat_type','rev_id'], as_index=False).agg(agg_dict)
# Get rid of data which the majority thinks is not in English or not readable
grouped_dat = grouped_dat[grouped_dat['na'] < 0.5]
"""
Explanation: Introduction
This notebook compares the annotated results of the "blocked" vs. "random" dataset of Wikipedia talk pages. The "blocked" dataset consists of the last few comments before a user is blocked for personal harassment. The "random" dataset randomly samples all of the Wikipedia talk page revisions. Both of these datasets are cleaned and filtered to remove common administrator messages. These datasets are annotated via Crowdflower to measure friendliness, aggressiveness, and whether the comment constitutes a personal attack. Below we plot a histogram of the results, pull out a few comments to examine, and compute inter-annotator agreement.
On Crowdflower, each revision is rated 7 times. The raters are given three questions:
Is this comment not English or not human readable?
Column 'na'
How aggressive or friendly is the tone of this comment?
Column 'how_aggressive_or_friendly_is_the_tone_of_this_comment'
Ranges from 1 (Friendly) to 7 (Aggressive)
Is this an example of harassment or a personal attack?
Column 'is_harassment_or_attack'
Loading packages and data
End of explanation
"""
def hist_comments(df, bins, dat_type, plot_by, title):
sliced_array = df[df['dat_type'] == dat_type][[plot_by]]
weights = np.ones_like(sliced_array)/len(sliced_array)
sliced_array.plot.hist(bins = bins, legend = False, title = title, weights=weights)
plt.ylabel('Proportion')
plt.xlabel('Average Score')
plt.figure()
bins = np.linspace(0,1,11)
hist_comments(grouped_dat, bins, 'blocked', 'is_harassment_or_attack', 'Average Harassment Rating for Blocked Data')
hist_comments(grouped_dat, bins, 'random', 'is_harassment_or_attack', 'Average Harassment Rating for Random Data')
"""
Explanation: Plot histogram of average ratings by comment
For each revision, we take the average of all the ratings by level of harassment. The histogram of these averages for both the blocked and random dataset are displayed below. We notice that the blocked dataset has a significantly higher proportion of attacking comments (approximately 20%).
End of explanation
"""
bins = np.linspace(1,7,61)
plt.figure()
hist_comments(grouped_dat, bins, 'blocked', 'how_aggressive_or_friendly_is_the_tone_of_this_comment',
'Average Aggressiveness Rating for Blocked Data')
hist_comments(grouped_dat, bins, 'random', 'how_aggressive_or_friendly_is_the_tone_of_this_comment',
'Average Aggressiveness Rating for Random Data')
"""
Explanation: For each revision, we take the average of all the ratings by level of friendliness/aggressiveness. The histogram of these averages for both the blocked and random dataset are displayed below. We notice that the blocked dataset has a more even distribution of aggressiveness scores.
End of explanation
"""
def sorted_comments(df, sort_by, is_ascending, quartile, num, dat_type = None):
if dat_type:
sub_df = df[df['dat_type'] == dat_type]
else:
sub_df = df
n = sub_df.shape[0]
start_index = int(quartile*n)
if dat_type:
return sub_df[['clean_diff', 'is_harassment_or_attack',
'how_aggressive_or_friendly_is_the_tone_of_this_comment']].sort_values(
by=sort_by, ascending = is_ascending)[start_index:start_index + num]
return df[['clean_diff', 'dat_type', 'is_harassment_or_attack',
'how_aggressive_or_friendly_is_the_tone_of_this_comment']].sort_values(
by=sort_by, ascending = is_ascending)[start_index:start_index + num]
"""
Explanation: Selected harassing and aggressive comments by quartile
We look at a sample of revisions whose average aggressive score falls into various quantiles. This allows us to subjectively evaluate the quality of the questions that we are asking on Crowdflower. This slicing is done on the aggregate of both the blocked and random dataset.
End of explanation
"""
sorted_comments(grouped_dat, 'is_harassment_or_attack', False, 0, 5)
"""
Explanation: Most harassing comments in aggregated dataset
End of explanation
"""
sorted_comments(grouped_dat, 'how_aggressive_or_friendly_is_the_tone_of_this_comment', False, 0, 5)
"""
Explanation: Most aggressive comments in aggregated dataset
End of explanation
"""
sorted_comments(grouped_dat, 'how_aggressive_or_friendly_is_the_tone_of_this_comment', False, 0.5, 5)
"""
Explanation: Median aggressive comments in aggregated dataset
End of explanation
"""
sorted_comments(grouped_dat, 'how_aggressive_or_friendly_is_the_tone_of_this_comment', True, 0, 5)
"""
Explanation: Least aggressive comments in aggregated dataset
End of explanation
"""
# Least aggressive comments that are considered harassment or a personal attack
sorted_comments(grouped_dat[grouped_dat['is_harassment_or_attack'] > 0.5], 'how_aggressive_or_friendly_is_the_tone_of_this_comment', True, 0, 5)
# Most aggressive comments that are NOT considered harassment or a personal attack
sorted_comments(grouped_dat[grouped_dat['is_harassment_or_attack'] < 0.5], 'how_aggressive_or_friendly_is_the_tone_of_this_comment', False, 0, 5)
"""
Explanation: Selected revisions by multiple questions
In this section, we examine a selection of revisions by their answer to Question 3 ('Is this an example of harassment or a personal attack?') and sorted by aggression score. Again, this allows us to subjectively evaluate the quality of questions and responses that we obtain from Crowdflower.
End of explanation
"""
def add_row_to_coincidence(o, row, columns):
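    # For each unit (row), add its rating pairs to the coincidence matrix;
    # m_u below is the number of ratings for this unit, so each pair is weighted by 1/(m_u - 1)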
m_u = row.sum(1)
for i in columns:
for j in columns:
if i == j:
o[i][j] = o[i][j] + row[i]*(row[i]-1)/(m_u-1)
else:
o[i][j] = o[i][j] + row[i]*row[j]/(m_u-1)
return o
def make_coincidence_matrix(df, columns):
df = df[columns]
n = df.shape[0]
num_cols = len(columns)
o = pd.DataFrame(np.zeros((num_cols,num_cols)), index = columns, columns=columns)
for i in xrange(n):
o = add_row_to_coincidence(o, df[i:i+1], columns)
return o
def binary_distance(i,j):
return i!=j
def interval_distance(i,j):
return (int(i)-int(j))**2
def e(n, i, j):
    # expected coincidence for categories i and j; note the denominator is (total - 1)
    if i == j:
        return n[i]*(n[i]-1)/(sum(n)-1)
    else:
        return n[i]*n[j]/(sum(n)-1)
def D_e(o, columns, distance):
n = o.sum(1)
output = 0
for i in columns:
for j in columns:
output = output + e(n,i,j)*distance(i,j)
return output
def D_o(o, columns, distance):
output = 0
for i in columns:
for j in columns:
output = output + o[i][j]*distance(i,j)
return output
def Krippendorf_alpha(df, columns, distance = binary_distance, o = None):
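    # Krippendorff's alpha = 1 - D_o/D_e, the ratio of observed to expected disagreement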
if o is None:
o = make_coincidence_matrix(df, columns)
d_o = D_o(o, columns, distance)
d_e = D_e(o, columns, distance)
return (1 - d_o/d_e)
df = grouped_dat[grouped_dat['dat_type'] == 'blocked']
Krippendorf_alpha(df, aggressive_columns, distance = interval_distance)
Krippendorf_alpha(df, blocked_columns)
df = grouped_dat[grouped_dat['dat_type'] == 'random']
Krippendorf_alpha(df, aggressive_columns, distance = interval_distance)
Krippendorf_alpha(df, blocked_columns)
"""
Explanation: Inter-Annotator Agreement
Below, we compute Krippendorff's alpha, which is a measure of the inter-annotator agreement of our Crowdflower responses. We achieve an alpha value of 0.489 on our dataset, which is relatively low. We have since decided to reframe our questions and have achieved a higher alpha score (see Experiment v. 2).
End of explanation
"""
|
vikasgorur/cs229 | Linear Regression.ipynb | mit | %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.preprocessing import scale
"""
Explanation: CS229: Lecture 2
Linear Regression, Gradient Descent
In this notebook we implement some of the concepts discussed in Lecture 2 of CS229 - Machine Learning.
First we import the relevant Python modules. This notebook has been written assuming Python 3.5
End of explanation
"""
houses = pd.read_csv('house_prices.csv')
"""
Explanation: A little searching leads us to the Portland housing prices dataset that's used as an example in the lecture. We load the dataset from the CSV file.
End of explanation
"""
plt.figure(1)
plt.subplot(211)
plt.xlabel('sq. feet')
plt.ylabel('price (\'000)')
plt.scatter(houses['sqft'], houses['price'])
plt.subplot(212)
plt.xlabel('no. of rooms')
plt.ylabel('price (\'000)')
plt.scatter(houses['rooms'], houses['price'])
plt.tight_layout()
"""
Explanation: Let's plot the output variable (the price of a house) against each of the input variables (area in sq. feet, number of bedrooms) to get a little intuition about the data.
End of explanation
"""
X = houses[['sqft', 'rooms']].as_matrix()
X = np.column_stack([np.ones([X.shape[0]]), X])
y = houses[['price']].as_matrix().ravel()
"""
Explanation: Let's transform our data into the right matrix format. Note that we add a column of ones to the $X$ matrix, to be multiplied with $\theta_0$.
End of explanation
"""
# Hypothesis function
def h(theta, X):
return np.matmul(X, theta)
# Cost function
def J(theta, X, y):
d = h(theta, X) - y
return 0.5 * np.dot(d, d.T)
# One step of gradient descent
def descend(theta, X, y, alpha=0.01):
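    # Batch gradient step: theta <- theta - alpha * X^T (X*theta - y); also returns the squared error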
error = h(theta, X) - y
t = theta - alpha * np.matmul(X.T, error)
return t, np.dot(error, error.T)
"""
Explanation: Next we implement the hypothesis and cost functions and the parameter update using gradient descent.
End of explanation
"""
theta = np.zeros([X.shape[1]])
for i in range(50):
theta, cost = descend(theta, X, y)
if i % 10 == 0:
print("epoch: {0}, cost: {1}".format(i, cost))
print("epoch: {0}, cost: {1}".format(i, cost))
print("theta: {0}".format(theta))
"""
Explanation: We are now ready to fit the model using gradient descent. Let's initialize our parameters to 0 and run 50 iterations of gradient descent to see how it behaves.
End of explanation
"""
X_scaled = scale(X)
y_scaled = scale(y)
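# Quick sanity check (optional): each scaled column should now have mean close to 0
print(X_scaled.mean(axis=0), y_scaled.mean())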
"""
Explanation: That doesn't look good. We expected the cost to steadily decrease as gradient descent progressed. Instead, the cost function diverged so much it exceeded our ability to represent it as a floating-point number. What happened?
The answer is that our training data is not on the right scale. The area in square feet is in the thousands, the number of rooms is 1-10 and the price is in the hundreds of thousands. With such widely varying numbers the gradient descent algorithm can overshoot the minimum and instead diverge to infinity.
The solution is to scale all of the variables to have mean of 0 and variance of 1. We can use the scale function from sklearn to do this.
End of explanation
"""
plt.figure(1)
plt.subplot(211)
plt.xlabel('sq. feet')
plt.ylabel('price')
plt.scatter(X_scaled[:, 1], y_scaled)
plt.subplot(212)
plt.xlabel('no. of rooms')
plt.ylabel('price')
plt.scatter(X_scaled[:, 2], y_scaled)
plt.tight_layout()
"""
Explanation: We can plot the data again to visualize the effect of the scaling operation.
End of explanation
"""
def fit(X, y):
theta = np.zeros([X.shape[1]])
theta, cost = descend(theta, X, y)
for i in range(10000):
cost_ = cost
theta, cost = descend(theta, X, y)
if cost_ - cost < 1e-7:
break
if i % 10 == 0:
print("epoch: {0}, cost: {1}".format(i, cost))
print("epoch: {0}, cost: {1}".format(i, cost))
print("theta: {0}".format(theta))
"""
Explanation: Let us write a function to fit the model such that it automatically stops once the improvement in the value of the cost function is below a certain threshold.
End of explanation
"""
fit(X_scaled, y_scaled)
"""
Explanation: Let's try to fit the model again with our scaled input and output matrices.
End of explanation
"""
from sklearn.linear_model import LinearRegression
l = LinearRegression(fit_intercept=False)
l.fit(X_scaled, y_scaled)
l.coef_
"""
Explanation: Success!
The gradient descent converges in just 38 steps.
We can verify that our solution is correct by fitting the same data using the library functions for linear regression in sklearn.
End of explanation
"""
|
greenelab/GCB535 | 26_Prelab_Python-IV/Lesson4.ipynb | bsd-3-clause | print range(5)
"""
Explanation: Lesson 4: Data Structures and File Parsing
Table of Contents
Data structures I: Lists
Data structures II: Dictionaries
String parsing with .split()
Test your understanding: practice set 4
1. Data Structures I: Lists
What is a data structure?
A data structure is basically a way of storing large amounts of data (numbers, strings, etc) in an organized manner, making storage and retrieval easier. There are several different data structures available in Python, but we'll just go over the two most common ones: lists and dictionaries.
What is a list?
A list is one type of built-in data structure in Python that is specialized for storing data in an ordered, sequential manner. We've already seen an example of lists when we used the range() function:
End of explanation
"""
myList = [3, "cat", 56.9, 4, 10, True] # recreating the list above
print myList[0]
print myList[1]
"""
Explanation: Now we'll go over more formally what a list is and how it can be used.
Side note for people who have used other programming languages:
Lists are similar to what other programming languages call arrays. There are actually some subtle (but important) differences between lists and arrays (Python lists are closer to what people usually call a "linked list"), but for most purposes they perform the same role. The most obvious difference you might notice is that you don't need to specify ahead of time how large your list will be. This is because the size of the list grows dynamically as you add things to it (it also shrinks automatically as you take things out).
How lists work
A Python list looks something like this when we print it out:
[3, "cat", 56.9, 4, 10, True]
However, it may be more helpful to think of a list as looking something like this:
<img align="left" src="list_diagram.PNG" />
Here, each thing in the list (element) is stored in its own cell, and each cell is given a sequential integer index, starting at 0.
We use only one variable name to refer to the whole list. To access a specific element in the list, we use the index of the element with the following syntax: listName[index]. For example:
End of explanation
"""
line = "ATGCGTA***********"
line.rstrip("*")
print line
line = line.rstrip("*")
print line
"""
Explanation: <img align="left" src="list_diagram2.PNG" />
This ability to access a potentially huge amount of data using just one variable name is part of what makes data structures so useful. Imagine if you had 20,000 gene IDs you wanted to use in your code -- it wouldn't be feasible to create a separate variable name for each one. Instead, you can just dump all the gene IDs into a single list, and access them by index. We'll see how to actually do this sort of thing later in this lesson!
Important side note: "in-place" functions
Before we talk more about lists, we need to briefly introduce the idea of "in-place" functions.
The functions we've seen so far do not modify variables directly -- they simply "return" a value. For example, line.rstrip('\n') does nothing to the original string line, it just returns a modified version. To actually change line, you need to say line = line.rstrip('\n'), which overwrites line with the new value. Here's a similar example in code:
End of explanation
"""
shoppingList = ["pizza", "ice cream", "cat food"]
print shoppingList
"""
Explanation: Below, we're going to see a few examples of functions that do directly modify the variable that they act on. These functions are called "in-place" functions, and I'll make a note of it wherever we encounter them.
Using lists
Creating a new list
A list can expand or shrink as we add or remove things from it. Before we can do this, we need to create the list itself. Most often we'll just start off with an empty list, but sometimes it can be useful to pre-fill the list with certain values. Here are the three main ways to create a list:
myList = [] # create a new empty list
myList = [element1, element2, etc] # create a new list with some things already in it
myList = range(num) # create a new list automatically filled with a range of numbers
Example:
End of explanation
"""
contestRanking = ["Sally", "Billy", "Tommy", "Wilfred"]
print "First place goes to", contestRanking[0], "!"
print "Congrats also to our second and third place winners,", contestRanking[1], "and", contestRanking[2]
print "And in last place...", contestRanking[-1]
contestRanking = ["Sally", "Billy", "Tommy", "Wilfred"]
print "6th place goes to", contestRanking[5]
"""
Explanation: <br>
Accessing elements in a list
As we saw above, an element of a list can be accessed using its index. Do not try to access an index that's not yet in the list -- this will give an error.
someData = myList[index]
You can also index backwards using negative indices:
lastElement = myList[-1]
Examples:
End of explanation
"""
shoppingList = ["pizza", "ice cream", "cat food"]
shoppingList.append("english muffins")
print shoppingList
shoppingList.insert(2, "lembas")
print shoppingList
"""
Explanation: <br>
Adding to a list
After creating a list, you can add additional elements to the end using .append(). This is an in-place function, meaning that it directly modifies the list.
myList.append(element)
You can also insert an element at a specified index using .insert(). Elements that come after that index will shift up one index. (in-place function)
myList.insert(index, value)
Example:
End of explanation
"""
toDoList = ["Water plants", "feed cat", "do dishes", "make python lesson"]
doNext = toDoList.pop()
print doNext
print toDoList
pizzaToppings = ["peppers", "sausage", "bananas", "pepperoni"]
pizzaToppings.remove("bananas")
print pizzaToppings
"""
Explanation: <br>
Removing from a list
After creating a list, you can remove elements from it using .pop(). Elements that come after the removed element will be moved up one index so that there are no empty spaces in the list. .pop() also returns the element that was "popped". (in-place function)
myList.pop(index) # removes (and returns) the element at the specified index
myList.pop() # removes (and returns) the last element
You can also remove the first occurrence of a specified element using .remove(). Elements that come after will shift down one index. (in-place function)
myList.remove(element)
Examples:
End of explanation
"""
shoppingList = ["pizza", "ice cream", "cat food"]
item = "20lb bag of reese's"
if item in shoppingList:
print "I'LL TAKE IT"
else:
print "Not today..."
"""
Explanation: <br>
Checking if something is in the list
To check if a particular element is in a list, you can just use a special logical operator "in" (note that this is used differently in a logical statement as compared to a for loop):
if element in myList:
... do something
Example:
End of explanation
"""
currentCats = ["Mittens", "Tatertot", "Meatball", "Star Destroyer"]
for cat in currentCats:
print cat
"""
Explanation: <br>
Iterating through a list
This should be pretty familiar by now. Since a list is an iterable, we can loop through it using a for loop:
for element in myList:
... do something
Example:
End of explanation
"""
myDict = {'age':3, 'animal':'cat', 'num':56.9, 203:4, 'count':10, 'flag':True} # creates the dictionary shown above
print myDict['animal']
print myDict[203]
"""
Explanation: <br>
Other list operations
listLen = len(myList) # Get the length of a list
myList.sort() # Sort (in-place function)
myList.reverse() # Reverse (in-place function)
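For example (a quick sketch with a made-up list called numList):
numList = [5, 2, 9]
print len(numList)     # prints 3
numList.sort()         # numList is now [2, 5, 9]
numList.reverse()      # numList is now [9, 5, 2]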
Related data structure: Tuples
There is another data structure in Python that is similar to lists, but not exactly the same, called tuples. We won't focus too much on these for this class, but essentially a tuple is, like a list, a sequence of items, but unlike a list, these items are immutable, meaning that once you have defined your tuple, you cannot change the items in it. These structures are useful in cases when you know exactly how many items you want to use, and if you aren't going to be changing these items.
Tuples are defined similarly to lists, except that they use rounded parentheses instead of square brackets, and can also be defined without any parentheses. For example:
myTuple = () # create an empty tuple
myTuple = "a", "b", "c", "d" # create a tuple of strings
myTuple = ("a", "b", "c", "d") # this makes the same exact tuple
myTuple = ("a", "b", 3, 4) # tuples can also include different data types
myTuple = ("a",) # to make a tuple with a single value, you need a trailing comma
For the purposes of this class, we won't go through all the different methods for tuples, except that they can be accessed similarly to lists, i.e.
myTuple[index]
For more info on tuples, see http://www.tutorialspoint.com/python/python_tuples.htm. Just note that you cannot change individual elements in a tuple!
2. Data structures II: Dictionaries
The next data structure we'll talk about is the dictionary. There are two key differences between dictionaries and lists:
In a dictionary, you retrieve elements using a key rather than an index
Dictionaries are unordered
Difference 1: Dictionaries are indexed by keys
With a dictionary (sometimes called a hash table in other programming languages), you access elements by a name ("key") that you pick:
<img align="left" src="dictionary_diagram.PNG" />
Keys can be strings or numbers. The only restriction is that each key must be unique.
To retrieve a value from the dictionary, we use the following notation: dictName[key]. For example:
End of explanation
"""
myDict = {'age':3, 'animal':'cat', 'num':56.9, 203:4, 'count':10, 'flag':True}
print myDict
"""
Explanation: Difference 2: Dictionaries are unordered
Lists are all about keeping elements in some order. Though you may change the ordering from time to time, it's still in some predictable order at all times.
You should think of dictionaries more like magic grab bags. You mark each piece of data with a key, then throw it in the bag. When you want that data back, you just tell the bag the key and it spits out the data assigned to that key. There's no intrinsic order to the things in the bag, they're all just kind of jumbled around in there. Because of this, dictionaries aren't great for situations where we need to keep data in a specific order -- but they're very convenient in other situations, as we'll see in a minute.
Technical side note:
Ok, so in reality, there is an order to your dictionary. But it is an order that Python picks that obeys complex rules and is essentially unpredictable by us. So as far as we're concerned, it may as well be unordered.
For example, here's what happens when we print a dictionary -- you can see that the elements are not maintained in the same order they were added:
End of explanation
"""
myDict = {"Joe": 25, "Sally": 35}
print myDict
"""
Explanation: Using dictionaries
<br>
Creating a dictionary
Create a new empty dictionary:
myDict = {}
Create a new dictionary with some elements:
myDict = {key1: value1, key2: value2}
Example:
End of explanation
"""
myDict = {"Joe": 25, "Sally": 35}
myDict["Bobby"] = 65
myDict["Joe"] = 104
print myDict
"""
Explanation: <br>
Adding to a dictionary
Add a new key-value pair to an existing dictionary:
myDict[newKey] = newVal
Note that if the specified key is already in the dictionary, the associated value will be overwritten by the new value we assign here!
End of explanation
"""
myDict = {"Joe": 25, "Sally": 35}
for person in myDict:
print "Name:", person, "- Age:", myDict[person]
"""
Explanation: <br>
Removing from a dictionary
Delete a key-value pair from an existing dictionary:
del myDict[existingKey]
Note that trying to delete a key that is not in the dictionary will give an error.
<br>
Check if something is already in the dictionary
This works the same way it did with lists -- just use the in operator:
if someKey in myDict:
... do something
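For example (a quick sketch):
ages = {"Joe": 25, "Sally": 35}
if "Joe" in ages:
    del ages["Joe"]    # ages is now {"Sally": 35}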
<br>
Iterating through a dictionary
A dictionary is an iterable, and the iterable unit is the key. So every time we loop, a new key from the dictionary is assigned to the for loop variable.
for key in myDict:
... do something
End of explanation
"""
phonebook = {}
phonebook["Joe Shmo"] = "958-273-7324"
phonebook["Sally Shmo"] = "958-273-9594"
phonebook["George Smith"] = "253-586-9933"
name = raw_input("Lookup number for: ")
print phonebook[name]
"""
Explanation: <br>
Special dictionary functions
Often, we'll want to quickly get a list of all the keys or values. Python provides the following functions to do this.
Get a list of the keys only:
keyList = myDict.keys()
Get a list of the values only:
valueList = myDict.values()
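For example (a quick sketch):
ages = {"Joe": 25, "Sally": 35}
print ages.keys()      # e.g. ['Sally', 'Joe'] -- remember, the order is not guaranteed
print ages.values()    # e.g. [35, 25]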
When are dictionaries useful?
One of the most natural applications of the dictionary is to create a "lookup table". There are many examples of lookup tables in our everyday lives -- things like a phonebook, the index at the back of a textbook, or (surprise surprise) a regular old dictionary. What these examples have in common is that they allow you to take one piece of information that you already know (a friend's name, a topic, a word) and use it to quickly look up some information that you don't know, but need (a phone number, a page number, a definition).
Here's a simple toy example of creating a phonebook using a dictionary:
End of explanation
"""
sentence = "Hello, how are you today?"
print sentence.split()
print sentence.split(",")
print sentence.split("o")
"""
Explanation: (Notice that we can store the name of a key in a variable, and then use that variable to access the desired element. In this case, the variable "name" holds the name that we input in the terminal, e.g. Sally Shmo.)
The real power of the dictionary comes when we start generating our lookup tables automatically from data files (instead of creating it manually like we did here). This allows us to very easily cross-reference data across multiple files. We'll look at a full example of this using real data at the end of this lesson!
3. String parsing with .split()
Before we can really dig in to analyzing some data files, there's one more tool we need: .split(). This is a simple and useful function that allows you to split any string into separate parts based on some delimiter. For example:
End of explanation
"""
line = "uc007afd.1 Mrpl15 368 internal-out-of-frame"
print line.split()
"""
Explanation: [ Definition ] .split()
Purpose: Splits a string into parts based on a specified delimiter. If no delimiter is given, splits on whitespace (spaces, tabs, and newlines). Returns a list.
Syntax:
result = string.split()
result = string.split(delimiter)
Notes:
The delimiter itself is not included in the output
Using .split() to parse text files
Most data files come in a tabular format, where the data is arranged in rows and columns in some consistent way. For example, you might have a file where each row is a gene, and each column is some type of information about the gene. If you open up this file in a plain text editor, you'll see something like this:
ucscID geneName numReads proteinProduct
uc007afd.1 Mrpl15 368 internal-out-of-frame
uc007afh.1 Lypla1 783 n-term-trunc
uc007afi.1 Tcea1 3852 canonical
uc007afn.1 Atp6v1h 1407 n-term-trunc
uc007agb.1 Pcmtd1 65 uorf
This might look a bit messy to read by eye, but in fact this is a perfect format for reading into our code. The important point is that on each line, the data belonging to each "column" is separated by a consistent delimiter. In this case, the delimiter is a single tab (other common delimiters are commas and spaces). Using the .split() function, we can split up each line from this file into its separate "column" components so that each piece of information can be used separately. This is often what we mean when we say we're "parsing" a data file -- we're breaking it up into meaningful parts.
Here's an example of splitting up one of the lines above:
End of explanation
"""
input = open("init_sites.txt", 'r')
input.readline() #skip header line
for line in input:
line = line.rstrip('\r\n')
data = line.split() #splits line on whitespace (includes tabs), returns a list
print data[5] #remember, list indexing starts at 0, so the 6th column = index 5 in the list!
input.close()
"""
Explanation: Example: parsing a file with multiple columns
In the same folder as this notebook, you should have a file called init_sites.txt. This file contains some real data on translation initiation sites from a ribosome profiling study in mouse (Ingolia et al., Cell 2011). Here's what the first few lines look like:
knownGene GeneName InitCodon[nt] DisttoCDS[codons] FramevsCDS InitContext[-3to+4] CDSLength[codons] HarrPeakStart HarrPeakWidth #HarrReads PeakScore Codon Product
uc007zzs.1 Cbr3 36 -23 -1 GCCACGG 22 35 3 379 4.75 nearcog uorf
uc009akk.1 Rac1 196 0 0 CAGATGC 192 195 3 3371 4.70 aug canonical
uc009eyb.1 Saps1 204 -91 1 GCCACGG 23 203 3 560 4.68 nearcog uorf
uc008wzq.1 Ppp1cb 96 0 0 AAGATGG 327 94 4 3218 4.56 aug canonical
uc007hnl.1 Pa2g4 38 -23 0 AGCCTGT 14 37 4 6236 4.54 nearcog uorf
uc007hnl.1 Pa2g4 40 -22 -1 CCTGTGG 17 37 4 6236 4.54 nearcog uorf
...
...
(Note: the header looks like it's on two lines here due to text wrapping, but it's actually just one line!)
Let's say the only info I want from this file is the initiation context of each translation initiation site. This is contained in column 6 of each row (under the header label "InitContext[-3to+4]"). How can I extract this information?
Think for a moment how you might do this, then take a look at the solution:
End of explanation
"""
# RUN THIS BLOCK FIRST TO SET UP VARIABLES! (and re-run it if the lists/dictionary are changed in subsequent code blocks)
fruits = {"apple":"red", "banana":"yellow", "grape":"purple"}
names = ["Wilfred", "Manfred", "Wadsworth", "Jeeves"]
ages = [65, 34, 96, 47]
str1 = "Good morning, Mr. Mitsworth."
print len(ages)
print len(ages) == len(names)
print names[-1]
for age in ages:
print age
for i in range(len(names)):
print names[i],"is",ages[i]
if "Willard" not in names:
names.append("Willard")
print names
ages.sort()
print ages
ages = ages.sort()
print ages
parts = str1.split()
print parts
print str1
parts = str1.split(",")
print parts
oldList = [2, 2, 6, 1, 2, 6]
newList = []
for item in oldList:
if item not in newList:
newList.append(item)
print newList
print fruits["banana"]
query = "apple"
print fruits[query]
print fruits[0]
print fruits.keys()
print fruits.values()
for key in fruits:
print fruits[key]
del fruits["banana"]
print fruits
print fruits["pear"]
fruits["apple"] = fruits["apple"] + " or green"
print fruits["apple"]
fruits["pear"] = "green"
print fruits["pear"]
"""
Explanation: If instead we were interested in some other column of the file, we just need to switch data[5] to whichever index holds our information of interest, e.g. data[0] to get the "knownGene" column or data[2] to get the "InitCodon[nt]". Remember, though, that the lines of a file are always read in as strings, so you will need to convert numbers using int() or float() as appropriate. So for the InitCodon[nt], we will probably want to say int(data[2]) before doing any computations on those numbers.
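For example, here is a short sketch (reusing the same init_sites.txt file) that converts the InitCodon[nt] column to integers so that we can compute its average; the averaging itself is just for illustration:
input = open("init_sites.txt", 'r')
input.readline() #skip header line
total = 0
count = 0
for line in input:
    data = line.rstrip('\r\n').split()
    total = total + int(data[2])   #convert the string to an integer before doing math
    count = count + 1
input.close()
print "Average InitCodon[nt]:", float(total) / count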
4. Test your understanding: practice set 4
For the following blocks of code, first try to guess what the output will be, and then run the code yourself. These examples may introduce some ideas and common pitfalls that were not explicitly covered in the text above, so be sure to complete this section.
End of explanation
"""
|
atulsingh0/MachineLearning | python_DC/LP_Summary_MissingData_#3.ipynb | gpl-3.0 | # idxmin and idxmax, return indirect statistics like the index value where the minimum or maximum values are attained
df.idxmin()
# for cumulative sum
df.cumsum()
# describing
df.describe()
"""
Explanation: Options for reduction method
axis Axis to reduce over. 0 for DataFrame’s rows and 1 for columns.
skipna Exclude missing values, True by default.
level Reduce grouped by level if the axis is hierarchically-indexed (MultiIndex).
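For example, a quick sketch of these options in use (assuming df is the DataFrame defined earlier in this notebook):
df.sum(axis=1)                  # reduce across columns, one value per row
df.mean(axis=0, skipna=False)   # propagate NaN instead of skipping missing values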
End of explanation
"""
df.quantile()
obj = pd.Series(['a', 'a', 'b', 'c'] * 4)
print(obj)
obj.describe()
"""
Explanation: Descriptive and summary statistics
count Number of non-NA values
describe Compute set of summary statistics for Series or each DataFrame column
min, max Compute minimum and maximum values
argmin, argmax Compute index locations (integers) at which minimum or maximum value obtained, respectively
idxmin, idxmax Compute index values at which minimum or maximum value obtained, respectively
quantile Compute sample quantile ranging from 0 to 1
sum Sum of values
mean Mean of values
median Arithmetic median (50% quantile) of values
mad Mean absolute deviation from mean value
var Sample variance of values
std Sample standard deviation of values
skew Sample skewness (3rd moment) of values
kurt Sample kurtosis (4th moment) of values
cumsum Cumulative sum of values
cummin, cummax Cumulative minimum or maximum of values, respectively
cumprod Cumulative product of values
diff Compute 1st arithmetic difference (useful for time series)
pct_change Compute percent changes
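A couple of quick examples from this list (again assuming the df defined earlier in this notebook):
df.mad()       # mean absolute deviation of each column
df.cumprod()   # cumulative product down each column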
End of explanation
"""
import pandas.io.data as web
all_data = {}
for ticker in ['AAPL', 'IBM', 'MSFT', 'GOOG']:
all_data[ticker] = web.get_data_yahoo(ticker, '1/1/2000', '1/1/2010')
price = pd.DataFrame({tic: data['Adj Close']
for tic, data in all_data.items()})
volume = pd.DataFrame({tic: data['Volume']
for tic, data in all_data.items()})
returns = price.pct_change()
returns.tail()
returns.MSFT.corr(returns.IBM)
returns.MSFT.cov(returns.IBM)
returns.corr()
returns.cov()
returns.corrwith(returns.IBM)
returns.corrwith(volume)
"""
Explanation: Correlation and Covariance
End of explanation
"""
obj = pd.Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c', 'c'])
print(obj)
obj.unique()
obj.value_counts()
pd.value_counts(obj.values, sort=False)
mask = obj.isin(['b', 'c'])
print(mask)
print(obj[mask])
"""
Explanation: Unique Values, Value Counts, and Membership
End of explanation
"""
data = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
'Qu2': [2, 3, 1, 2, 3],
'Qu3': [1, 5, 2, 4, 4]})
print(data)
result = data.apply(pd.value_counts)
result
result.fillna(0)
"""
Explanation: Unique, value counts, and binning method
isin Compute boolean array indicating whether each Series value is contained in the passed sequence of values.
unique Compute array of unique values in a Series, returned in the order observed.
value_counts Return a Series containing unique values as its index and frequencies as its values, ordered by count in descending order.
End of explanation
"""
string_data = pd.Series(['aardvark', 'artichoke', np.nan, 'avocado'])
print(string_data)
string_data.isnull()
string_data[0] = None # None is also treated as NaN
string_data.isnull()
"""
Explanation: Handling Missing Data
End of explanation
"""
from numpy import nan as NA
data = pd.Series([1, NA, 3.5, NA, 7])
print(data)
data.dropna() # dropping na records
# same can be achieved as binary filtering
data[data.notnull()]
data = pd.DataFrame([[1., 6.5, 3.], [1., NA, NA],
[NA, NA, NA], [NA, 6.5, 3.]])
print(data)
data.dropna()
data.dropna(how='all') # this will remove only those records which has NaN in all columns
# can drop the columns same way with axis = 1
data[4] = NA
print(data)
data.dropna(how='all', axis=1)
data.dropna(how='all', axis=1).dropna(how='all')
df = pd.DataFrame(np.random.randn(7, 3))
print(df)
df.ix[:4, 1] = NA;
df.ix[:2, 2] = NA;
print(df)
df.dropna(thresh=2)
"""
Explanation: NA handling methods
dropna Filter axis labels based on whether values for each label have missing data, with varying thresholds for how much missing data to tolerate.
fillna Fill in missing data with some value or using an interpolation method such as 'ffill' or 'bfill'.
isnull Return like-type object containing boolean values indicating which values are missing / NA.
notnull Negation of isnull.
Filtering Out Missing Data
End of explanation
"""
data.fillna(999) # replacing na with some value
df.fillna(0)
df.fillna({1: 0.5, 2: -1})
# fillna returns a new object, but you can modify the existing object in place
df
_ = df.fillna(0, inplace=True) # modify the existing object in place
df
df = pd.DataFrame(np.random.randn(6, 3))
df.ix[2:, 1] = NA
df.ix[4:, 2] = NA
df
df.fillna(method='ffill') # repeat the last value of columns into NaN field
df.fillna(method='ffill', limit=2) # limit the fill
df.fillna(df.mean())
"""
Explanation: Filling in Missing Data
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_stats_cluster_time_frequency_repeated_measures_anova.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
"""
Explanation: Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared with those
obtained using False Discovery Rate correction for multiple comparisons.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
"""
Explanation: Set parameters
End of explanation
"""
epochs.equalize_event_counts(event_id, copy=False)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = frequencies / frequencies[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
"""
Explanation: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
End of explanation
"""
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, frequencies, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
"""
Explanation: Create TFR representations for all conditions
End of explanation
"""
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions  # integer division: reshape below needs an int
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
"""
Explanation: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument which is a list of the number
factor levels for each factor.
End of explanation
"""
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
"""
Explanation: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
End of explanation
"""
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
"""
Explanation: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location):
.. table:: Sample data layout
===== ==== ==== ==== ====
trial A1B1 A1B2 A2B1 A2B2
===== ==== ==== ==== ====
1 1.34 2.53 0.97 1.74
... ... ... ... ...
56 2.45 7.90 3.09 4.76
===== ==== ==== ==== ====
Now we're ready to run our repeated measures ANOVA.
Note. As we treat trials as subjects, the test only accounts for
time locked responses despite the 'induced' approach.
For analyses of induced power at the group level, averaged TFRs
are required.
End of explanation
"""
effects = 'A:B'
"""
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. Also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
End of explanation
"""
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple of f-values and p-values; we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
"""
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and
the second dimension and finally calls the ANOVA function.
End of explanation
"""
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
"""
Explanation: Create new stats image with only significant clusters:
End of explanation
"""
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
"""
Explanation: Now using FDR:
End of explanation
"""
|
darkomen/TFG | ipython_notebooks/06_regulador_experto/.ipynb_checkpoints/ensayo2-checkpoint.ipynb | cc0-1.0 | #Import the libraries we will use
import numpy as np
import pandas as pd
import seaborn as sns
#Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Open the csv file with the sample data
datos = pd.read_csv('ensayo2.CSV')
%pylab inline
#Store in a list the columns of the file we are going to work with
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
#Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
"""
Explanation: Analysis of the data obtained
We use IPython to analyse and display the data collected during production. An expert-rule controller is implemented. The data analysed are from 12 August 2015.
Experiment data:
* Start time: 11:05
* End time: 11:35
* Filament extruded: 435 cm
* $T: 150ºC$
* $V_{min}$ of the puller: 1.5 mm/s
* $V_{max}$ of the puller: 3.4 mm/s
* The speed increments in the expert-system rules are different:
* In case 5 the speed increment changes from +1 to +2.
End of explanation
"""
graf=datos.ix[:, "Diametro X"].plot(figsize=(16,10),ylim=(0.5,3))
graf.axhspan(1.65,1.85, alpha=0.2)
graf.set_xlabel('Tiempo (s)')
graf.set_ylabel('Diámetro (mm)')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
box=datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
box.axhspan(1.65,1.85, alpha=0.2)
"""
Explanation: We plot both diameters and the puller speed on the same chart
End of explanation
"""
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
"""
Explanation: With this second approach the data have been stabilised. We will now try to lower that percentage. As a further refinement, we are going to modify the increments used when the diameter lies between $1.80 mm$ and $1.70 mm$, in both directions (cases 3 to 6).
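As a rough illustration (this calculation is not part of the original notebook), the share of samples outside the 1.65-1.85 mm band could be computed like this:
out_of_band = datos[(datos['Diametro X'] < 1.65) | (datos['Diametro X'] > 1.85)]
print("Out-of-tolerance percentage: {:.1f}%".format(100.0 * len(out_of_band) / len(datos)))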
Comparison of Diametro X against Diametro Y to see the filament ratio
End of explanation
"""
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: Data filtering
Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out and keep only the rows where both diameters are at least 0.9.
End of explanation
"""
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
"""
Explanation: X/Y plot
End of explanation
"""
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: We analyse the ratio data
End of explanation
"""
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_compute_raw_data_spectrum.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, read_proj, read_selection
from mne.datasets import sample
from mne.time_frequency import psd_multitaper
print(__doc__)
"""
Explanation: Compute the power spectral density of raw data
This script shows how to compute the power spectral density (PSD)
of measurements on a raw dataset. It also show the effect of applying SSP
to the data to reduce ECG and EOG artifacts.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog-proj.fif'
tmin, tmax = 0, 60 # use the first 60s of data
# Setup for reading the raw data (to save memory, crop before loading)
raw = io.read_raw_fif(raw_fname).crop(tmin, tmax).load_data()
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Add SSP projection vectors to reduce EOG and ECG artifacts
projs = read_proj(proj_fname)
raw.add_proj(projs, remove_existing=True)
fmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
"""
Explanation: Load data
We'll load a sample MEG dataset, along with SSP projections that will
allow us to reduce EOG and ECG artifacts. For more information about
reducing artifacts, see the preprocessing section in documentation.
End of explanation
"""
raw.plot_psd(area_mode='range', tmax=10.0, show=False, average=True)
"""
Explanation: Plot the raw PSD
First we'll visualize the raw PSD of our data. We'll do this on all of the
channels first. Note that there are several parameters to the
:meth:mne.io.Raw.plot_psd method, some of which will be explained below.
End of explanation
"""
# Pick MEG magnetometers in the Left-temporal region
selection = read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads', selection=selection)
# Let's just look at the first few channels for demonstration purposes
picks = picks[:4]
plt.figure()
ax = plt.axes()
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=False, ax=ax, color=(0, 0, 1), picks=picks,
show=False, average=True)
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(0, 1, 0), picks=picks,
show=False, average=True)
# And now do the same with SSP + notch filtering
# Pick all channels for notch since the SSP projection mixes channels together
raw.notch_filter(np.arange(60, 241, 60), n_jobs=1, fir_design='firwin')
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(1, 0, 0), picks=picks,
show=False, average=True)
ax.set_title('Four left-temporal magnetometers')
plt.legend(ax.lines[::3], ['Without SSP', 'With SSP', 'SSP + Notch'])
"""
Explanation: Plot a cleaned PSD
Next we'll focus the visualization on a subset of channels.
This can be useful for identifying particularly noisy channels or
investigating how the power spectrum changes across channels.
We'll visualize how this PSD changes after applying some standard
filtering techniques. We'll first apply the SSP projections, which is
accomplished with the proj=True kwarg. We'll then perform a notch filter
to remove particular frequency bands.
End of explanation
"""
f, ax = plt.subplots()
psds, freqs = psd_multitaper(raw, low_bias=True, tmin=tmin, tmax=tmax,
fmin=fmin, fmax=fmax, proj=True, picks=picks,
n_jobs=1)
psds = 10 * np.log10(psds)
psds_mean = psds.mean(0)
psds_std = psds.std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD', xlabel='Frequency',
ylabel='Power Spectral Density (dB)')
plt.show()
"""
Explanation: Alternative functions for PSDs
There are also several functions in MNE that create a PSD using a Raw
object. These are in the :mod:mne.time_frequency module and begin with
psd_*. For example, we'll use a multitaper method to compute the PSD
below.
End of explanation
"""
|
miguelfrde/stanford-cs231n | assignment2/Dropout.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
"""
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
"""
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
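For reference, here is a minimal sketch of one common inverted-dropout implementation. It assumes p is interpreted as the probability of keeping a unit; the assignment's exact convention for p and for the cache contents may differ, so treat this only as a starting point rather than the expected solution:
def dropout_forward(x, dropout_param):
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    mask = None
    if mode == 'train':
        # keep each unit with probability p and rescale, so no change is needed at test time
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        out = x
    cache = (dropout_param, mask)
    return out, cache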
Once you have done so, run the cell below to test your implementation.
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
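Under the same assumptions as the forward-pass sketch above, the backward pass only has to route the upstream gradient through the units that were kept:
def dropout_backward(dout, cache):
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        dx = dout * mask   # zero gradient for dropped units, rescaled for kept ones
    else:
        dx = dout
    return dx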
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
"""
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation
"""
|
ageron/ml-notebooks | tools_numpy.ipynb | apache-2.0 | from __future__ import division, print_function, unicode_literals
"""
Explanation: Tools - NumPy
NumPy is the fundamental library for scientific computing with Python. NumPy is centered around a powerful N-dimensional array object, and it also contains useful linear algebra, Fourier transform, and random number functions.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/tools_numpy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this notebook accompanies the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions.
Creating arrays
First let's make sure that this notebook works both in python 2 and 3:
End of explanation
"""
import numpy as np
"""
Explanation: Now let's import numpy. Most people import it as np:
End of explanation
"""
np.zeros(5)
"""
Explanation: np.zeros
The zeros function creates an array containing any number of zeros:
End of explanation
"""
np.zeros((3,4))
"""
Explanation: It's just as easy to create a 2D array (ie. a matrix) by providing a tuple with the desired number of rows and columns. For example, here's a 3x4 matrix:
End of explanation
"""
a = np.zeros((3,4))
a
a.shape
a.ndim # equal to len(a.shape)
a.size
"""
Explanation: Some vocabulary
In NumPy, each dimension is called an axis.
The number of axes is called the rank.
For example, the above 3x4 matrix is an array of rank 2 (it is 2-dimensional).
The first axis has length 3, the second has length 4.
An array's list of axis lengths is called the shape of the array.
For example, the above matrix's shape is (3, 4).
The rank is equal to the shape's length.
The size of an array is the total number of elements, which is the product of all axis lengths (eg. 3*4=12)
End of explanation
"""
np.zeros((2,3,4))
"""
Explanation: N-dimensional arrays
You can also create an N-dimensional array of arbitrary rank. For example, here's a 3D array (rank=3), with shape (2,3,4):
End of explanation
"""
type(np.zeros((3,4)))
"""
Explanation: Array type
NumPy arrays have the type ndarrays:
End of explanation
"""
np.ones((3,4))
"""
Explanation: np.ones
Many other NumPy functions create ndarrays.
Here's a 3x4 matrix full of ones:
End of explanation
"""
np.full((3,4), np.pi)
"""
Explanation: np.full
Creates an array of the given shape initialized with the given value. Here's a 3x4 matrix full of π.
End of explanation
"""
np.empty((2,3))
"""
Explanation: np.empty
An uninitialized 2x3 array (its content is not predictable, as it is whatever is in memory at that point):
End of explanation
"""
np.array([[1,2,3,4], [10, 20, 30, 40]])
"""
Explanation: np.array
Of course you can initialize an ndarray using a regular python array. Just call the array function:
End of explanation
"""
np.arange(1, 5)
"""
Explanation: np.arange
You can create an ndarray using NumPy's range function, which is similar to python's built-in range function:
End of explanation
"""
np.arange(1.0, 5.0)
"""
Explanation: It also works with floats:
End of explanation
"""
np.arange(1, 5, 0.5)
"""
Explanation: Of course you can provide a step parameter:
End of explanation
"""
print(np.arange(0, 5/3, 1/3)) # depending on floating point errors, the max value is 4/3 or 5/3.
print(np.arange(0, 5/3, 0.333333333))
print(np.arange(0, 5/3, 0.333333334))
"""
Explanation: However, when dealing with floats, the exact number of elements in the array is not always predictable. For example, consider this:
End of explanation
"""
print(np.linspace(0, 5/3, 6))
"""
Explanation: np.linspace
For this reason, it is generally preferable to use the linspace function instead of arange when working with floats. The linspace function returns an array containing a specific number of points evenly distributed between two values (note that the maximum value is included, contrary to arange):
End of explanation
"""
np.random.rand(3,4)
"""
Explanation: np.rand and np.randn
A number of functions are available in NumPy's random module to create ndarrays initialized with random values.
For example, here is a 3x4 matrix initialized with random floats between 0 and 1 (uniform distribution):
End of explanation
"""
np.random.randn(3,4)
"""
Explanation: Here's a 3x4 matrix containing random floats sampled from a univariate normal distribution (Gaussian distribution) of mean 0 and variance 1:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(np.random.rand(100000), normed=True, bins=100, histtype="step", color="blue", label="rand")
plt.hist(np.random.randn(100000), normed=True, bins=100, histtype="step", color="red", label="randn")
plt.axis([-2.5, 2.5, 0, 1.1])
plt.legend(loc = "upper left")
plt.title("Random distributions")
plt.xlabel("Value")
plt.ylabel("Density")
plt.show()
"""
Explanation: To give you a feel of what these distributions look like, let's use matplotlib (see the matplotlib tutorial for more details):
End of explanation
"""
def my_function(z, y, x):
return x * y + z
np.fromfunction(my_function, (3, 2, 10))
"""
Explanation: np.fromfunction
You can also initialize an ndarray using a function:
End of explanation
"""
c = np.arange(1, 5)
print(c.dtype, c)
c = np.arange(1.0, 5.0)
print(c.dtype, c)
"""
Explanation: NumPy first creates three ndarrays (one per dimension), each of shape (3, 2, 10). Each array has values equal to the coordinate along a specific axis. For example, all elements in the z array are equal to their z-coordinate:
[[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
[[ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]
[ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]]]
So the terms x, y and z in the expression x * y + z above are in fact ndarrays (we will discuss arithmetic operations on arrays below). The point is that the function my_function is only called once, instead of once per element. This makes initialization very efficient.
Array data
dtype
NumPy's ndarrays are also efficient in part because all their elements must have the same type (usually numbers).
You can check what the data type is by looking at the dtype attribute:
End of explanation
"""
d = np.arange(1, 5, dtype=np.complex64)
print(d.dtype, d)
"""
Explanation: Instead of letting NumPy guess what data type to use, you can set it explicitly when creating an array by setting the dtype parameter:
End of explanation
"""
e = np.arange(1, 5, dtype=np.complex64)
e.itemsize
"""
Explanation: Available data types include int8, int16, int32, int64, uint8|16|32|64, float16|32|64 and complex64|128. Check out the documentation for the full list.
itemsize
The itemsize attribute returns the size (in bytes) of each item:
End of explanation
"""
f = np.array([[1,2],[1000, 2000]], dtype=np.int32)
f.data
"""
Explanation: data buffer
An array's data is actually stored in memory as a flat (one dimensional) byte buffer. It is available via the data attribute (you will rarely need it, though).
End of explanation
"""
if (hasattr(f.data, "tobytes")):
data_bytes = f.data.tobytes() # python 3
else:
data_bytes = memoryview(f.data).tobytes() # python 2
data_bytes
"""
Explanation: In python 2, f.data is a buffer. In python 3, it is a memoryview.
End of explanation
"""
g = np.arange(24)
print(g)
print("Rank:", g.ndim)
g.shape = (6, 4)
print(g)
print("Rank:", g.ndim)
g.shape = (2, 3, 4)
print(g)
print("Rank:", g.ndim)
"""
Explanation: Several ndarrays can share the same data buffer, meaning that modifying one will also modify the others. We will see an example in a minute.
Reshaping an array
In place
Changing the shape of an ndarray is as simple as setting its shape attribute. However, the array's size must remain the same.
End of explanation
"""
g2 = g.reshape(4,6)
print(g2)
print("Rank:", g2.ndim)
"""
Explanation: reshape
The reshape function returns a new ndarray object pointing at the same data. This means that modifying one array will also modify the other.
End of explanation
"""
g2[1, 2] = 999
g2
"""
Explanation: Set item at row 1, col 2 to 999 (more about indexing below).
End of explanation
"""
g
"""
Explanation: The corresponding element in g has been modified.
End of explanation
"""
g.ravel()
"""
Explanation: ravel
Finally, the ravel function returns a new one-dimensional ndarray that also points to the same data:
End of explanation
"""
a = np.array([14, 23, 32, 41])
b = np.array([5, 4, 3, 2])
print("a + b =", a + b)
print("a - b =", a - b)
print("a * b =", a * b)
print("a / b =", a / b)
print("a // b =", a // b)
print("a % b =", a % b)
print("a ** b =", a ** b)
"""
Explanation: Arithmetic operations
All the usual arithmetic operators (+, -, *, /, //, **, etc.) can be used with ndarrays. They apply elementwise:
End of explanation
"""
h = np.arange(5).reshape(1, 1, 5)
h
"""
Explanation: Note that the multiplication is not a matrix multiplication. We will discuss matrix operations below.
The arrays must have the same shape. If they do not, NumPy will apply the broadcasting rules.
Broadcasting
In general, when NumPy expects arrays of the same shape but finds that this is not the case, it applies the so-called broadcasting rules:
First rule
If the arrays do not have the same rank, then a 1 will be prepended to the smaller ranking arrays until their ranks match.
End of explanation
"""
h + [10, 20, 30, 40, 50] # same as: h + [[[10, 20, 30, 40, 50]]]
"""
Explanation: Now let's try to add a 1D array of shape (5,) to this 3D array of shape (1,1,5). Applying the first rule of broadcasting!
End of explanation
"""
k = np.arange(6).reshape(2, 3)
k
"""
Explanation: Second rule
Arrays with a 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is repeated along that dimension.
End of explanation
"""
k + [[100], [200]] # same as: k + [[100, 100, 100], [200, 200, 200]]
"""
Explanation: Let's try to add a 2D array of shape (2,1) to this 2D ndarray of shape (2, 3). NumPy will apply the second rule of broadcasting:
End of explanation
"""
k + [100, 200, 300] # after rule 1: [[100, 200, 300]], and after rule 2: [[100, 200, 300], [100, 200, 300]]
"""
Explanation: Combining rules 1 & 2, we can do this:
End of explanation
"""
k + 1000 # same as: k + [[1000, 1000, 1000], [1000, 1000, 1000]]
"""
Explanation: And also, very simply:
End of explanation
"""
try:
k + [33, 44]
except ValueError as e:
print(e)
"""
Explanation: Third rule
After rules 1 & 2, the sizes of all arrays must match.
End of explanation
"""
k1 = np.arange(0, 5, dtype=np.uint8)
print(k1.dtype, k1)
k2 = k1 + np.array([5, 6, 7, 8, 9], dtype=np.int8)
print(k2.dtype, k2)
"""
Explanation: Broadcasting rules are used in many NumPy operations, not just arithmetic operations, as we will see below.
For more details about broadcasting, check out the documentation.
Upcasting
When trying to combine arrays with different dtypes, NumPy will upcast to a type capable of handling all possible values (regardless of what the actual values are).
End of explanation
"""
k3 = k1 + 1.5
print(k3.dtype, k3)
"""
Explanation: Note that int16 is required to represent all possible int8 and uint8 values (from -128 to 255), even though in this case a uint8 would have sufficed.
End of explanation
"""
m = np.array([20, -5, 30, 40])
m < [15, 16, 35, 36]
"""
Explanation: Conditional operators
The conditional operators also apply elementwise:
End of explanation
"""
m < 25 # equivalent to m < [25, 25, 25, 25]
"""
Explanation: And using broadcasting:
End of explanation
"""
m[m < 25]
"""
Explanation: This is most useful in conjunction with boolean indexing (discussed below).
End of explanation
"""
a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])
print(a)
print("mean =", a.mean())
"""
Explanation: Mathematical and statistical functions
Many mathematical and statistical functions are available for ndarrays.
ndarray methods
Some functions are simply ndarray methods, for example:
End of explanation
"""
for func in (a.min, a.max, a.sum, a.prod, a.std, a.var):
print(func.__name__, "=", func())
"""
Explanation: Note that this computes the mean of all elements in the ndarray, regardless of its shape.
Here are a few more useful ndarray methods:
End of explanation
"""
c=np.arange(24).reshape(2,3,4)
c
c.sum(axis=0) # sum across matrices
c.sum(axis=1) # sum across rows
"""
Explanation: These functions accept an optional argument axis which lets you ask for the operation to be performed on elements along the given axis. For example:
End of explanation
"""
c.sum(axis=(0,2)) # sum across matrices and columns
0+1+2+3 + 12+13+14+15, 4+5+6+7 + 16+17+18+19, 8+9+10+11 + 20+21+22+23
"""
Explanation: You can also sum over multiple axes:
End of explanation
"""
a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])
np.square(a)
"""
Explanation: Universal functions
NumPy also provides fast elementwise functions called universal functions, or ufunc. They are vectorized wrappers of simple functions. For example square returns a new ndarray which is a copy of the original ndarray except that each element is squared:
End of explanation
"""
print("Original ndarray")
print(a)
for func in (np.abs, np.sqrt, np.exp, np.log, np.sign, np.ceil, np.modf, np.isnan, np.cos):
print("\n", func.__name__)
print(func(a))
"""
Explanation: Here are a few more useful unary ufuncs:
End of explanation
"""
a = np.array([1, -2, 3, 4])
b = np.array([2, 8, -1, 7])
np.add(a, b) # equivalent to a + b
np.greater(a, b) # equivalent to a > b
np.maximum(a, b)
np.copysign(a, b)
"""
Explanation: Binary ufuncs
There are also many binary ufuncs, that apply elementwise on two ndarrays. Broadcasting rules are applied if the arrays do not have the same shape:
End of explanation
"""
a = np.array([1, 5, 3, 19, 13, 7, 3])
a[3]
a[2:5]
a[2:-1]
a[:2]
a[2::2]
a[::-1]
"""
Explanation: Array indexing
One-dimensional arrays
One-dimensional NumPy arrays can be accessed more or less like regular python arrays:
End of explanation
"""
a[3]=999
a
"""
Explanation: Of course, you can modify elements:
End of explanation
"""
a[2:5] = [997, 998, 999]
a
"""
Explanation: You can also modify an ndarray slice:
End of explanation
"""
a[2:5] = -1
a
"""
Explanation: Differences with regular python arrays
Contrary to regular python arrays, if you assign a single value to an ndarray slice, it is copied across the whole slice, thanks to broadcasting rules discussed above.
End of explanation
"""
try:
a[2:5] = [1,2,3,4,5,6] # too long
except ValueError as e:
print(e)
"""
Explanation: Also, you cannot grow or shrink ndarrays this way:
End of explanation
"""
try:
del a[2:5]
except ValueError as e:
print(e)
"""
Explanation: You cannot delete elements either:
End of explanation
"""
a_slice = a[2:6]
a_slice[1] = 1000
a # the original array was modified!
a[3] = 2000
a_slice # similarly, modifying the original array modifies the slice!
"""
Explanation: Last but not least, ndarray slices are actually views on the same data buffer. This means that if you create a slice and modify it, you are actually going to modify the original ndarray as well!
End of explanation
"""
another_slice = a[2:6].copy()
another_slice[1] = 3000
a # the original array is untouched
a[3] = 4000
another_slice # similary, modifying the original array does not affect the slice copy
"""
Explanation: If you want a copy of the data, you need to use the copy method:
End of explanation
"""
b = np.arange(48).reshape(4, 12)
b
b[1, 2] # row 1, col 2
b[1, :] # row 1, all columns
b[:, 1] # all rows, column 1
"""
Explanation: Multi-dimensional arrays
Multi-dimensional arrays can be accessed in a similar way by providing an index or slice for each axis, separated by commas:
End of explanation
"""
b[1, :]
b[1:2, :]
"""
Explanation: Caution: note the subtle difference between these two expressions:
End of explanation
"""
b[(0,2), 2:5] # rows 0 and 2, columns 2 to 4 (5-1)
b[:, (-1, 2, -1)] # all rows, columns -1 (last), 2 and -1 (again, and in this order)
"""
Explanation: The first expression returns row 1 as a 1D array of shape (12,), while the second returns that same row as a 2D array of shape (1, 12).
Fancy indexing
You may also specify a list of indices that you are interested in. This is referred to as fancy indexing.
End of explanation
"""
b[(-1, 2, -1, 2), (5, 9, 1, 9)] # returns a 1D array with b[-1, 5], b[2, 9], b[-1, 1] and b[2, 9] (again)
"""
Explanation: If you provide multiple index arrays, you get a 1D ndarray containing the values of the elements at the specified coordinates.
End of explanation
"""
c = b.reshape(4,2,6)
c
c[2, 1, 4] # matrix 2, row 1, col 4
c[2, :, 3] # matrix 2, all rows, col 3
"""
Explanation: Higher dimensions
Everything works just as well with higher dimensional arrays, but it's useful to look at a few examples:
End of explanation
"""
c[2, 1] # Return matrix 2, row 1, all columns. This is equivalent to c[2, 1, :]
"""
Explanation: If you omit coordinates for some axes, then all elements in these axes are returned:
End of explanation
"""
c[2, ...] # matrix 2, all rows, all columns. This is equivalent to c[2, :, :]
c[2, 1, ...] # matrix 2, row 1, all columns. This is equivalent to c[2, 1, :]
c[2, ..., 3] # matrix 2, all rows, column 3. This is equivalent to c[2, :, 3]
c[..., 3] # all matrices, all rows, column 3. This is equivalent to c[:, :, 3]
"""
Explanation: Ellipsis (...)
You may also write an ellipsis (...) to ask that all non-specified axes be entirely included.
End of explanation
"""
b = np.arange(48).reshape(4, 12)
b
rows_on = np.array([True, False, True, False])
b[rows_on, :] # Rows 0 and 2, all columns. Equivalent to b[(0, 2), :]
cols_on = np.array([False, True, False] * 4)
b[:, cols_on] # All rows, columns 1, 4, 7 and 10
"""
Explanation: Boolean indexing
You can also provide an ndarray of boolean values on one axis to specify the indices that you want to access.
End of explanation
"""
b[np.ix_(rows_on, cols_on)]
np.ix_(rows_on, cols_on)
"""
Explanation: np.ix_
You cannot use boolean indexing this way on multiple axes, but you can work around this by using the ix_ function:
End of explanation
"""
b[b % 3 == 1]
"""
Explanation: If you use a boolean array that has the same shape as the ndarray, then you get in return a 1D array containing all the values that have True at their coordinate. This is generally used along with conditional operators:
End of explanation
"""
c = np.arange(24).reshape(2, 3, 4) # A 3D array (composed of two 3x4 matrices)
c
for m in c:
print("Item:")
print(m)
for i in range(len(c)): # Note that len(c) == c.shape[0]
print("Item:")
print(c[i])
"""
Explanation: Iterating
Iterating over ndarrays is very similar to iterating over regular python arrays. Note that iterating over multidimensional arrays is done with respect to the first axis.
End of explanation
"""
for i in c.flat:
print("Item:", i)
"""
Explanation: If you want to iterate on all elements in the ndarray, simply iterate over the flat attribute:
End of explanation
"""
q1 = np.full((3,4), 1.0)
q1
q2 = np.full((4,4), 2.0)
q2
q3 = np.full((3,4), 3.0)
q3
"""
Explanation: Stacking arrays
It is often useful to stack together different arrays. NumPy offers several functions to do just that. Let's start by creating a few arrays.
End of explanation
"""
q4 = np.vstack((q1, q2, q3))
q4
q4.shape
"""
Explanation: vstack
Now let's stack them vertically using vstack:
End of explanation
"""
q5 = np.hstack((q1, q3))
q5
q5.shape
"""
Explanation: This was possible because q1, q2 and q3 all have the same shape (except for the vertical axis, but that's ok since we are stacking on that axis).
hstack
We can also stack arrays horizontally using hstack:
End of explanation
"""
try:
q5 = np.hstack((q1, q2, q3))
except ValueError as e:
print(e)
"""
Explanation: This is possible because q1 and q3 both have 3 rows. But since q2 has 4 rows, it cannot be stacked horizontally with q1 and q3:
End of explanation
"""
q7 = np.concatenate((q1, q2, q3), axis=0) # Equivalent to vstack
q7
q7.shape
"""
Explanation: concatenate
The concatenate function stacks arrays along any given existing axis.
End of explanation
"""
q8 = np.stack((q1, q3))
q8
q8.shape
"""
Explanation: As you might guess, hstack is equivalent to calling concatenate with axis=1.
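For instance, a quick check (reusing q1 and q3 from above):
np.concatenate((q1, q3), axis=1)   # same result as np.hstack((q1, q3))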
stack
The stack function stacks arrays along a new axis. All arrays have to have the same shape.
End of explanation
"""
r = np.arange(24).reshape(6,4)
r
"""
Explanation: Splitting arrays
Splitting is the opposite of stacking. For example, let's use the vsplit function to split a matrix vertically.
First let's create a 6x4 matrix:
End of explanation
"""
r1, r2, r3 = np.vsplit(r, 3)
r1
r2
r3
"""
Explanation: Now let's split it in three equal parts, vertically:
End of explanation
"""
r4, r5 = np.hsplit(r, 2)
r4
r5
"""
Explanation: There is also a split function which splits an array along any given axis. Calling vsplit is equivalent to calling split with axis=0. There is also an hsplit function, equivalent to calling split with axis=1:
End of explanation
"""
t = np.arange(24).reshape(4,2,3)
t
"""
Explanation: Transposing arrays
The transpose method creates a new view on an ndarray's data, with axes permuted in the given order.
For example, let's create a 3D array:
End of explanation
"""
t1 = t.transpose((1,2,0))
t1
t1.shape
"""
Explanation: Now let's create an ndarray such that the axes 0, 1, 2 (depth, height, width) are re-ordered to 1, 2, 0 (depth→width, height→depth, width→height):
End of explanation
"""
t2 = t.transpose() # equivalent to t.transpose((2, 1, 0))
t2
t2.shape
"""
Explanation: By default, transpose reverses the order of the dimensions:
End of explanation
"""
t3 = t.swapaxes(0,1) # equivalent to t.transpose((1, 0, 2))
t3
t3.shape
"""
Explanation: NumPy provides a convenience function swapaxes to swap two axes. For example, let's create a new view of t with depth and height swapped:
End of explanation
"""
m1 = np.arange(10).reshape(2,5)
m1
m1.T
"""
Explanation: Linear algebra
NumPy 2D arrays can be used to represent matrices efficiently in python. We will just quickly go through some of the main matrix operations available. For more details about Linear Algebra, vectors and matrices, go through the Linear Algebra tutorial.
Matrix transpose
The T attribute is equivalent to calling transpose() when the rank is ≥2:
End of explanation
"""
m2 = np.arange(5)
m2
m2.T
"""
Explanation: The T attribute has no effect on rank 0 (empty) or rank 1 arrays:
End of explanation
"""
m2r = m2.reshape(1,5)
m2r
m2r.T
"""
Explanation: We can get the desired transposition by first reshaping the 1D array to a single-row matrix (2D):
End of explanation
"""
n1 = np.arange(10).reshape(2, 5)
n1
n2 = np.arange(15).reshape(5,3)
n2
n1.dot(n2)
"""
Explanation: Matrix dot product
Let's create two matrices and execute a matrix dot product using the dot method.
End of explanation
"""
import numpy.linalg as linalg
m3 = np.array([[1,2,3],[5,7,11],[21,29,31]])
m3
linalg.inv(m3)
"""
Explanation: Caution: as mentioned previously, n1*n2 is not a dot product, it is an elementwise product.
Matrix inverse and pseudo-inverse
Many of the linear algebra functions are available in the numpy.linalg module, in particular the inv function to compute a square matrix's inverse:
End of explanation
"""
linalg.pinv(m3)
"""
Explanation: You can also compute the pseudoinverse using pinv:
End of explanation
"""
m3.dot(linalg.inv(m3))
"""
Explanation: Identity matrix
The product of a matrix by its inverse returns the identity matrix (with small floating point errors):
End of explanation
"""
np.eye(3)
"""
Explanation: You can create an identity matrix of size NxN by calling eye:
End of explanation
"""
q, r = linalg.qr(m3)
q
r
q.dot(r) # q.r equals m3
"""
Explanation: QR decomposition
The qr function computes the QR decomposition of a matrix:
End of explanation
"""
linalg.det(m3) # Computes the matrix determinant
"""
Explanation: Determinant
The det function computes the matrix determinant:
End of explanation
"""
eigenvalues, eigenvectors = linalg.eig(m3)
eigenvalues # λ
eigenvectors # v
m3.dot(eigenvectors) - eigenvalues * eigenvectors # m3.v - λ*v = 0
"""
Explanation: Eigenvalues and eigenvectors
The eig function computes the eigenvalues and eigenvectors of a square matrix:
End of explanation
"""
m4 = np.array([[1,0,0,0,2], [0,0,3,0,0], [0,0,0,0,0], [0,2,0,0,0]])
m4
U, S_diag, V = linalg.svd(m4)
U
S_diag
"""
Explanation: Singular Value Decomposition
The svd function takes a matrix and returns its singular value decomposition:
End of explanation
"""
S = np.zeros((4, 5))
S[np.diag_indices(4)] = S_diag
S # Σ
V
U.dot(S).dot(V) # U.Σ.V == m4
"""
Explanation: The svd function just returns the values in the diagonal of Σ, but we want the full Σ matrix, so let's create it:
End of explanation
"""
np.diag(m3) # the values in the diagonal of m3 (top left to bottom right)
np.trace(m3) # equivalent to np.diag(m3).sum()
"""
Explanation: Diagonal and trace
End of explanation
"""
coeffs = np.array([[2, 6], [5, 3]])
depvars = np.array([6, -9])
solution = linalg.solve(coeffs, depvars)
solution
"""
Explanation: Solving a system of linear scalar equations
The solve function solves a system of linear scalar equations, such as:
$2x + 6y = 6$
$5x + 3y = -9$
End of explanation
"""
coeffs.dot(solution), depvars # yep, it's the same
"""
Explanation: Let's check the solution:
End of explanation
"""
np.allclose(coeffs.dot(solution), depvars)
"""
Explanation: Looks good! Another way to check the solution:
End of explanation
"""
import math
data = np.empty((768, 1024))
for y in range(768):
for x in range(1024):
data[y, x] = math.sin(x*y/40.5) # BAD! Very inefficient.
"""
Explanation: Vectorization
Instead of executing operations on individual array items, one at a time, your code is much more efficient if you try to stick to array operations. This is called vectorization. This way, you can benefit from NumPy's many optimizations.
For example, let's say we want to generate a 768x1024 array based on the formula $sin(xy/40.5)$. A bad option would be to do the math in python using nested loops:
End of explanation
"""
x_coords = np.arange(0, 1024) # [0, 1, 2, ..., 1023]
y_coords = np.arange(0, 768) # [0, 1, 2, ..., 767]
X, Y = np.meshgrid(x_coords, y_coords)
X
Y
"""
Explanation: Sure, this works, but it's terribly inefficient since the loops are taking place in pure python. Let's vectorize this algorithm. First, we will use NumPy's meshgrid function which generates coordinate matrices from coordinate vectors.
End of explanation
"""
data = np.sin(X*Y/40.5)
"""
Explanation: As you can see, both X and Y are 768x1024 arrays, and all values in X correspond to the horizontal coordinate, while all values in Y correspond to the vertical coordinate.
Now we can simply compute the result using array operations:
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.cm as cm
fig = plt.figure(1, figsize=(7, 6))
plt.imshow(data, cmap=cm.hot, interpolation="bicubic")
plt.show()
"""
Explanation: Now we can plot this data using matplotlib's imshow function (see the matplotlib tutorial).
End of explanation
"""
a = np.random.rand(2,3)
a
np.save("my_array", a)
"""
Explanation: Saving and loading
NumPy makes it easy to save and load ndarrays in binary or text format.
Binary .npy format
Let's create a random array and save it.
End of explanation
"""
with open("my_array.npy", "rb") as f:
content = f.read()
content
"""
Explanation: Done! Since no file extension was provided, NumPy automatically added .npy. Let's take a peek at the file content:
End of explanation
"""
a_loaded = np.load("my_array.npy")
a_loaded
"""
Explanation: To load this file into a NumPy array, simply call load:
End of explanation
"""
np.savetxt("my_array.csv", a)
"""
Explanation: Text format
Let's try saving the array in text format:
End of explanation
"""
with open("my_array.csv", "rt") as f:
print(f.read())
"""
Explanation: Now let's look at the file content:
End of explanation
"""
np.savetxt("my_array.csv", a, delimiter=",")
"""
Explanation: This is a plain-text file with spaces as delimiters (the savetxt default). You can set a different delimiter:
End of explanation
"""
a_loaded = np.loadtxt("my_array.csv", delimiter=",")
a_loaded
"""
Explanation: To load this file, just use loadtxt:
End of explanation
"""
b = np.arange(24, dtype=np.uint8).reshape(2, 3, 4)
b
np.savez("my_arrays", my_a=a, my_b=b)
"""
Explanation: Zipped .npz format
It is also possible to save multiple arrays in one zipped file:
End of explanation
"""
with open("my_arrays.npz", "rb") as f:
content = f.read()
repr(content)[:180] + "[...]"
"""
Explanation: Again, let's take a peek at the file content. Note that the .npz file extension was automatically added.
End of explanation
"""
my_arrays = np.load("my_arrays.npz")
my_arrays
"""
Explanation: You then load this file like so:
End of explanation
"""
my_arrays.keys()
my_arrays["my_a"]
"""
Explanation: This is a dict-like object which loads the arrays lazily:
End of explanation
"""
|
gilmana/Cu_transition_time_course- | data_explore_failed_clstr_mthds/HDBSCAN_clustering.ipynb | mit | # Clustering the pearsons_R with N/A values removed
hdb_t1 = time.time()
hdb_pearson_r = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_pearson_r)
hdb_pearson_r_labels = hdb_pearson_r.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_pearson_r_labels)) # unique cluster labels; -1 is noise
print(np.bincount(hdb_pearson_r_labels[hdb_pearson_r_labels!=-1]))
pearson_clusters = {i: np.where(hdb_pearson_r_labels == i)[0] for i in range(2)}
pearson_clusters
#pd.set_option('display.height', 500) #These two commands allow for the display of max of 500 rows - exploring genes
#pd.set_option('display.max_rows', 500)
df2_TPM.iloc[pearson_clusters[1],:] #the genes that were clustered together [0,1]
"""
Explanation: ### Clustering pearsons_r with HDBSCAN
End of explanation
"""
df3_euclidean_mean.hist()
# Clustering the mean centered euclidean distance of TPM counts
hdb_t1 = time.time()
hdb_euclidean_mean = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_euclidean_mean)
hdb_euclidean_mean_labels = hdb_euclidean_mean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_euclidean_mean_labels))
print(np.bincount(hdb_euclidean_mean_labels[hdb_euclidean_mean_labels!=-1]))
euclidean_mean_clusters = {i: np.where(hdb_euclidean_mean_labels == i)[0] for i in range(2)}
df2_TPM.iloc[euclidean_mean_clusters[1],:]
"""
Explanation: Looks like there are two clusters, some expression and zero expression across samples.
### Clustering mean centered euclidean distance with with HDBSCAN
End of explanation
"""
df3_euclidean_log2
# Clustering the log2 transformed euclidean distance of TPM counts
hdb_t1 = time.time()
hdb_euclidean_log2 = hdbscan.HDBSCAN(metric = "precomputed", min_cluster_size=10).fit(df3_euclidean_log2)
hdb_euclidean_log2_labels = hdb_euclidean_log2.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_euclidean_log2_labels))
print(np.bincount(hdb_euclidean_log2_labels[hdb_euclidean_log2_labels!=-1]))
euclidean_log2_clusters = {i: np.where(hdb_euclidean_log2_labels == i)[0] for i in range(2)}
df2_TPM.iloc[euclidean_log2_clusters[1],:]
"""
Explanation: Looks like 2 clusters - both with zero expression.
Whether the input is a NumPy array or a pandas DataFrame, the result is the same. Let's now try to get the indices of the clustered points.
### Clustering log transformed euclidean distance with with HDBSCAN
End of explanation
"""
df2_TPM_values = df2_TPM.loc[:,"5GB1_FM40_T0m_TR2":"5GB1_FM40_T180m_TR1"] #isolating the data values
df2_TPM_values_T = df2_TPM_values.T #transposing the data
standard_scaler = StandardScaler()
TPM_counts_mean_centered = standard_scaler.fit_transform(df2_TPM_values_T) #mean centering the data
TPM_counts_mean_centered = pd.DataFrame(TPM_counts_mean_centered) #back to Dataframe
#transposing back to original form and reincerting indeces and columns
my_index = df2_TPM_values.index
my_columns = df2_TPM_values.columns
TPM_counts_mean_centered = TPM_counts_mean_centered.T
TPM_counts_mean_centered.set_index(my_index, inplace=True)
TPM_counts_mean_centered.columns = my_columns
# Clustering the mean-centered TPM counts with HDBSCAN's built-in euclidean metric
hdb_t1 = time.time()
hdb_euclidean = hdbscan.HDBSCAN(metric = "euclidean", min_cluster_size=5).fit(TPM_counts_mean_centered)
hdb_euclidean_labels = hdb_euclidean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_euclidean_labels))
print(np.bincount(hdb_euclidean_labels[hdb_euclidean_labels!=-1]))
"""
Explanation: Clustering using built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance)
End of explanation
"""
Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[1],:]
"""
Explanation: Let's look at some clusters
Euclidean_standard_scaled_clusters = {i: np.where(hdb_euclidean_labels == i)[0] for i in range(7)}
df2_TPM.iloc[Euclidean_standard_scaled_clusters[0],:]
End of explanation
"""
df2_TPM_log2_scale= df2_TPM_log2.T #transposing the data
standard_scaler = StandardScaler()
TPM_log2_mean_scaled = standard_scaler.fit_transform(df2_TPM_log2_scale) #mean centering the data
TPM_log2_mean_scaled = pd.DataFrame(TPM_log2_mean_scaled) #back to Dataframe
#transposing back to original form and reincerting indeces and columns
my_index = df2_TPM_values.index
my_columns = df2_TPM_values.columns
TPM_log2_mean_scaled = TPM_log2_mean_scaled.T
TPM_log2_mean_scaled.set_index(my_index, inplace=True)
TPM_log2_mean_scaled.columns = my_columns
# Clustering the log2-transformed TPM counts with HDBSCAN's built-in euclidean metric
hdb_t1 = time.time()
hdb_log2_euclidean = hdbscan.HDBSCAN(metric = "euclidean", min_cluster_size=5).fit(TPM_log2_mean_scaled)
hdb_log2_euclidean = hdb_log2_euclidean.labels_
hdb_elapsed_time = time.time() - hdb_t1
print("time to cluster", hdb_elapsed_time)
print(np.unique(hdb_log2_euclidean))
print(np.bincount(hdb_log2_euclidean[hdb_log2_euclidean!=-1]))
"""
Explanation: Euclidean_standard_scaled_clusters
Clustering log2 transformed data using built-in HDBSCAN euclidean distance metric (mean centered and scaled to unit variance)
End of explanation
"""
|
david-hoffman/scripts | notebooks/montecarlo_numbapro.ipynb | apache-2.0 | import numpy as np # numpy namespace
from timeit import default_timer as timer # for timing
from matplotlib import pyplot # for plotting
import math
def step_numpy(dt, prices, c0, c1, noises):
return prices * np.exp(c0 * dt + c1 * noises)
def mc_numpy(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]): # for each time step
prices = paths[:, j - 1] # last prices
# gaussian noises for simulation
noises = np.random.normal(0., 1., prices.size)
# simulate
paths[:, j] = step_numpy(dt, prices, c0, c1, noises)
"""
Explanation: A Monte Carlo Option Pricer
This notebook introduces the vectorize and CUDA Python features in NumbaPro to speedup a monte carlo option pricer.
A Numpy Implementation
The following is a NumPy implementation of a simple Monte Carlo pricer.
It consists of two functions.
The mc_numpy function is the entry point of the pricer.
The entire simulation is divided into small time steps dt.
The step_numpy function simulates the next batch of prices for each dt.
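Each step applies the standard geometric Brownian motion update (formula added here for reference): $S_{t+dt} = S_t \exp\big[(r - \tfrac{1}{2}\sigma^2)\,dt + \sigma\sqrt{dt}\,\epsilon\big]$ with $\epsilon \sim N(0,1)$, which is exactly what step_numpy computes with c0 = interest - 0.5 * volatility**2 and c1 = volatility * sqrt(dt).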
End of explanation
"""
# stock parameter
StockPrice = 20.83
StrikePrice = 21.50
Volatility = 0.021
InterestRate = 0.20
Maturity = 5. / 12.
# monte-carlo parameter
NumPath = 3000000
NumStep = 100
# plotting
MAX_PATH_IN_PLOT = 50
"""
Explanation: Configurations
End of explanation
"""
def driver(pricer, do_plot=False):
paths = np.zeros((NumPath, NumStep + 1), order='F')
paths[:, 0] = StockPrice
DT = Maturity / NumStep
ts = timer()
pricer(paths, DT, InterestRate, Volatility)
te = timer()
elapsed = te - ts
ST = paths[:, -1]
PaidOff = np.maximum(paths[:, -1] - StrikePrice, 0)
print('Result')
fmt = '%20s: %s'
print(fmt % ('stock price', np.mean(ST)))
print(fmt % ('standard error', np.std(ST) / np.sqrt(NumPath)))
print(fmt % ('paid off', np.mean(PaidOff)))
optionprice = np.mean(PaidOff) * np.exp(-InterestRate * Maturity)
print(fmt % ('option price', optionprice))
print('Performance')
NumCompute = NumPath * NumStep
print(fmt % ('Mstep/second', '%.2f' % (NumCompute / elapsed / 1e6)))
print(fmt % ('time elapsed', '%.3fs' % (te - ts)))
if do_plot:
pathct = min(NumPath, MAX_PATH_IN_PLOT)
for i in range(pathct):
pyplot.plot(paths[i])
print('Plotting %d/%d paths' % (pathct, NumPath))
pyplot.show()
return elapsed
"""
Explanation: Driver
The driver measures the performance of the given pricer and plots the simulation paths.
End of explanation
"""
numpy_time = driver(mc_numpy, do_plot=True)
"""
Explanation: Result
End of explanation
"""
from numbapro import vectorize
@vectorize(['f8(f8, f8, f8, f8, f8)'])
def step_cpuvec(last, dt, c0, c1, noise):
return last * math.exp(c0 * dt + c1 * noise)
def mc_cpuvec(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]):
prices = paths[:, j - 1]
noises = np.random.normal(0., 1., prices.size)
paths[:, j] = step_cpuvec(prices, dt, c0, c1, noises)
cpuvec_time = driver(mc_cpuvec, do_plot=True)
"""
Explanation: Basic Vectorize
The vectorize decorator compiles a scalar function into a Numpy ufunc-like object for operation on arrays.
The decorator must be provided with a list of possible signatures.
The step_cpuvec takes 5 double arrays and return a double array.
End of explanation
"""
@vectorize(['f8(f8, f8, f8, f8, f8)'], target='parallel')
def step_parallel(last, dt, c0, c1, noise):
return last * math.exp(c0 * dt + c1 * noise)
def mc_parallel(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]):
prices = paths[:, j - 1]
noises = np.random.normal(0., 1., prices.size)
paths[:, j] = step_parallel(prices, dt, c0, c1, noises)
parallel_time = driver(mc_parallel, do_plot=True)
"""
Explanation: Parallel Vectorize
By setting the target to parallel, the vectorize decorator produces a multithreaded implementation.
End of explanation
"""
@vectorize(['f8(f8, f8, f8, f8, f8)'], target='gpu')
def step_gpuvec(last, dt, c0, c1, noise):
return last * math.exp(c0 * dt + c1 * noise)
def mc_gpuvec(paths, dt, interest, volatility):
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
for j in range(1, paths.shape[1]):
prices = paths[:, j - 1]
noises = np.random.normal(0., 1., prices.size)
paths[:, j] = step_gpuvec(prices, dt, c0, c1, noises)
gpuvec_time = driver(mc_gpuvec, do_plot=True)
"""
Explanation: CUDA Vectorize
To take advantage of the CUDA GPU, the user can simply set the target to gpu.
There is no difference other than the target keyword argument.
End of explanation
"""
from numbapro import cuda, jit
from numbapro.cudalib import curand
@jit('void(double[:], double[:], double, double, double, double[:])', target='gpu')
def step_cuda(last, paths, dt, c0, c1, normdist):
i = cuda.grid(1)
if i >= paths.shape[0]:
return
noise = normdist[i]
paths[i] = last[i] * math.exp(c0 * dt + c1 * noise)
def mc_cuda(paths, dt, interest, volatility):
n = paths.shape[0]
blksz = cuda.get_current_device().MAX_THREADS_PER_BLOCK
gridsz = int(math.ceil(float(n) / blksz))
# instantiate a CUDA stream for queueing async CUDA cmds
stream = cuda.stream()
# instantiate a cuRAND PRNG
prng = curand.PRNG(curand.PRNG.MRG32K3A, stream=stream)
# Allocate device side array
d_normdist = cuda.device_array(n, dtype=np.double, stream=stream)
c0 = interest - 0.5 * volatility ** 2
c1 = volatility * np.sqrt(dt)
# configure the kernel
# similar to CUDA-C: step_cuda<<<gridsz, blksz, 0, stream>>>
step_cfg = step_cuda[gridsz, blksz, stream]
# transfer the initial prices
d_last = cuda.to_device(paths[:, 0], stream=stream)
for j in range(1, paths.shape[1]):
# call cuRAND to populate d_normdist with gaussian noises
prng.normal(d_normdist, mean=0, sigma=1)
# setup memory for new prices
# device_array_like is like empty_like for GPU
d_paths = cuda.device_array_like(paths[:, j], stream=stream)
# invoke step kernel asynchronously
step_cfg(d_last, d_paths, dt, c0, c1, d_normdist)
# transfer memory back to the host
d_paths.copy_to_host(paths[:, j], stream=stream)
d_last = d_paths
# wait for all GPU work to complete
stream.synchronize()
cuda_time = driver(mc_cuda, do_plot=True)
"""
Explanation: In the above simple CUDA vectorize example, the speedup is not significant due to the memory transfer overhead. Since the kernel has relatively low compute intensity, explicit management of memory transfer would give a significant speedup.
CUDA JIT
This implementation uses the CUDA JIT feature with explicit memory transfer and asynchronous kernel call. A cuRAND random number generator is used instead of the NumPy implementation.
End of explanation
"""
def perf_plot(rawdata, xlabels):
data = [numpy_time / x for x in rawdata]
idx = np.arange(len(data))
fig = pyplot.figure()
width = 0.5
ax = fig.add_subplot(111)
ax.bar(idx, data, width)
ax.set_ylabel('normalized speedup')
ax.set_xticks(idx + width / 2)
ax.set_xticklabels(xlabels)
ax.set_ylim(0.9)
pyplot.show()
perf_plot([numpy_time, cpuvec_time, parallel_time, gpuvec_time],
['numpy', 'cpu-vect', 'parallel-vect', 'gpu-vect'])
perf_plot([numpy_time, cpuvec_time, parallel_time, gpuvec_time, cuda_time],
['numpy', 'cpu-vect', 'parallel-vect', 'gpu-vect', 'cuda'])
"""
Explanation: Performance Comparison
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | dev/notebooks/auto_examples/sklearn-gridsearchcv-replacement.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
"""
Explanation: Scikit-learn hyperparameter search wrapper
Iaroslav Shcherbatyi, Tim Head and Gilles Louppe. June 2017.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Introduction
This example assumes basic familiarity with
scikit-learn <http://scikit-learn.org/stable/index.html>_.
Search for parameters of machine learning models that result in best
cross-validation performance is necessary in almost all practical
cases to get a model with best generalization estimate. A standard
approach in scikit-learn is using :obj:sklearn.model_selection.GridSearchCV class, which takes
a set of values for every parameter to try, and simply enumerates all
combinations of parameter values. The complexity of such search grows
exponentially with the addition of new parameters. A more scalable
approach is using :obj:sklearn.model_selection.RandomizedSearchCV, which however does not take
advantage of the structure of a search space.
Scikit-optimize provides a drop-in replacement for :obj:sklearn.model_selection.GridSearchCV,
which utilizes Bayesian Optimization where a predictive model referred
to as "surrogate" is used to model the search space and utilized to
arrive at good parameter values combination as soon as possible.
Note: for a manual hyperparameter optimization example, see
"Hyperparameter Optimization" notebook.
End of explanation
"""
from skopt import BayesSearchCV
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
X, y = load_digits(n_class=10, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=.25, random_state=0)
# log-uniform: understand as search over p = exp(x) by varying x
opt = BayesSearchCV(
SVC(),
{
'C': (1e-6, 1e+6, 'log-uniform'),
'gamma': (1e-6, 1e+1, 'log-uniform'),
'degree': (1, 8), # integer valued parameter
'kernel': ['linear', 'poly', 'rbf'], # categorical parameter
},
n_iter=32,
cv=3
)
opt.fit(X_train, y_train)
print("val. score: %s" % opt.best_score_)
print("test score: %s" % opt.score(X_test, y_test))
"""
Explanation: Minimal example
A minimal example of optimizing hyperparameters of SVC (Support Vector machine Classifier) is given below.
End of explanation
"""
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
from skopt.plots import plot_objective, plot_histogram
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC, SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
X, y = load_digits(n_class=10, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# pipeline class is used as estimator to enable
# search over different model types
pipe = Pipeline([
('model', SVC())
])
# a single categorical value of the 'model' parameter
# sets the model class
# We will get ConvergenceWarnings because the problem is not well-conditioned.
# But that's fine, this is just an example.
linsvc_search = {
'model': [LinearSVC(max_iter=1000)],
'model__C': (1e-6, 1e+6, 'log-uniform'),
}
# explicit dimension classes can be specified like this
svc_search = {
'model': Categorical([SVC()]),
'model__C': Real(1e-6, 1e+6, prior='log-uniform'),
'model__gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'model__degree': Integer(1,8),
'model__kernel': Categorical(['linear', 'poly', 'rbf']),
}
opt = BayesSearchCV(
pipe,
# (parameter space, # of evaluations)
[(svc_search, 40), (linsvc_search, 16)],
cv=3
)
opt.fit(X_train, y_train)
print("val. score: %s" % opt.best_score_)
print("test score: %s" % opt.score(X_test, y_test))
print("best params: %s" % str(opt.best_params_))
"""
Explanation: Advanced example
In practice, one wants to enumerate over multiple predictive model classes,
with different search spaces and number of evaluations per class. An
example of such a search over the parameters of a Linear SVM and a Kernel SVM
is given below.
End of explanation
"""
_ = plot_objective(opt.optimizer_results_[0],
dimensions=["C", "degree", "gamma", "kernel"],
n_minimum_search=int(1e8))
plt.show()
"""
Explanation: Partial Dependence plot of the objective function for SVC
End of explanation
"""
_ = plot_histogram(opt.optimizer_results_[1], 1)
plt.show()
"""
Explanation: Plot of the histogram for LinearSVC
End of explanation
"""
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC
X, y = load_iris(return_X_y=True)
searchcv = BayesSearchCV(
SVC(gamma='scale'),
search_spaces={'C': (0.01, 100.0, 'log-uniform')},
n_iter=10,
cv=3
)
# callback handler
def on_step(optim_result):
score = -optim_result['fun']
print("best score: %s" % score)
if score >= 0.98:
print('Interrupting!')
return True
searchcv.fit(X, y, callback=on_step)
"""
Explanation: Progress monitoring and control using callback argument of fit method
It is possible to monitor the progress of :class:BayesSearchCV with an event
handler that is called on every step of subspace exploration. For single job
mode, this is called on every evaluation of model configuration, and for
parallel mode, this is called when n_jobs model configurations are evaluated
in parallel.
Additionally, exploration can be stopped if the callback returns True.
This can be used to stop the exploration early, for instance when the
accuracy that you get is sufficiently high.
An example usage is shown below.
End of explanation
"""
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC
X, y = load_iris(return_X_y=True)
searchcv = BayesSearchCV(
SVC(),
search_spaces=[
({'C': (0.1, 1.0)}, 19), # 19 iterations for this subspace
{'gamma':(0.1, 1.0)}
],
n_iter=23
)
print(searchcv.total_iterations)
"""
Explanation: Counting total iterations that will be used to explore all subspaces
Subspaces in previous examples can further increase in complexity if you add
new model subspaces or dimensions for feature extraction pipelines. For
monitoring of progress, you would like to know the total number of
iterations it will take to explore all subspaces. This can be
calculated with total_iterations property, as in the code below.
End of explanation
"""
|
imamol555/Machine-Learning | DecisionTree_Math_Fruits.ipynb | mit | training_data = [
['Green', 3, 'Apple'],
['Yellow', 3, 'Apple'],
['Red', 1, 'Grape'],
['Red', 1, 'Grape'],
['Yellow', 3, 'Lemon'],
]
"""
Explanation: Decision Tree
Training Data : Toy Dataset for fruit classifier
End of explanation
"""
#Column names for our data
header = ["color","diameter","label"]
"""Find the unique values for a column in dataset"""
def unique_values(rows,col):
return set([row[col] for row in rows])
"""count the no of examples for each label in a dataset"""
def class_counts(rows):
counts = {} # a dictionary of label -> count.
for row in rows:
# in our dataset format, the label is always the last column
label = row[-1]
if label not in counts:
counts[label] = 0
counts[label] += 1
return counts
"""Check if the value is numeric"""
def is_numeric(value):
return isinstance(value, int) or isinstance(value, float)
"""
Explanation: Useful data and Methods for our Dataset manipulation
End of explanation
"""
class Question:
def __init__(self,col, val):
self.col = col
self.val = val
def match(self,example):
# Compare the feature value in an example to the
# feature value in this question.
value = example[self.col]
if is_numeric(value):
return value >= self.val
else:
return value == self.val
def __repr__(self):
# method to print the question in a readable format.
condition = "=="
if is_numeric(self.val):
condition = ">="
return "Is %s %s %s?" % (
header[self.col], condition, str(self.val))
"""
Explanation: Let's write a class for a question which can be asked to partition the data
Each object of a question class holds a column_no and a col_value
E.g. column_no = 0 denotes color, so col_value can be Green, Yellow or Red
We can write a method which compares the feature value of an example with the feature value of the Question
End of explanation
"""
#create a new question with col = 1 and val = 3
q = Question(1,3)
#print q
q
"""
Explanation: Question format -
End of explanation
"""
"""For each row in the dataset, check if it satisfies the question. If
so, add it to 'true rows', otherwise, add it to 'false rows'.
"""
def partition(rows, question):
true_rows, false_rows = [], []
for row in rows:
if question.match(row):
true_rows.append(row)
else:
false_rows.append(row)
return true_rows, false_rows
"""
Explanation: Define a function which partitions the dataset on a given question into True and False rows/examples
End of explanation
"""
"""Calculate the Gini Impurity for a list of rows."""
def gini(rows):
counts = class_counts(rows)
impurity = 1
for lbl in counts:
prob_of_lbl = counts[lbl] / float(len(rows))
impurity -= prob_of_lbl**2
return impurity
"""
Explanation: Now calculate the Gini Impurity for a node given the input rows of the training dataset
End of explanation
"""
def info_gain(left, right, current_uncertainty):
#we need to calculate weighted avg of impurities at both child nodes
p = float(len(left)) / (len(left) + len(right))
return current_uncertainty - p * gini(left) - (1 - p) * gini(right)
"""
Explanation: Calculate the information gain for a question given the uncertainty at the present node and the uncertainties at the left and right child nodes
End of explanation
"""
"""Find the best question to ask by iterating over every feature / value
and calculating the information gain."""
def find_best_split(rows):
best_gain = 0 # keep track of the best information gain
best_question = None # keep train of the feature / value that produced it
current_uncertainty = gini(rows)
n_features = len(rows[0]) - 1 # number of columns
for col in range(n_features): # for each feature
values = set([row[col] for row in rows]) # unique values in the column
for val in values: # for each value
question = Question(col, val)
# try splitting the dataset
true_rows, false_rows = partition(rows, question)
# Skip this split if it doesn't divide the
# dataset.
if len(true_rows) == 0 or len(false_rows) == 0:
continue
# Calculate the information gain from this split
gain = info_gain(true_rows, false_rows, current_uncertainty)
# You actually can use '>' instead of '>=' here
# but I wanted the tree to look a certain way for our
# toy dataset.
if gain >= best_gain:
best_gain, best_question = gain, question
return best_gain, best_question
"""
Explanation: Which question to ask?
End of explanation
"""
"""
A Decision Node asks a question.
This holds a reference to the question, and to the two child nodes.
"""
class Decision_Node:
def __init__(self,question,true_branch,false_branch):
self.question = question
self.true_branch = true_branch
self.false_branch = false_branch
"""
Explanation: Define nodes in tree
1. Decision Node - Node with Question to ask
End of explanation
"""
"""
A Leaf node classifies data.
This holds a dictionary of class (e.g., "Apple") -> number of time it
appears in the rows from the training data that reach this leaf.
"""
class Leaf:
def __init__(self, rows):
self.predictions = class_counts(rows)
"""
Explanation: 2. Leaf node - Gives prediction
End of explanation
"""
def build_tree(rows):
# Try partitioing the dataset on each of the unique attribute,
# calculate the information gain,
# and return the question that produces the highest gain.
gain, question = find_best_split(rows)
# Base case: no further info gain
# Since we can ask no further questions,
# we'll return a leaf.
if gain == 0:
return Leaf(rows)
# If we reach here, we have found a useful feature / value
# to partition on.
true_rows, false_rows = partition(rows, question)
# Recursively build the true branch.
true_branch = build_tree(true_rows)
# Recursively build the false branch.
false_branch = build_tree(false_rows)
# Return a Question node.
# This records the best feature / value to ask at this point,
# as well as the branches to follow
    # depending on the answer.
return Decision_Node(question, true_branch, false_branch)
"""
Explanation: Build a Tree
End of explanation
"""
def print_tree(node, spacing=""):
# Base case: we've reached a leaf
if isinstance(node, Leaf):
print (spacing + "Predict", node.predictions)
return
# Print the question at this node
print (spacing + str(node.question))
# Call this function recursively on the true branch
print (spacing + '--> True:')
print_tree(node.true_branch, spacing + " ")
# Call this function recursively on the false branch
print (spacing + '--> False:')
print_tree(node.false_branch, spacing + " ")
"""
Explanation: Print the Tree
End of explanation
"""
my_tree = build_tree(training_data)
print_tree(my_tree)
"""
Explanation: All work done! Now it's time to build a model from the given training data
End of explanation
"""
def classify(row, node):
# Base case: we've reached a leaf
if isinstance(node, Leaf):
return node.predictions
# Decide whether to follow the true-branch or the false-branch.
# Compare the feature / value stored in the node,
# to the example we're considering.
if node.question.match(row):
return classify(row, node.true_branch)
else:
return classify(row, node.false_branch)
"""
Explanation: Test the model with test data
Write a function to classify the test data
End of explanation
"""
"""A nicer way to print the predictions at a leaf."""
def print_leaf(counts):
total = sum(counts.values()) * 1.0
probs = {}
for lbl in counts.keys():
probs[lbl] = str(int(counts[lbl] / total * 100)) + "%"
return probs
"""
Explanation: Print Prediction at Leaf Node
End of explanation
"""
print_leaf(classify(training_data[0],my_tree))
"""
Explanation: Check for example
End of explanation
"""
testing_data = [
['Green', 3, 'Apple'],
['Yellow', 4, 'Apple'],
['Red', 2, 'Grape'],
['Red', 1, 'Grape'],
['Yellow', 3, 'Lemon'],
]
"""
Explanation: Test Data
End of explanation
"""
for row in testing_data:
print ("Actual: %s. Predicted: %s" %
(row[-1], print_leaf(classify(row, my_tree))))
"""
Explanation: Evaluate
End of explanation
"""
|
fcollonval/coursera_data_visualization | PotentialModerator.ipynb | mit | # Magic command to insert the graph directly in the notebook
%matplotlib inline
# Load a useful Python libraries for handling data
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import scipy.stats as stats
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
# Read the data
data_filename = r'gapminder.csv'
data = pd.read_csv(data_filename, low_memory=False)
data = data.set_index('country')
"""
Explanation: Data Analysis Tools
Assignment: Testing a Potential Moderator
Following is the Python program I wrote to fulfill the last assignment of the Data Analysis Tools online course.
I used Jupyter Notebook as it is a pretty way to write code and present results.
Research question for this assignment
Using the Gapminder database, I found a significant correlation between the income per person (the explanatory variable) and the residential electricity consumption (the response variable). For this exercice, I would like to see if the urban rate is a potential moderator.
Data management
For the question I'm interested in, the countries for which data are missing will be discarded. As missing data in Gapminder database are replace directly by NaN no special data treatment is needed.
End of explanation
"""
display(Markdown("Number of countries: {}".format(len(data))))
display(Markdown("Number of variables: {}".format(len(data.columns))))
subdata2 = (data[['incomeperperson', 'urbanrate', 'relectricperperson']]
.assign(income=lambda x: pd.to_numeric(x['incomeperperson'], errors='coerce'),
urbanrate=lambda x: pd.to_numeric(x['urbanrate'], errors='coerce'),
electricity=lambda x: pd.to_numeric(x['relectricperperson'], errors='coerce'))
.dropna())
"""
Explanation: General information on the Gapminder data
End of explanation
"""
sns.distplot(subdata2.income)
plt.xlabel("Income per person (constant 2000 US$)")
_ = plt.title("Distribution of the income per person")
sns.distplot(subdata2.electricity)
plt.xlabel("Residential electricity consumption (kWh)")
_ = plt.title("Distribution of the residential electricity consumption")
sns.distplot(subdata2.urbanrate)
plt.xlabel("Urban rate (%)")
_ = plt.title("Urban rate distribution")
"""
Explanation: Data analysis
End of explanation
"""
sns.regplot(x='income', y='electricity', data=subdata2)
plt.xlabel('Income per person (2000 US$)')
plt.ylabel('Residential electricity consumption (kWh)')
_ = plt.title('Scatterplot for the association between the income and the residential electricity consumption')
correlation, pvalue = stats.pearsonr(subdata2['income'], subdata2['electricity'])
display(Markdown("The correlation coefficient is {:.3g} and the associated p-value is {:.3g}.".format(correlation, pvalue)))
display(Markdown("And the coefficient of determination is {:.3g}.".format(correlation**2)))
"""
Explanation: Correlation test
End of explanation
"""
def urban_group(row):
if row['urbanrate'] < 25.0:
return '0%<=..<25%'
elif row['urbanrate'] < 50.0:
return '25%<=..<50%'
elif row['urbanrate'] < 75.0:
return '50%<=..<75%'
else:
return '75%=<'
subdata3 = subdata2.copy()
subdata3['urban_group'] = pd.Categorical(subdata3.apply(lambda x: urban_group(x), axis=1))
summary = dict()
for group in subdata3.urban_group.cat.categories:
moderator_group = subdata3[subdata3['urban_group'] == group]
summary[group] = stats.pearsonr(moderator_group['income'], moderator_group['electricity'])
df = (pd.DataFrame(summary)
.rename(index={0:'Pearson r', 1:'p-value'}))
df2 = (df.stack()
.unstack(level=0))
df2.index.name = 'Urban rate'
df2
"""
Explanation: The Pearson test proves a significant positive relationship between income per person and residential electricity consumption as the p-value is below 0.05.
Moreover, the square of the correlation coefficient, i.e. the coefficient of determination, is 0.425. This means that we can predict 42.5% of the variability of residential electricity consumption knowing the income per person.
Potential moderator
Now comes the analysis of Pearson correlation between different urban rate group to see if the urban rate is a moderator on the relationship between income per person and residential electricity consumption.
End of explanation
"""
g = sns.FacetGrid(subdata3.reset_index(),
col='urban_group', hue='urban_group', col_wrap=2, size=4)
_ =g.map(sns.regplot, 'income', 'electricity')
"""
Explanation: For all urban rate categories, the p-value is below the threshold of 0.05. Therefore the urban rate does not moderate the relationship between income per person and residential electricity consumption. In other words the residential electricity consumption has a significant positive relationship in regard to the income per person whatever the urban rate in the country.
By plotting the scatter plots of the four groups, we can see that the correlation is indeed present for all of them. One corollary finding from the graphics below is the tendency of countries with higher income per person to have higher urban rate.
End of explanation
"""
|
georgetown-analytics/machine-learning | examples/kbelita/Clustering-RealEstateData-City.ipynb | mit | import pandas as pd
import csv
import os
import numpy as np
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
#from pandas.tools.plotting import scatter_matrix
from __future__ import print_function
import urllib.request
from sklearn.feature_selection import SelectFromModel
from sklearn.decomposition import PCA
from sklearn import preprocessing
matplotlib.style.use('ggplot')
from sklearn.cluster import KMeans
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import MinMaxScaler, StandardScaler,RobustScaler, Normalizer
import seaborn as sns
import matplotlib.pyplot as plt
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler, MinMaxScaler
%matplotlib inline
import matplotlib.cm as cm
from sklearn.cluster import AgglomerativeClustering
import warnings
from sklearn.metrics import silhouette_samples, silhouette_score
"""
Explanation: CLUSTERING WITH REAL ESTATE DATA
by Karen Belita
Clustering is a type of Unsupervised Machine Learning, which can determine relationships of unlabeled data.
This notebook will show how to get and prepare data for exploration of clustering methods.
This notebook will use scikit-learn for machine learning processes.
Data information
Zillow has real estate data at different geographic levels.
This notebook will explore some clustering methods with the Zillow Home Value Index data set at the City level.
The data can be found here.
Dependencies
End of explanation
"""
url = "http://files.zillowstatic.com/research/public/City/City_Zhvi_Summary_AllHomes.csv"
def csv_download():
path = os.getcwd() ## current location
file_name = path + "/" + "ZHVICity.csv"
    if os.path.isfile(file_name): # avoids downloading the file again if it already exists
pass
else:
f = urllib.request.urlopen(url)
data = f.read()
with open(file_name, "wb") as f:
f.write(data)
# return file_name to print location
csv_download()
"""
Explanation: Getting Data
Use urllib to download the csv file from the site.
End of explanation
"""
file_name = os.path.join(os.getcwd(), "ZHVICity.csv")
df = pd.read_csv(file_name)
"""
Explanation: Preparing Data
Use pandas to prepare data for machine learning.
End of explanation
"""
df.head()
"""
Explanation: Look at the structure of the data.
End of explanation
"""
#rename
df.rename(columns={"RegionName" : "City"},
inplace = True)
# add new column
df['City-State'] = df['City'] + "-" + df['State']
df.head(1)
##move new column to the front
cols = df.columns.tolist()
cols.insert(0, cols.pop(cols.index('City-State'))) #move to position 0
#drop columns
df = df.reindex(columns = cols)
df.drop(df.columns[[0,1,6,18]], axis = 1, inplace = True)
df.head(2)
"""
Explanation: Rename the column "RegionName" to "City" and create a column that combines the name of city with its state for readability purposes.
Also remove irrelevant columns.
End of explanation
"""
statedf = df.groupby("State")["Zhvi"].mean().sort_values(ascending = False)
statedf.head()
"""
Explanation: Pandas can also help with describing the data, which is useful for analysis.
For example, summarizing the data by state (average ZHVI per state)...
End of explanation
"""
featcol= [
'Zhvi','MoM','QoQ','YoY','5Year','10Year','PeakZHVI','PctFallFromPeak']
x = df[featcol]
"""
Explanation: Select columns that will be used as features for Machine Learning.
End of explanation
"""
#check number of rows
print ("original number of rows: %d" % (len(x.index)))
#remove rows
x1 = x.dropna()
print ("new number of rows: %d" % (len(x1.index)))
"""
Explanation: Dealing with missing values can be done by removing rows with missing data....
End of explanation
"""
## all Nans to white space
x = x.replace(np.nan, ' ', regex = True)
x = x.replace(np.nan, 'NaN', regex = True)
## convert to all floats
with warnings.catch_warnings():
warnings.simplefilter("ignore")
x = x.convert_objects(convert_numeric = True)
x = x.interpolate()
x.isnull().any().any() ## to check if any missing data
"""
Explanation: Or by imputing missing values with the interpolate function from pandas.
End of explanation
"""
# features into array
features = x.values
"""
Explanation: Prepare features by converting the dataframe into an array.
End of explanation
"""
min_max_scaler = MinMaxScaler()
fmm = min_max_scaler.fit_transform(features)
fmX = pd.DataFrame(fmm)
ax = sns.boxplot(data=fmX)
ax
"""
Explanation: To see the variance of the features, a boxplot (from Seaborn) can be used together with the MinMaxScaler (from scikit-learn) to visualize this.
End of explanation
"""
k = 4
cluster = KMeans(init='k-means++', n_clusters=k, n_init=12)
cluster.fit(features)
metrics.silhouette_score(features, cluster.labels_)
"""
Explanation: Unsupervised Machine Learning with Clustering
Clustering can label the unlabeled Real Estate Data.
Below is a summary of the parameters used by the clustering algorithms from scikit-learn:
K-Means: number of clusters
Affinity propagation: damping, sample preference
Mean-shift: bandwidth
Spectral Clustering: number of clusters
Ward hierarchical clustering: number of clusters
DBSCAN: neighborhood size
Gaussian mixtures: there are many to choose from
Birch: branching factor, threshold, optional global clusterer
Read more about Clustering with Scikit-Learn.
Start with K-means since it's simple.
Read more about K-Means.
The parameter that K-Means utilizes is the number of clusters or k.
Pick a number of clusters k for K-Means, and check its performance using the silhouette score, a metric that assesses how well each object fits within its own cluster compared to other clusters. A score closer to 1 is better.
End of explanation
"""
# adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html
range_n_clusters = range(8,11)
List1 = []
List2 = []
List3 = []
for n_clusters in range_n_clusters:
def bench_clustering(estimator, name, data):
estimator.fit(data)
v1 = name
v2 = n_clusters
v3 = metrics.silhouette_score(data, estimator.labels_)
List1.append(v1)
List2.append(v2)
List3.append(v3)
bench_clustering(KMeans(init='k-means++', n_clusters=n_clusters, n_init=12),
name="K-Means", data=features)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
bench_clustering(MiniBatchKMeans(init='k-means++', n_clusters=n_clusters, n_init=12,max_no_improvement=10, verbose=0,random_state=0),
name="MiniBatchKMeans", data=features)
bench_clustering(AgglomerativeClustering(n_clusters=n_clusters, linkage='ward'),
name="Ward", data=features)
bench_clustering(AgglomerativeClustering(n_clusters=n_clusters, linkage='average'),
name="Average", data=features)
    bench_clustering(AgglomerativeClustering(n_clusters=n_clusters, linkage='complete'),
name="Complete", data=features)
d = pd.DataFrame()
d['method'] = List1
d['k'] = List2
d['silhouette_score'] = List3
d = d.sort_values(['silhouette_score'], ascending = False)
print (d)
"""
Explanation: Model selection can be done by looping through a range of k.
This could also include looping through different types of clustering methods that use k, or the number of clusters, as a parameter.
The loop includes K-Means and the following clustering methods:
MiniBatch K-Means is the same as K-means but uses mini-batches to reduce computation time
Agglomerative Clustering performs hierarchical clustering and can be used with three different merge or linkage strategies:
Ward
Complete
Average
The result of the loop is a ranking of the silhouette scores of the different methods and k explored.
Pick a range of k to explore.
End of explanation
"""
#adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py
range_n_clusters = range(8,11)
for n_clusters in range_n_clusters:
cluster = KMeans(init='k-means++', n_clusters=n_clusters, n_init=12)
cluster.fit(features)
metrics.silhouette_score(features, cluster.labels_)
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(features) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = cluster
cluster_labels = clusterer.fit_predict(features)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(features, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(features, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
    # The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(features[:, 0], features[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors)
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1],
marker='o', c="white", alpha=1, s=200)
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50)
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
"""
Explanation: Silhouette plot analysis can also aid with model selection. The relationship of objects within each cluster can be assessed visually.
Pick a range of k to explore. (Below is used with K-Means, but can be used with the other clustering methods that use k as a parameter, as mentioned above.)
End of explanation
"""
#adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py
n_clusters = 10
f_scaled = scale(features)
reduced_data = PCA(n_components=2).fit_transform(f_scaled)
kmeans = KMeans(init='k-means++', n_clusters=n_clusters, n_init=10)
kmeans.fit(reduced_data)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each point in the mesh.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the Zillow ZHVI dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
"""
Explanation: Visualizing Clusters -
The example below shows the clusters and their centroids.
Seeing the shape of the clusters and the location of the centroids can help with further analysis.
Pick a k to explore. (Below is used with K-Means, but can be used with the other clustering methods that use k as a parameter, as mentioned above.)
End of explanation
"""
|
yashdeeph709/Algorithms | PythonBootCamp/Complete-Python-Bootcamp-master/Milestone Project 1- Walkthrough Steps Workbook.ipynb | apache-2.0 | # For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
"""
Explanation: Milestone Project 1: Walk-through Steps Workbook
Below is a set of steps for you to follow to try to create the Tic Tac Toe Milestone Project game!
End of explanation
"""
from IPython.display import clear_output
def display_board(board):
pass
"""
Explanation: Step 1: Write a function that can print out a board. Set up your board as a list, where each index 1-9 corresponds with a number on a number pad, so you get a 3 by 3 board representation.
End of explanation
"""
def player_input():
pass
"""
Explanation: Step 2: Write a function that can take in a player input and assign their marker as 'X' or 'O'. Think about using while loops to continually ask until you get a correct answer.
End of explanation
"""
def place_marker(board, marker, position):
pass
"""
Explanation: Step 3: Write a function that takes in the board list object, a marker ('X' or 'O'), and a desired position (number 1-9), and assigns it to the board.
End of explanation
"""
def win_check(board,mark):
pass
"""
Explanation: Step 4: Write a function that takes in a board and a mark (X or O) and then checks to see if that mark has won.
End of explanation
"""
import random
def choose_first():
pass
"""
Explanation: Step 5: Write a function that uses the random module to randomly decide which player goes first. You may want to look up random.randint(). Return a string of which player went first.
End of explanation
"""
def space_check(board, position):
pass
"""
Explanation: Step 6: Write a function that returns a boolean indicating whether a space on the board is freely available.
End of explanation
"""
def full_board_check(board):
pass
"""
Explanation: Step 7: Write a function that checks if the board is full and returns a boolean value. True if full, False otherwise.
End of explanation
"""
def player_choice(board):
pass
"""
Explanation: Step 8: Write a function that asks for a player's next position (as a number 1-9) and then uses the function from step 6 to check if it is a free position. If it is, then return the position for later use.
End of explanation
"""
def replay():
pass
"""
Explanation: Step 9: Write a function that asks the player if they want to play again and returns a boolean True if they do want to play again.
End of explanation
"""
print('Welcome to Tic Tac Toe!')
#while True:
# Set the game up here
#pass
#while game_on:
#Player 1 Turn
# Player2's turn.
#pass
#if not replay():
#break
"""
Explanation: Step 10: Here comes the hard part! Use while loops and the functions you've made to run the game!
End of explanation
"""
|
rvperry/phys202-2015-work | assignments/assignment09/IntegrationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
"""
Explanation: Integration Exercise 1
Imports
End of explanation
"""
def trapz(f, a, b, N):
"""Integrate the function f(x) over the range [a,b] with N points."""
h=(b-a)/N
A=0
for i in range(N):
A+=.5*h*(f(a+i*h)+f(a+(i+1)*h))
return A
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
I,J
"""
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
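Putting it together (formula added for reference), the approximation is the sum of the trapezoid areas:
$$ I(a,b) \approx \sum_{i=0}^{N-1} \frac{h}{2}\left[f(x_i) + f(x_{i+1})\right], \qquad x_i = a + i\,h $$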
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
"""
F=integrate.quad(f,0,1)
G=integrate.quad(g,0,np.pi)
print(F,I)
print(G,J)
assert True # leave this cell to grade the previous one
"""
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation
"""
|
OSGeo-live/CesiumWidget | Examples/CesiumWidget Example with CZML library.ipynb | apache-2.0 | from CesiumWidget import CesiumWidget
import czml
"""
Explanation: CesiumWidget together with CZML library
This notebook shows how to use the CesiumWidget together with the CZML library from https://github.com/cleder/czml
If the CesiumWidget is installed correctly, Cesium should be accessable at:
http://localhost:8888/nbextensions/CesiumWidget/cesium/index.html
End of explanation
"""
# Initialize a document
doc = czml.CZML()
# Create and append the document packet
packet1 = czml.CZMLPacket(id='document',version='1.0')
doc.packets.append(packet1)
p3 = czml.CZMLPacket(id='test')
p3.position = czml.Position(cartographicDegrees = [18.07,59.33, 20])
point = czml.Point(pixelSize=20, show=True)
point.color = czml.Color(rgba=(223, 150, 47, 128))
point.show = True
p3.point = point
l = czml.Label(show=True, text='Stockholm')
l.scale = 0.5
p3.label = l
doc.packets.append(p3)
"""
Explanation: Some data for the viewer to display
End of explanation
"""
cesiumExample = CesiumWidget(width="100%", czml=tuple(doc.data()))
"""
Explanation: Create widget object
End of explanation
"""
cesiumExample
"""
Explanation: Display the widget:
End of explanation
"""
|
VandyAstroML/Vanderbilt_Computational_Bootcamp | notebooks/Week_12/12_Pandas_II_Advanced_Data_Handling.ipynb | mit | # Importing modules
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
sns.set_context("notebook")
"""
Explanation: <span style="color:blue">Week 12 - Pandas II</span>
<span style="color:red">Today's Agenda</span>
Useful functions when using Pandas
Review
See Week 8 for a review on Pandas
RESOLVE Dataset
Today we'll be using a dataset by the Resolve Survey. RESOLVE is a volume-limited census of stellar, gas, and dynamical mass as well as star formation and merging within >50,000 cubic Mpc of the nearby cosmic web, reaching down to the dwarf galaxy regime and up to structures on tens of Mpc scales such as filaments, walls, and voids.
End of explanation
"""
## Reading in data
# Column Names
colnames = ['RA','DEC','CZobs','Mr','HaloID','logMHalo',
'NGalH','FLAG','CZreal','DISTC','Vptot',
'Vptang','Morph','logMstar','rmag','umag',
'FSMGR','GalMatchFlag','ur_col','MhiMass','GroupID',
'GroupNGals','GroupRproj','GroupCZdisp',
'GroupLogMHalo','GroupGalType']
# We use `pd.read_csv` to read CSV files
# The argument in `sep`, i.e. "\s+" tells pandas to read white spaces
RS = pd.read_csv('./data/Resolve_Catalogue.dat',
sep='\s+',
skiprows=2,
names=colnames)
RS.head()
RS.shape
"""
Explanation: First, we need to read in the data from RESOLVE.
End of explanation
"""
sns.jointplot("logMHalo","logMstar",data=RS, color='green')
g = sns.lmplot(x="logMHalo", y="logMstar", hue="FLAG", data=RS, size=8)
sns.lmplot(x="logMHalo", y="GroupLogMHalo", hue="FLAG", data=RS, size=8)
"""
Explanation: Let's show some plots for this dataset
End of explanation
"""
RS1 = RS.loc[ (RS.logMHalo >= 11) & (RS.logMHalo <= 12)]
print(RS1.shape)
# Resetting Indices
RS1.reset_index(inplace=True)
RS1.head()
"""
Explanation: Now that we have looked at some of the data, let's calculate some
statistics for it.
Let's first create a subsample of the dataset
End of explanation
"""
RS1.loc[:,'RArad' ] = RS1.RA.map( lambda x: np.radians(x))
RS1.loc[:,'DECrad'] = RS1.DEC.map(lambda x: np.radians(x))
RS1[['RA','RArad','DEC','DECrad']].head()
"""
Explanation: Map
You can use the map function to apply some function or a
mask to your data.
Let's say I want to convert RA and DEC to radians.
End of explanation
"""
RS1.loc[:,'RArad'].apply(np.degrees)
RS1['RArad'].head()
"""
Explanation: apply
The apply method applies a function along an axis
of the DataFrame.
End of explanation
"""
# Obtaining unique list of `HaloID`
print(RS1['HaloID'].shape)
RS1['HaloID'].head()
"""
Explanation: Unique
The unique function will return unique entries for a specified column.
This is analogous to the np.unique function from NumPy.
End of explanation
"""
HaloID_unq = RS1.HaloID.unique()
HaloID_unq.shape
HaloID_unq
"""
Explanation: Number of unique elements in HaloID
End of explanation
"""
RS1.head()
"""
Explanation: In-place Plotting
You can also plot distributions, x-y plots, etc using Pandas.
Let's plot the distribution of logMstar.
See more here: http://pandas.pydata.org/pandas-docs/stable/visualization.html
Group by
This is a really useful function in Pandas.
It allows you to split your data set into groups and returns them
as a GroupBy object, which behaves much like a dictionary of group keys to sub-DataFrames.
End of explanation
"""
Groups_dict = RS1.groupby('GroupID')
Groups_dict
"""
Explanation: Let's group data by the GroupID
End of explanation
"""
Groups_dict.sum()
"""
Explanation: You can calculate the sum of all columns, even though for some columns
the result may not be meaningful.
End of explanation
"""
# Group with the most number of Galaxies
GroupID_max = int(RS1.GroupID.loc[RS1.GroupNGals==RS1.GroupNGals.max()].unique())
# Getting the stats for that Group
Groups_dict.get_group(GroupID_max)
"""
Explanation: You can also get back a particular group
End of explanation
"""
Groups_dict.get_group(GroupID_max).describe()
"""
Explanation: You can compute different statistics for the different columns
End of explanation
"""
RS1.cov()
"""
Explanation: Covariance
Or even calculate the covariance for the columns
End of explanation
"""
RS1.corr()
"""
Explanation: Correlation
The corr method provides the correlation between columns
End of explanation
"""
df = pd.DataFrame({'int_col' : [1,2,6,8,-1], 'float_col' : [0.1, 0.2,0.2,10.1,None], 'str_col' : ['a','b',None,'c','a']})
df
"""
Explanation: Handling Missing Values
You can handle missing values in DataFrames without a problem.
Check "Handling of missing values" to learn more about this.
Drop missing values
For this example, we'll use a new DataFrame
End of explanation
"""
df.dropna()
"""
Explanation: We can use the dropna function to drop rows containing NaN values.
End of explanation
"""
df3 = df.copy()
mean = df3['float_col'].mean()
df3
df3['float_col'] = df3['float_col'].fillna(mean)
df3
"""
Explanation: Fill missing values
The fillna method can be used to fill missing data (NaN).
This example will replace the missing values with the mean of the available values.
End of explanation
"""
|
GoogleCloudPlatform/ml-design-patterns | 03_problem_representation/ensemble_methods.ipynb | apache-2.0 | import os
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models, Model
df = pd.read_csv("./data/babyweight_train.csv")
df.head()
"""
Explanation: Ensemble Design Pattern
Stacking is an Ensemble method which combines the outputs of a collection of models to make a prediction. The initial models, which are typically of different model types, are trained to completion on the full training dataset. Then, a secondary meta-model is trained using the initial model outputs as features. This second meta-model learns how to best combine the outcomes of the initial models to decrease the training error and can be any type of machine learning model.
Create a Stacking Ensemble model
In this notebook, we'll create an Ensemble of three neural network models and train on the natality dataset.
End of explanation
"""
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks",
"mother_race"]
# Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["0"]]
def get_dataset(file_path):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=15, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
select_columns=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=1,
ignore_errors=True)
return dataset
train_data = get_dataset("./data/babyweight_train.csv")
test_data = get_dataset("./data/babyweight_eval.csv")
"""
Explanation: Create our tf.data input pipeline
End of explanation
"""
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print("{:20s}: {}".format(key,value.numpy()))
show_batch(train_data)
"""
Explanation: Check that our tf.data dataset returns the expected batches:
End of explanation
"""
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"],
'is_male' : ["True", "False", "Unknown"],
'mother_race': [str(_) for _ in df.mother_race.unique()]
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(fc.indicator_column(cat_col))
"""
Explanation: Create our feature columns
End of explanation
"""
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality", "mother_race"]})
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# model_1
model1_h1 = layers.Dense(50, activation="relu")(dnn_inputs)
model1_h2 = layers.Dense(30, activation="relu")(model1_h1)
model1_output = layers.Dense(1, activation="relu")(model1_h2)
model_1 = tf.keras.models.Model(inputs=inputs, outputs=model1_output, name="model_1")
# model_2
model2_h1 = layers.Dense(64, activation="relu")(dnn_inputs)
model2_h2 = layers.Dense(32, activation="relu")(model2_h1)
model2_output = layers.Dense(1, activation="relu")(model2_h2)
model_2 = tf.keras.models.Model(inputs=inputs, outputs=model2_output, name="model_2")
# model_3
model3_h1 = layers.Dense(32, activation="relu")(dnn_inputs)
model3_output = layers.Dense(1, activation="relu")(model3_h1)
model_3 = tf.keras.models.Model(inputs=inputs, outputs=model3_output, name="model_3")
"""
Explanation: Create our ensemble models
We'll train three different neural network models.
End of explanation
"""
# fit model on dataset
def fit_model(model):
# define model
model.compile(
loss=tf.keras.losses.MeanSquaredError(),
optimizer='adam', metrics=['mse'])
# fit model
model.fit(train_data.shuffle(500), epochs=1)
# evaluate model
test_loss, test_mse = model.evaluate(test_data)
print('\n\n{}:\nTest Loss {}, Test RMSE {}'.format(
model.name, test_loss, test_mse**0.5))
return model
# create directory for models
try:
os.makedirs('models')
except:
print("directory already exists")
"""
Explanation: The function below trains a model and reports the MSE and RMSE on the test set.
End of explanation
"""
members = [model_1, model_2, model_3]
# fit and save models
n_members = len(members)
for i in range(n_members):
# fit model
model = fit_model(members[i])
# save model
filename = 'models/model_' + str(i + 1) + '.h5'
model.save(filename, save_format='tf')
print('Saved {}\n'.format(filename))
"""
Explanation: Next, we'll train each neural network and save the trained model to file.
End of explanation
"""
# load trained models from file
def load_models(n_models):
all_models = []
for i in range(n_models):
filename = 'models/model_' + str(i + 1) + '.h5'
# load model from file
model = models.load_model(filename)
# add to list of members
all_models.append(model)
print('>loaded %s' % filename)
return all_models
# load all models
members = load_models(n_members)
print('Loaded %d models' % len(members))
"""
Explanation: The RMSE varies across the three neural networks.
Load the trained models and create the stacked ensemble model.
The function below loads the trained models and returns them in a list.
End of explanation
"""
# update all layers in all models to not be trainable
for i in range(n_members):
model = members[i]
for layer in model.layers:
# make not trainable
layer.trainable = False
# rename to avoid 'unique layer name' issue
layer._name = 'ensemble_' + str(i+1) + '_' + layer.name
"""
Explanation: We will need to freeze the layers of the pre-trained models since we won't train these models any further. The Stacked Ensemble will be trainable and will learn how to best combine the results of the ensemble members.
End of explanation
"""
member_inputs = [model.input for model in members]
# concatenate merge output from each model
member_outputs = [model.output for model in members]
merge = layers.concatenate(member_outputs)
h1 = layers.Dense(30, activation='relu')(merge)
h2 = layers.Dense(20, activation='relu')(h1)
h3 = layers.Dense(10, activation='relu')(h2)
h4 = layers.Dense(5, activation='relu')(h2)
ensemble_output = layers.Dense(1, activation='relu')(h3)
ensemble_model = Model(inputs=member_inputs, outputs=ensemble_output)
# plot graph of ensemble
tf.keras.utils.plot_model(ensemble_model, show_shapes=True, to_file='ensemble_graph.png')
# compile
ensemble_model.compile(loss='mse', optimizer='adam', metrics=['mse'])
"""
Explanation: Lastly, we'll create our Stacked Ensemble model. It is also a neural network. We'll use the Functional Keras API.
End of explanation
"""
FEATURES = ["is_male", "mother_age", "plurality",
"gestation_weeks", "mother_race"]
# stack input features for our tf.dataset
def stack_features(features, label):
for feature in FEATURES:
for i in range(n_members):
features['ensemble_' + str(i+1) + '_' + feature] = features[feature]
return features, label
ensemble_data = train_data.map(stack_features).repeat(1)
ensemble_model.fit(ensemble_data.shuffle(500), epochs=1)
"""
Explanation: We need to adapt our tf.data pipeline to accommodate the multiple inputs for our Stacked Ensemble model.
End of explanation
"""
val_loss, val_mse = ensemble_model.evaluate(test_data.map(stack_features))
print("Validation RMSE: {}".format(val_mse**0.5))
"""
Explanation: Lastly, we will evaluate our Stacked Ensemble against the test set.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/adv_logistic_reg_TF2.0.ipynb | apache-2.0 | import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: Advanced Logistic Regression in TensorFlow 2.0
Learning Objectives
Load a CSV file using Pandas
Create train, validation, and test sets
Define and train a model using Keras (including setting class weights)
Evaluate the model using various metrics (including precision and recall)
Try common techniques for dealing with imbalanced data like:
Class weighting and
Oversampling
Introduction
This lab shows how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use Keras to define the model and class weights to help the model learn from the imbalanced data.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Start by importing the necessary libraries for this lab.
End of explanation
"""
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
"""
Explanation: In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
End of explanation
"""
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
"""
Explanation: Data processing and exploration
Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available here and the page of the DefeatFraud project
End of explanation
"""
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
"""
Explanation: Now, let's view the statistics of the raw dataframe.
End of explanation
"""
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
"""
Explanation: Examine the class label imbalance
Let's look at the dataset imbalance:
End of explanation
"""
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
"""
Explanation: This shows the small fraction of positive samples.
Clean, split and normalize the data
The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.
End of explanation
"""
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = #TODO: Your code goes here.
train_df, val_df = #TODO: Your code goes here.
# Form np arrays of labels and features.
train_labels = #TODO: Your code goes here.
bool_train_labels = #TODO: Your code goes here.
val_labels = #TODO: Your code goes here.
test_labels = #TODO: Your code goes here.
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
"""
Explanation: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
End of explanation
"""
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
"""
Explanation: Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The StandardScaler is only fit using the train_features to be sure the model is not peeking at the validation or test sets.
End of explanation
"""
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
"""
Explanation: Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
Do these distributions make sense?
Yes. You've normalized the input and these are mostly concentrated in the +/- 2 range.
Can you see the difference between the distributions?
Yes, the positive examples contain a much higher rate of extreme values.
End of explanation
"""
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
model = keras.Sequential(
#TODO: Your code goes here.
#TODO: Your code goes here.
#TODO: Your code goes here.
#TODO: Your code goes here.
)
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
"""
Explanation: Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
End of explanation
"""
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
"""
Explanation: Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
False negatives and false positives are samples that were incorrectly classified
True negatives and true positives are samples that were correctly classified
Accuracy is the percentage of examples correctly classified
$\frac{\text{true samples}}{\text{total samples}}$
Precision is the percentage of predicted positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false positives}}$
Recall is the percentage of actual positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* True vs. False and Positive vs. Negative
* Accuracy
* Precision and Recall
* ROC-AUC
Baseline model
Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, batches would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
End of explanation
"""
model.predict(train_features[:10])
"""
Explanation: Test run the model:
End of explanation
"""
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
"""
Explanation: Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: A Recipe for Training Neural Networks: "init well"). This can help with initial convergence.
With the default bias initialization the loss should be about math.log(2) = 0.69314
End of explanation
"""
initial_bias = np.log([pos/neg])
initial_bias
"""
Explanation: The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
End of explanation
"""
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
"""
Explanation: Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: pos/total = 0.0018
End of explanation
"""
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
"""
Explanation: With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
End of explanation
"""
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
"""
Explanation: This initial loss is about 50 times less than it would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
End of explanation
"""
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
"""
Explanation: Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
End of explanation
"""
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
"""
Explanation: The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
Train the model
End of explanation
"""
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
"""
Explanation: Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
End of explanation
"""
# TODO 1
train_predictions_baseline = #TODO: Your code goes here.
test_predictions_baseline = #TODO: Your code goes here.
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
"""
Explanation: Note: That the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
Evaluate metrics
You can use a confusion matrix to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
End of explanation
"""
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
"""
Explanation: Evaluate your model on the test dataset and display the results for the metrics you created above.
End of explanation
"""
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
"""
Explanation: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Plot the ROC
Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
End of explanation
"""
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = #TODO: Your code goes here.
weight_for_1 = #TODO: Your code goes here.
class_weight = #TODO: Your code goes here.
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
"""
Explanation: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Class weights
Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
End of explanation
"""
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
"""
Explanation: Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using class_weights changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like optimizers.SGD, may fail. The optimizer used here, optimizers.Adam, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
End of explanation
"""
plot_metrics(weighted_history)
"""
Explanation: Check training history
End of explanation
"""
# TODO 1
train_predictions_weighted = #TODO: Your code goes here.
test_predictions_weighted = #TODO: Your code goes here.
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
"""
Explanation: Evaluate metrics
End of explanation
"""
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
"""
Explanation: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application.
Plot the ROC
End of explanation
"""
# TODO 1
pos_features = #TODO: Your code goes here.
neg_features = train_features[~bool_train_labels]
pos_labels = #TODO: Your code goes here.
neg_labels = #TODO: Your code goes here.
"""
Explanation: Oversampling
Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
End of explanation
"""
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
"""
Explanation: Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
End of explanation
"""
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
"""
Explanation: Using tf.data
If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.
End of explanation
"""
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
"""
Explanation: Each dataset provides (feature, label) pairs:
End of explanation
"""
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
"""
Explanation: Merge the two together using experimental.sample_from_datasets:
End of explanation
"""
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
"""
Explanation: To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
End of explanation
"""
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
"""
Explanation: Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
End of explanation
"""
plot_metrics(resampled_history )
"""
Explanation: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
End of explanation
"""
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
"""
Explanation: Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.
End of explanation
"""
plot_metrics(resampled_history)
"""
Explanation: Re-check training history
End of explanation
"""
# TODO 1
train_predictions_resampled = #TODO: Your code goes here.
test_predictions_resampled = #TODO: Your code goes here.
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
"""
Explanation: Evaluate metrics
End of explanation
"""
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
"""
Explanation: Plot the ROC
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bcc/cmip6/models/bcc-esm1/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'bcc-esm1', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: BCC
Source ID: BCC-ESM1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/a09c964ce37825f750704113fa863276/plot_mne_inverse_envelope_correlation_volume.ipynb | bsd-3-clause | # Authors: Eric Larson <larson.eric.d@gmail.com>
# Sheraz Khan <sheraz@khansheraz.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import mne
from mne.beamformer import make_lcmv, apply_lcmv_epochs
from mne.connectivity import envelope_correlation
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
data_path = mne.datasets.brainstorm.bst_resting.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'bst_resting'
trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif')
bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif')
raw_fname = op.join(data_path, 'MEG', 'bst_resting',
'subj002_spontaneous_20111102_01_AUX.ds')
crop_to = 60.
"""
Explanation: Compute envelope correlations in volume source space
Compute envelope correlations of orthogonalized activity [1] [2] using resting state CTF data in a volume source space.
End of explanation
"""
raw = mne.io.read_raw_ctf(raw_fname, verbose='error')
raw.crop(0, crop_to).pick_types(meg=True, eeg=False).load_data().resample(80)
raw.apply_gradient_compensation(3)
projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2)
projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407')
raw.info['projs'] += projs_ecg
raw.info['projs'] += projs_eog
raw.apply_proj()
cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest
"""
Explanation: Here we do some things in the name of speed, such as crop (which will
hurt SNR) and downsample. Then we compute SSP projectors and apply them.
End of explanation
"""
raw.filter(14, 30)
events = mne.make_fixed_length_events(raw, duration=5.)
epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5.,
baseline=None, reject=dict(mag=8e-13), preload=True)
del raw
"""
Explanation: Now we band-pass filter our data and create epochs.
End of explanation
"""
# This source space is really far too coarse, but we do this for speed
# considerations here
pos = 15. # 1.5 cm is very broad, done here for speed!
src = mne.setup_volume_source_space('bst_resting', pos, bem=bem,
subjects_dir=subjects_dir, verbose=True)
fwd = mne.make_forward_solution(epochs.info, trans, src, bem)
data_cov = mne.compute_covariance(epochs)
filters = make_lcmv(epochs.info, fwd, data_cov, 0.05, cov,
pick_ori='max-power', weight_norm='nai')
del fwd
"""
Explanation: Compute the forward and inverse
End of explanation
"""
epochs.apply_hilbert() # faster to do in sensor space
stcs = apply_lcmv_epochs(epochs, filters, return_generator=True)
corr = envelope_correlation(stcs, verbose=True)
"""
Explanation: Compute label time series and do envelope correlation
End of explanation
"""
degree = mne.connectivity.degree(corr, 0.15)
stc = mne.VolSourceEstimate(degree, [src[0]['vertno']], 0, 1, 'bst_resting')
brain = stc.plot(
src, clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot',
subjects_dir=subjects_dir, mode='glass_brain')
"""
Explanation: Compute the degree and plot it
End of explanation
"""
|
atcemgil/notes | DynamicalSystems.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pylab as plt
N = 100
T = 100
a = 0.9
xm = 0.9
sP = np.sqrt(0.001)
sR = np.sqrt(0.01)
x1 = np.zeros(N)
x2 = np.zeros(N)
y = np.zeros(N)
for i in range(N):
if i==0:
x1[0] = xm
x2[0] = 0
else:
x1[i] = xm + a*x1[i-1] + np.random.normal(0, sP)
x2[i] = x2[i-1] + x1[i-1]
y[i] = np.cos(2*np.pi*x2[i]/T) + np.random.normal(0, sR)
plt.figure()
plt.plot(x1, label='$x_1$ (frequency state)')
plt.plot(x2, label='$x_2$ (phase state)')
plt.legend()
plt.figure()
plt.plot(y)
plt.show()
"""
Explanation: Dynamical systems
A (discrete time) dynamical system describes the evolution of the state of a system and
the observations that can be obtained from the state. The general form is
\begin{eqnarray}
x_0 & \sim & \pi(x_0) \
x_t & = & f(x_{t-1}, \epsilon_t) \
y_t & = & g(x_{t}, \nu_t)
\end{eqnarray}
Here, $f$ and $g$ are transition and observation functions. The variables
$\epsilon_t$ and $\nu_t$ are assumed to be unknown random noise components with a known distribution. The initial state, $x_0$, can be either known exactly or at least and initial state distribution density $\pi$ is known. The model describes the relation between observations $y_t$ and states $x_t$.
Frequency modulated sinusoidal signal
\begin{eqnarray}
\epsilon_t & \sim & \mathcal{N}(0, P) \
x_{1,t} & = & \mu + a x_{1,t-1} + \epsilon_t \
x_{2,t} & = & x_{2,t-1} + x_{1,t-1} \
\nu_t & \sim & \mathcal{N}(0, R) \
y_t & = & \cos(2\pi x_{2,t}) + \nu_t
\end{eqnarray}
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
A = np.array([[1,0],[1,1],[0,1]])
B = np.array([[2,0],[0,2],[0,0]])
S = B-A
N = S.shape[1]
M = S.shape[0]
STEPS = 50000
k = np.array([0.8,0.005, 0.3])
X = np.zeros((N,STEPS))
x = np.array([100,100])
T = np.zeros(STEPS)
t = 0
X[:,0] = x  # record the initial state
for i in range(STEPS-1):
rho = k*np.array([x[0], x[0]*x[1], x[1]])
srho = np.sum(rho)
if srho == 0:
break
idx = np.random.choice(M, p=rho/srho)
dt = np.random.exponential(scale=1./srho)
x = x + S[idx,:]
t = t + dt
X[:, i+1] = x
T[i+1] = t
plt.figure(figsize=(10,5))
plt.plot(T,X[0,:], '.b')
plt.plot(T,X[1,:], '.r')
plt.legend([u'Smiley',u'Zombie'])
plt.show()
"""
Explanation: Stochastic Kinetic Model
Stochastic Kinetic Model is a general modelling technique to describe the interactions of a set of objects such as molecules, individuals or items. This class of models are particularly useful in modeling queuing systems, production plants, chemical, ecological, biological systems or biological cell cycles at a sufficiently detailed level. It is a good example of a a dynamical model that displays quite interesting and complex behaviour.
The model is best motivated first with a specific example, known as the Lotka-Volterra predator-prey model:
A Predator Prey Model (Lotka-Volterra)
Consider a population of two species, named as smiley 😊 and zombie 👹. Our dynamical model will describe the evolution of the number of individuals in this entire population. We define $3$ different event types:
Event 1: Reproduction
The smiley, denoted by $X_1$, reproduces by division so one smiley becomes two smileys after a reproduction event.
<h1><center>
😊 $\rightarrow$ 🙂 😊
</center></h1>
In mathematical notation, we denote this event as
\begin{eqnarray}
X_1 & \xrightarrow{k_1} 2 X_1
\end{eqnarray}
Here, $k_1$ denotes the rate constant, the rate at which a single single smiley is reproducing according to the exponential distribution. When there are $x_1$ smileys, each reproducing with rate $k_1$, the rate at which a reproduction event occurs is simply
\begin{eqnarray}
h_1(x, k_1) & = & k_1 x_1
\end{eqnarray}
The rate $h_1$ is the rate of a reproduction event, increasing proportionally to the number of smileys.
Event 2: Consumption
The predatory species, the zombies, denoted as $X_2$, transform the smileys into zombies. So one zombie 'consumes' one smiley to create a new zombie.
<h1><center>
😥 👹 $\rightarrow$ 👹 👹
</center></h1>
The consumption event is denoted as
\begin{eqnarray}
X_1 + X_2 & \xrightarrow{k_2} 2 X_2
\end{eqnarray}
Here, $k_2$ denotes the rate constant, the rate at which a zombie and a smiley meet, and the zombie transforms the smiley into a new zombie. When there are $x_1$ smileys and $x_2$ zombies, there are in total $x_1 x_2$ possible meeting events, With each meeting event occurring at rate $k_2$, the rate at which a consumption event occurs is simply
\begin{eqnarray}
h_2(x, k_2) & = & k_2 x_1 x_2
\end{eqnarray}
The rate $h_2$ is the rate of a consumption event. There are more consumptions if there are more zombies or smileys.
Event 3: Death
Finally, in this story, unlike Hollywood blockbusters, the zombies are mortal and they decease after a certain random time.
<h1><center>
👹 $\rightarrow $ ☠️
</center></h1>
This is denoted as $X_2$ disappearing from the scene.
\begin{eqnarray}
X_2 & \xrightarrow{k_3} \emptyset
\end{eqnarray}
A zombie death event occurs, by a similar argument as reproduction, at rate
\begin{eqnarray}
h_3(x, k_3) & = & k_3 x_2
\end{eqnarray}
Model
All equations can be written
\begin{eqnarray}
X_1 & \xrightarrow{k_1} 2 X_1 & \hspace{3cm}\text{Reproduction}\
X_1 + X_2 & \xrightarrow{k_2} 2 X_2 & \hspace{3cm}\text{Consumption} \
X_2 & \xrightarrow{k_3} \emptyset & \hspace{3cm} \text{Death}
\end{eqnarray}
More compactly, in matrix form we can write:
\begin{eqnarray}
\left(
\begin{array}{cc}
1 & 0 \
1 & 1 \
0 & 1
\end{array}
\right)
\left(
\begin{array}{cc}
X_1 \
X_2
\end{array}
\right) \rightarrow
\left(
\begin{array}{cc}
2 & 0 \
0 & 2 \
0 & 0
\end{array}
\right)
\left(
\begin{array}{cc}
X_1 \
X_2
\end{array}
\right)
\end{eqnarray}
The rate constants $k_1, k_2$ and $k_3$ denote the rate at which a single event is occurring according to the exponential distribution.
Given the current state $x$, any of the objects can trigger the next event, and the total rates of the three event types are
\begin{eqnarray}
h_1(x, k_1) & = & k_1 x_1 \
h_2(x, k_2) & = & k_2 x_1 x_2 \
h_3(x, k_3) & = & k_3 x_2
\end{eqnarray}
The dynamical model is conditioned on the type of the next event, denoted by $r(j)$
\begin{eqnarray}
Z(j) & = & \sum_i h_i(x(j-1), k_i) \
\pi_i(j) & = & \frac{h_i(x(j-1), k_i) }{Z(j)} \
r(j) & \sim & \mathcal{C}(r; \pi(j)) \
\Delta(j) & \sim & \mathcal{E}(1/Z(j)) \
t(j) & = & t(j-1) + \Delta(j) \
x(j) & = & x(j-1) + S(r(j))
\end{eqnarray}
End of explanation
"""
plt.figure(figsize=(10,5))
plt.plot(X[0,:],X[1,:], '.')
plt.xlabel('# of Smileys')
plt.ylabel('# of Zombies')
plt.axis('square')
plt.show()
"""
Explanation: State Space Representation
End of explanation
"""
%matplotlib inline
import networkx as nx
import numpy as np
import matplotlib.pylab as plt
from itertools import product
# Maximum number of smileys or zombies
N = 20
#A = np.array([[1,0],[1,1],[0,1]])
#B = np.array([[2,0],[0,2],[0,0]])
#S = B-A
k = np.array([0.6,0.05, 0.3])
G = nx.DiGraph()
pos = [u for u in product(range(N),range(N))]
idx = [u[0]*N+u[1] for u in pos]
G.add_nodes_from(idx)
edge_colors = []
edges = []
for y,x in product(range(N),range(N)):
source = (x,y)
rho = k*np.array([source[0], source[0]*source[1], source[1]])
srho = np.sum(rho)
if srho==0:
srho = 1.
if x<N-1: # Birth
target = (x+1,y)
edges.append((source[0]*N+source[1], target[0]*N+target[1]))
edge_colors.append(rho[0]/srho)
if y<N-1 and x>0: # Consumption
target = (x-1,y+1)
edges.append((source[0]*N+source[1], target[0]*N+target[1]))
edge_colors.append(rho[1]/srho)
if y>0: # Death
target = (x,y-1)
edges.append((source[0]*N+source[1], target[0]*N+target[1]))
edge_colors.append(rho[2]/srho)
G.add_edges_from(edges)
col_dict = {u: c for u,c in zip(edges, edge_colors)}
cols = [col_dict[u] for u in G.edges() ]
plt.figure(figsize=(9,9))
nx.draw(G, pos, arrows=False, width=2, node_size=20, node_color="white", edge_vmin=0,edge_vmax=0.7, edge_color=cols, edge_cmap=plt.cm.gray_r )
plt.xlabel('# of smileys')
plt.ylabel('# of zombies')
#plt.gca().set_visible('on')
plt.show()
"""
Explanation: In this model, the state space can be visualized as a 2-D lattice of nonnegative integers, where each point $(x_1, x_2)$ denotes the number of smileys versus the zombies.
The model simulates a Markov chain on a directed graph where possible transitions are shown as edges where the edge color shade is proportional to the transition probability (darker means higher probability).
The edges are directed, the arrow tips are not shown. There are three types of edges, each corresponding to one event type:
$\rightarrow$ Birth
$\nwarrow$ Consumption
$\downarrow$ Death
End of explanation
"""
def simulate_skm(A, B, k, x0, STEPS=1000):
S = B-A
N = S.shape[1]
M = S.shape[0]
X = np.zeros((N,STEPS))
x = x0
T = np.zeros(STEPS)
t = 0
X[:,0] = x
for i in range(STEPS-1):
# rho = k*np.array([x[0]*x[2], x[0], x[0]*x[1], x[1]])
rho = [k[j]*np.prod(x**A[j,:]) for j in range(M)]
srho = np.sum(rho)
if srho == 0:
break
idx = np.random.choice(M, p=rho/srho)
dt = np.random.exponential(scale=1./srho)
x = x + S[idx,:]
t = t + dt
X[:, i+1] = x
T[i+1] = t
return X,T
"""
Explanation: Generic code to simulate an SKM
End of explanation
"""
#%matplotlib nbagg
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
A = np.array([[1,1],[1,0]])
B = np.array([[2,0],[0,1]])
k = np.array([0.02,0.3])
x0 = np.array([10,40])
X,T = simulate_skm(A,B,k,x0,STEPS=10000)
plt.figure(figsize=(10,5))
plt.plot(T,X[0,:], '.b',ms=2)
plt.plot(T,X[1,:], '.g',ms=2)
plt.legend([u'Rabbit', u'Clover'])
plt.show()
"""
Explanation: A simple ecosystem
Suppose there are $x_1$ rabbits and $x_2$ clovers. Rabbits eat clovers with a rate of $k_1$ to reproduce. Similarly, rabbits die with rate $k_2$ and a clover grows.
Pray (Clover): 🍀
Predator (Rabbit): 🐰
<h1><center>
🐰🍀 $\rightarrow$ 🐰🐰
</center></h1>
<h1><center>
🐰 $\rightarrow$ 🍀
</center></h1>
In this system, clearly the total number of objects $x_1+x_2 = N$ is constant.
Probabilistic question
What is the distribution of the number of rabbits at time $t$
Statistical questions
What are the parameters $k_1$ and $k_2$ of the system given observations of rabbit counts at specific times $t_1, t_2, \dots, t_K$
Given rabbit counts at time $t$, predict counts at time $t + \Delta$
End of explanation
"""
#%matplotlib nbagg
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
A = np.array([[1,0,1],[1,0,0],[1,1,0],[0,1,0]])
B = np.array([[2,0,0],[0,0,1],[0,2,0],[0,0,1]])
#k = np.array([0.02,0.09, 0.001, 0.3])
#x0 = np.array([1000,1000,10000])
k = np.array([0.02,0.19, 0.001, 2.8])
x0 = np.array([1000,1,10000])
X,T = simulate_skm(A,B,k,x0,STEPS=50000)
plt.figure(figsize=(10,5))
plt.plot(T,X[0,:], '.y',ms=2)
plt.plot(T,X[1,:], '.r',ms=2)
plt.plot(T,X[2,:], '.g',ms=2)
plt.legend([u'Rabbit',u'Wolf',u'Clover'])
plt.show()
sm = int(sum(X[:,0]))+1
Hist = np.zeros((sm,sm))
STEPS = X.shape[1]
for i in range(STEPS):
Hist[int(X[1,i]),int(X[0,i])] = Hist[int(X[1,i]),int(X[0,i])] + 1
plt.figure(figsize=(10,5))
#plt.plot(X[0,:],X[1,:], '.',ms=1)
plt.imshow(Hist,interpolation='nearest')
plt.xlabel('# of Rabbits')
plt.ylabel('# of Wolves')
plt.gca().invert_yaxis()
#plt.axis('square')
plt.show()
%matplotlib inline
import networkx as nx
import numpy as np
import matplotlib.pylab as plt
# Maximum number of rabbits or wolves
N = 30
k = np.array([0.005,0.06, 0.001, 0.1])
G = nx.DiGraph()
pos = [u for u in product(range(N),range(N))]
idx = [u[0]*N+u[1] for u in pos]
G.add_nodes_from(idx)
edge_colors = []
edges = []
for y,x in product(range(N),range(N)):
clover = N - (x+y)
source = (x,y)
rho = k*np.array([source[0]*clover, source[0], source[0]*source[1], source[1]])
srho = np.sum(rho)
if srho==0:
srho = 1.
if x<N-1: # Rabbit Birth
target = (x+1,y)
edges.append((source[0]*N+source[1], target[0]*N+target[1]))
edge_colors.append(rho[0]/srho)
if y<N-1 and x>0: # Consumption
target = (x-1,y+1)
edges.append((source[0]*N+source[1], target[0]*N+target[1]))
edge_colors.append(rho[2]/srho)
# if y>0: # Wolf Death
# target = (x,y-1)
# edges.append((source[0]*N+source[1], target[0]*N+target[1]))
# edge_colors.append(rho[3]/srho)
# if x>0: # Rabbit Death
# target = (x-1,y)
# edges.append((source[0]*N+source[1], target[0]*N+target[1]))
# edge_colors.append(rho[1]/srho)
G.add_edges_from(edges)
col_dict = {u: c for u,c in zip(edges, edge_colors)}
cols = [col_dict[u] for u in G.edges() ]
plt.figure(figsize=(5,5))
nx.draw(G, pos, arrows=False, width=2, node_size=20, node_color="white", edge_vmin=0,edge_vmax=0.4, edge_color=cols, edge_cmap=plt.cm.gray_r )
plt.xlabel('# of rabbits')
plt.ylabel('# of wolves')
#plt.gca().set_visible('on')
plt.show()
"""
Explanation: A simple ecological network
Food (Clover): 🍀
Prey (Rabbit): 🐰
Predator (Wolf): 🐺
<h1><center>
🐰🍀 $\rightarrow$ 🐰🐰
</center></h1>
<h1><center>
🐰 $\rightarrow$ 🍀
</center></h1>
<h1><center>
🐰🐺 $\rightarrow$ 🐺🐺
</center></h1>
<h1><center>
🐺 $\rightarrow$ 🍀
</center></h1>
The total number of objects in this system is constant
End of explanation
"""
#%matplotlib nbagg
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
A = np.array([[1,0,1],[1,1,0],[0,1,0],[0,1,0]])
B = np.array([[2,0,1],[0,1,0],[0,2,0],[0,0,0]])
k = np.array([4.0,0.038, 0.02, 0.01])
x0 = np.array([50,100,1])
X,T = simulate_skm(A,B,k,x0,STEPS=10000)
plt.figure(figsize=(10,5))
plt.plot(T,X[0,:], '.b',ms=2)
plt.plot(T,X[1,:], '.r',ms=2)
plt.plot(T,X[2,:], '.g',ms=2)
plt.legend([u'Rabbit',u'Wolf',u'Clover'])
plt.show()
"""
Explanation: Alternative model
Constant food supply for the prey.
<h1><center>
🐰🍀 $\rightarrow$ 🐰🐰🍀
</center></h1>
<h1><center>
🐰🐺 $\rightarrow$ 🐺
</center></h1>
<h1><center>
🐺 $\rightarrow$ 🐺🐺
</center></h1>
<h1><center>
🐺 $\rightarrow$ ☠️
</center></h1>
This model is flawed, as it allows predators to reproduce even when no prey is present.
End of explanation
"""
#%matplotlib nbagg
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
death_rate = 1.8
A = np.array([[1,0,0,1],[1,1,0,0],[0,0,1,0],[0,0,1,0],[0,0,1,0],[0,1,0,0]])
B = np.array([[2,0,0,1],[0,0,1,0],[0,1,0,0],[0,2,0,0],[0,0,0,0],[0,0,0,0]])
k = np.array([9.7, 9.5, 30, 3.5, death_rate, death_rate])
x0 = np.array([150,20,10,1])
X,T = simulate_skm(A,B,k,x0,STEPS=5000)
plt.figure(figsize=(10,5))
plt.plot(X[0,:], '.b',ms=2)
plt.plot(X[1,:], 'or',ms=2)
plt.plot(X[2,:], '.r',ms=3)
plt.legend([u'Mouse',u'Hungry Cat',u'Happy Cat'])
plt.show()
"""
Explanation: 🙀 : Hungry cat
😻 : Happy cat
<h1><center>
🐭🧀 $\rightarrow$ 🐭🐭🧀
</center></h1>
<h1><center>
🐭🙀 $\rightarrow$ 😻
</center></h1>
<h1><center>
😻 $\rightarrow$ 🙀
</center></h1>
<h1><center>
😻 $\rightarrow$ 🙀🙀
</center></h1>
<h1><center>
😻 $\rightarrow$ ☠️
</center></h1>
<h1><center>
🙀 $\rightarrow$ ☠️
</center></h1>
End of explanation
"""
%matplotlib inline
import numpy as np
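import matplotlib.pylab as plt
# A small sketch of the random walk described in the explanation below:
# starting from x, pick [0, x] or [x, 1] with probability 1/2 each and redraw
# x uniformly from the chosen interval (added here only as an illustration).
n_steps = 5000
walk_x = 0.5
trace = np.zeros(n_steps)
for i in range(n_steps):
    if np.random.rand() < 0.5:
        walk_x = np.random.rand()*walk_x                  # new point uniform on [0, x]
    else:
        walk_x = walk_x + np.random.rand()*(1 - walk_x)   # new point uniform on [x, 1]
    trace[i] = walk_x
plt.hist(trace, bins=50)
plt.show()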
"""
Explanation: From Diaconis and Freedman
A random walk on the unit interval. Start with $x$; choose one of the two intervals $[0,x]$ and $[x,1]$ with equal probability $0.5$, then choose a new $x$ uniformly on the chosen interval.
End of explanation
"""
#Diaconis and Freedman fern
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
T = 3000;
x = np.matrix(np.zeros((2,T)));
x[:,0] = np.matrix('[0.3533; 0]');
A = [np.matrix('[0.444 -0.3733;0.06 0.6000]'), np.matrix('[-0.8 -0.1867;0.1371 0.8]')];
B = [np.matrix('[0.3533;0]'), np.matrix('[1.1;0.1]')];
w = 0.27;
for i in range(T-1):
if np.random.rand()<w:
c = 0;
else:
c = 1;
x[:,i+1] = A[c]*x[:,i] + B[c]
plt.figure(figsize=(5,5))
plt.plot(x[0,:],x[1,:], 'k.',ms=1)
plt.plot(x[0,0:40].T,x[1,0:40].T, 'k:')
plt.axis('equal')
plt.show()
plt.plot(x[0,0:200].T,x[1,0:200].T, 'k-')
plt.axis('equal')
plt.show()
"""
Explanation: A random switching system
\begin{eqnarray}
A(0) & = & \left(\begin{array}{cc} 0.444 & -0.3733 \\ 0.06 & 0.6000 \end{array}\right) \\
B(0) & = & \left(\begin{array}{c} 0.3533 \\ 0 \end{array}\right) \\
A(1) & = & \left(\begin{array}{cc} -0.8 & -0.1867 \\ 0.1371 & 0.8 \end{array}\right) \\
B(1) & = & \left(\begin{array}{c} 1.1 \\ 0.1 \end{array}\right) \\
w & = & 0.2993
\end{eqnarray}
\begin{eqnarray}
c_t & \sim & \mathcal{BE}(c; w) \\
x_t & = & A(c_t) x_{t-1} + B(c_t)
\end{eqnarray}
End of explanation
"""
#%matplotlib nbagg
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
A = np.array([[1,0],[0,1]])
B = np.array([[0,1],[1,0]])
k = np.array([0.5,0.5])
x0 = np.array([0,50])
X,T = simulate_skm(A,B,k,x0,STEPS=10000)
plt.figure(figsize=(10,5))
plt.plot(T,X[0,:], '.b',ms=2)
plt.plot(T,X[1,:], '.g',ms=2)
plt.legend([u'A', u'B'])
plt.show()
plt.hist(X[0,:],range=(0,np.sum(x0)),bins=np.sum(x0))
plt.show()
"""
Explanation: Polya Urn Models
Many urn models can be represented as instances of the stochastic kinetic model
Mahmoud:
Ballot Problem
\begin{eqnarray}
2X_1 & \rightarrow & X_1 \\
2X_2 & \rightarrow & X_2
\end{eqnarray}
Polya-Eggenberger Urn
\begin{eqnarray}
X_1 & \rightarrow & s X_1 \\
X_2 & \rightarrow & s X_2
\end{eqnarray}
Bernard-Friedman Urn
\begin{eqnarray}
X_1 & \rightarrow & s X_1 + a X_2 \\
X_2 & \rightarrow & a X_1 + s X_2
\end{eqnarray}
Bagchi-Pal Urn
\begin{eqnarray}
X_1 & \rightarrow & a X_1 + b X_2 \\
X_2 & \rightarrow & c X_1 + d X_2
\end{eqnarray}
Ehrenfest
\begin{eqnarray}
X_1 & \rightarrow & X_2 \\
X_2 & \rightarrow & X_1
\end{eqnarray}
Extended Ehrenfest?
\begin{eqnarray}
2 X_1 & \rightarrow & X_1 + X_2 \\
2 X_2 & \rightarrow & X_1 + X_2
\end{eqnarray}
Ehrenfest
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
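# Read with the same A/B convention as before, this cell appears to encode the
# Polya-Eggenberger urn with s = 2: reaction 1 is X1 -> 2*X1, reaction 2 is X2 -> 2*X2.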
A = np.array([[1,0],[0,1]])
B = np.array([[2,0],[0,2]])
k = np.array([0.05,0.05])
x0 = np.array([3,1])
X,T = simulate_skm(A,B,k,x0,STEPS=2000)
plt.figure(figsize=(10,5))
plt.plot(T,X[0,:]/(X[0,:]+X[1,:]), '.-',ms=2)
plt.ylim([0,1])
plt.show()
"""
Explanation: Polya
End of explanation
"""
|
qkitgroup/qkit | qkit/doc/notebooks/resonator_class_basics.ipynb | gpl-2.0 | ## start qkit and import the necessary classes; here we assume an already configured qkit environment
import qkit
qkit.start()
from qkit.analysis.resonator import Resonator
"""
Explanation: Resonator class basics
The resonator class can be used to fit resonator measurements (or fit live during the measurement, see the spectroscopy example notebook for that).<br>
The resonator class is able to fit the (squared) amplitude or complex scattering data to multiple fit models. Currently it is possible to fit the data to a Lorentzian curve, skewed Lorentzian curve, Fano resonance curve, and a circle fit.<br>
All fit parameters are saved in the hdf5 file and comparison views between data and fit are created.
More information about the background of the fit models can be found e.g. in the papers by P. J. Petersan, J. Appl. Phys 84 (1998) and S. Probst, Rev. Sci. Instrum. 86 (2015)
End of explanation
"""
r = Resonator(qkit.fid.measure_db['XXXXXX'])
"""
Explanation: A resonator object takes the path of the data file as an argument (mandatory). The path can be retrieved by using the file UUID and qkit's file information database.
End of explanation
"""
r.fit_lorentzian(f_min = 5.0e9) ## set lower frequency boundary
r.fit_skewed_lorentzian(f_min = 5.2e9, f_max = 5.6e9) ## set frequency range
r.fit_fano(fit_all = True) ## fit all entries of a value matrix
r.fit_circle(reflection = True, fit_all = True) ## reflection resonator; fit all entries of a value matrix
"""
Explanation: The fitting is done by calling one of the fit functions of the object. It is assumed that the datasets for amplitude and phase are properly named. Fitting all entries of a value matrix dataset requires the additional parameter fit_all = True. Fitting a value box is not yet possible.<br>
The circle fit algorithm takes the resonator type into account by choosing either reflection = True or notch = True (default).<br>
The frequency range of the fit can be set so that only the data within that range is fitted. By default all frequency points contribute to the fit.
End of explanation
"""
|
jonathf/chaospy | docs/user_guide/fundamentals/quasi_random_samples.ipynb | mit | import chaospy
uniform_cube = chaospy.J(chaospy.Uniform(0, 1), chaospy.Uniform(0, 1))
count = 300
random_samples = uniform_cube.sample(count, rule="random", seed=1234)
additive_samples = uniform_cube.sample(count, rule="additive_recursion")
halton_samples = uniform_cube.sample(count, rule="halton")
hammersley_samples = uniform_cube.sample(count, rule="hammersley")
korobov_samples = uniform_cube.sample(count, rule="korobov")
sobol_samples = uniform_cube.sample(count, rule="sobol")
from matplotlib import pyplot
pyplot.rc("figure", figsize=[16, 9])
pyplot.subplot(231)
pyplot.scatter(*random_samples)
pyplot.title("random")
pyplot.subplot(232)
pyplot.scatter(*additive_samples)
pyplot.title("additive recursion")
pyplot.subplot(233)
pyplot.scatter(*halton_samples)
pyplot.title("halton")
pyplot.subplot(234)
pyplot.scatter(*hammersley_samples)
pyplot.title("hammersley")
pyplot.subplot(235)
pyplot.scatter(*korobov_samples)
pyplot.title("korobov")
pyplot.subplot(236)
pyplot.scatter(*sobol_samples)
pyplot.title("sobol")
pyplot.show()
"""
Explanation: Quasi-random samples
As demonstrated in the problem formulation section,
Monte Carlo integration is by nature a very slow converging method.
One way to improve on this convergence is to use more evenly spread samples, as the low-discrepancy sequences generated above do; a small numerical comparison is sketched at the end of the code cell above. The
error in convergence is proportional to $1/\sqrt{K}$ where $K$ is the
number of samples. It is somewhat better with variance reduction
techniques, which often reach errors proportional to $1/K$. For a full
overview of the convergence rate of the various methods, see for example
the excellent book "Handbook of Monte Carlo Methods" by Kroese, Taimre
and Botev kroese_handbook_2011. However,
as the number of dimensions grows, the Monte Carlo convergence rate stays
the same, making it immune to the curse of dimensionality.
Low-discrepancy sequences
In mathematics, a low-discrepancy
sequence is a
sequence with the property that for all values of N, its sub-sequence $Q_1,
\dots, Q_N$ has a low discrepancy.
Roughly speaking, the discrepancy of a sequence is low if the proportion
of points in the sequence falling into an arbitrary set B is close to
proportional to the measure of B, as would happen on average (but not
for particular samples) in the case of an equi-distributed sequence.
Specific definitions of discrepancy differ regarding the choice of B
(hyper-spheres, hyper-cubes, etc.) and how the discrepancy for every B is
computed (usually normalized) and combined (usually by taking the worst
value).
Low-discrepancy sequences are also called quasi-random or sub-random
sequences, due to their common use as a replacement of uniformly
distributed random numbers. The "quasi" modifier is used to denote
more clearly that the values of a low-discrepancy sequence are neither
random nor pseudo-random, but such sequences share some properties of
random variables and in certain applications such as the quasi-Monte
Carlo method their lower discrepancy is an important advantage.
In chaospy, the following low-discrepancy schemes exist and can be invoked by passing the appropriate rule flag to the chaospy.Distribution.sample() method:
End of explanation
"""
antithetic_samples = uniform_cube.sample(40, antithetic=True, seed=1234)
pyplot.rc("figure", figsize=[6, 4])
pyplot.scatter(*antithetic_samples)
pyplot.show()
"""
Explanation: It is easy to observe by eye that the samples are spread much more evenly for the sequences than for the random samples.
All of these methods are deterministic, so running the same code again will produce the same samples.
Antithetic variate
Create antithetic
variate from
variables on the unit hyper-cube.
In statistics, the antithetic variate method is a variance reduction
technique used in Monte Carlo methods. It does so by doing a type of mirroring of samples.
In chaospy we can create antithetic variate by providing the antithetic=True flag to the chaospy.Distribution.sample() method:
End of explanation
"""
pyplot.rc("figure", figsize=[8, 4])
pyplot.subplot(121)
pyplot.scatter(*uniform_cube.sample(40, antithetic=[False, True], seed=1234))
pyplot.title("mirror x-axis")
pyplot.subplot(122)
pyplot.scatter(*uniform_cube.sample(40, antithetic=[True, False], seed=1234))
pyplot.title("mirror y-axis")
pyplot.show()
"""
Explanation: Since the uniform distribution is fully symmetrical, it is possible to observe the mirroring visually.
Looking at the 40 samples here, it is possible to interpret them as 10 unique samples, each mirrored three times: along the x-axis, along the y-axis and along the x-y-diagonal.
Antithetic variates do not scale too well into higher dimensions, as the number of mirrored samples per unique sample grows exponentially with the number of mirrored dimensions.
So in higher dimensional problems it is possible to limit the mirroring to include only a few dimensions of interest by passing a Boolean sequence as the antithetic flag:
End of explanation
"""
pyplot.rc("figure", figsize=[6, 4])
lhc_samples = uniform_cube.sample(count, rule="latin_hypercube", seed=1234)
pyplot.scatter(*lhc_samples)
pyplot.show()
"""
Explanation: Here the same 40 samples are generated, but mirrored along only a single axis, giving 20 unique samples, twice as many as in the first try.
Latin hyper-cube sampling
Latin hyper-cube sampling is a stratification scheme that forces random samples to be spread out more evenly than traditional random samples.
It is similar to the low-discrepancy sequences, but maintains random samples at its core.
Generating latin hyper-cube samples can be done by passing the rule="latin_hypercube" flag to chaospy.Distribution.sample():
End of explanation
"""
|
fifabsas/talleresfifabsas | python/Extras/Incertezas/introduccion.ipynb | mit | x = 5
y = 'Hola mundo!'
z = [1,2,3]
"""
Explanation: Python Workshop - Statistics in Experimental Physics - Day 1
This presentation/notebook is available at:
FIFA BsAs Github repository (to download it, use the raw button or fork the repository)
FIFA BsAs workshops web page
Programming: what is it, exactly?
Programming means giving the computer a list of concrete tasks to carry out. Essentially, a computer knows how to:
Read data
Write data
Transform data
And nothing more than that. The computer thus becomes a huge calculator that lets us do any kind of computation we need within Physics (and in life as well), as long as we know how to tell the machine which computations to perform.
But what is Python?
Python is a language for talking to the computer, one of the so-called programming languages. This language, which we can write and read, must be transformed into a language the computer understands (or handed to an intermediary called a virtual machine); that is how the transformations happen. This whole programming model is sketched in the following figure
<img src="modelo_computacional_python.png" alt="Drawing" style="width: 400px;"/>
History
Python was born in 1991, when its creator Guido van Rossum made it public in version 0.9. The language has always aimed to be easy to learn and able to handle tasks of every kind. It is easy to learn because of its syntax, its dynamic typing (we will see what that means) and the huge number of libraries/modules available for everything.
Tools for the workshop
To work we will use a text editor (we recommend Visual Studio Code, which comes with Anaconda), a terminal, or directly the Spyder editor (you can look for it among the computer's applications if you installed Anaconda, or if it is installed on the classroom PC). Also, if you prefer, we can work in a Jupyter Notebook, which lets you make files like this one (and write reports with interleaved code)
This is up to the consumer's taste; we know how to use all of these tools. Each one has its advantages and disadvantages:
- Writing and running in the console requires installing nothing more than Python. Learning to use the console brings many productivity benefits
- An editor or development environment, having more functionality, is heavier, and it will probably be more expensive (Pycharm, the most complete Python development environment, costs around 200 dollars... ouch)
- Jupyter notebook is a very interactive environment, but it can cause problems with the execution order. You have to be careful
To install Python, it is convenient to download Anaconda. This project is a Python distribution that, with a friendly graphical interface and a package manager called conda, lets you install all the scientific libraries in one go. On Linux and macOS installing Python without Anaconda is easier; on Windows I would say it is a necessity unless you want to get into obscure compilation business (and besides, Windows support in the libraries is not as broad).
There is a project called pyenv that on Linux and macOS lets you install any version of Python. If you want it (although Anaconda is better to start with), ask and we will set it up quickly.
Data, memory and other such things
To do computations, we first need a place to store the data. That place is called memory. Our data are stored in memory slots, and those slots have a name, a label with which we can refer to them and ask the computer to use them in operations, modify them, and so on.
Since those slots can vary as the data change, we call them variables, and the process of filling a variable with a value is called assignment, which in Python corresponds to "=".
So far we only have numerical values in mind for our variables, following the super-calculator analogy. But that is not the whole story; in fact, variables in Python carry the additional information of the data type. This data type determines the operations that can be done with the variable (in addition to its size in memory, which was already to be expected from the value of the variable itself).
Let's look at a couple of examples
End of explanation
"""
print(y)
print(type(x))
print(type(y), type(z), len(z))
"""
Explanation: Here we stored in a memory slot that we named "x" the information of an integer value, 5; in another memory slot, which we called "y", we stored the text "Hola mundo!". In Python, quotation marks indicate that whatever we enclose in them is text. x is not text, so Python will treat it as a variable to manipulate. "z" is the name of the memory slot where a list with 3 integer elements is stored.
We can do things with this information. Python is an interpreted language (unlike others such as Java or C++), which means that as soon as we ask Python for something, it executes it. So we can ask it, for example, to print on screen the content of y, or the type of value x is (integer), among other things.
End of explanation
"""
# Do exercise 1 here
"""
Explanation: We will use the type() function a lot to understand what kind of variables we are working with. type() is a built-in Python function: it takes a variable as an argument (what goes between the parentheses) and immediately returns the type of variable it is.
Exercise 1
In the following block, create the variables "dato1" and "dato2" and store in them the texts "estoy programando" and "que emocion!". Using the type() function, find out what type of data is stored in those variables.
End of explanation
"""
a = 5
b = 7
c = 5.0
d = 7.0
print(a+b, b+c, a*d, a/b, a/d, c**2)
"""
Explanation: With integer and float variables we can do the usual, expected mathematical operations. Let's take a quick look at the compatibilities between these types of variables.
End of explanation
"""
# Do exercise 2 here. The expected result is -98.01
"""
Explanation: Exercise 2
Compute the result of $$ \frac{(2+7.9)^2}{4^{7.4-3.14*9.81}-1} $$ and store it in a variable
End of explanation
"""
lista1 = [1, 2, 'saraza']
print(lista1, type(lista1))
print(lista1[1], type(lista1[1]))
print(lista1[2], type(lista1[2]))
print(lista1[-1])
lista2 = [2,3,4]
lista3 = [5,6,7]
#print(lista2+lista3)
print(lista2[2]+lista3[0])
tupla1 = (1,2,3)
lista4 = [1,2,3]
lista4[2] = 0
print(lista4)
#tupla1[0] = 0
print(tupla1)
"""
Explanation: Lists, tuples and dictionaries
Lists are chains of data of any type, held together in a single variable, with positions inside the list through which we can access them. In Python, lists are numbered from 0 onwards.
Lists also have some operations that are valid on them.
Tuples are different. Lists are editable (in jargon, mutable), but tuples are not (immutable). This matters when, during the development of code where we need certain things not to change, we want to avoid accidentally editing fundamental values of the problem we are solving.
End of explanation
"""
listilla = list(range(10))
print(listilla, type(listilla))
"""
Explanation: There are very convenient ways to build lists. We present one that we will use a lot: the range function. It returns something like a recipe for producing the numbers; therefore we have to tell the generator to actually build the list, by means of another built-in Python tool, list
End of explanation
"""
# Do exercise 3 here
"""
Explanation: Since in general this is not done very often, there is no "quick" or "more elegant" way to do it.
Exercise 3
Make a list with the results of the last two exercises and print it on screen
Overwrite the same variable with the same list but with its elements permuted, and print the list again
Example of what should appear on screen
['estoy programando', 'que emocion!', -98.01]
['estoy programando', -98.01, 'que emocion!']
End of explanation
"""
# Do exercise 4 here
"""
Explanation: Exercise 4
Make a list of 15 elements with the range function and add up elements 5, 10 and 12
With the same list, compute the product of its first 4 elements
With the same list, subtract the first value from the last one
End of explanation
"""
d = {"hola": 1, "mundo": 2, 0: "numero", (0, 1): ["tupla", 0, 1]} # Las llaves pueden ser casi cualquier cosa (lista no)
print(d, type(d))
print(d["hola"])
print(d[0])
print(d[(0, 1)])
# Podés setear una llave (o key) vieja
d[0] = 10
# O podes agregar una nueva. El orden de las llaves no es algo en qué confiar necesariamente, para eso está OrderedDict
d[42] = "La respuesta"
# Cambiamos el diccionario, así que aparecen nuevas keys y cambios de values
print(d)
# Keys repetidas terminan siendo sobreescritas
rep_d = {0: 1, 0: 2}
print(rep_d)
# Otra cosas menor, un diccionario vacío es
empt_d = {}
print(empt_d)
"""
Explanation: Now, the title mentioned dictionaries... but they are not the ones we use to look up the meaning of words. Although they can be similar, or work the same way!
A dictionary is a relation between a variable called a key and another variable called a value. Relation in the sense of the functions we saw in high school, but usually in a discrete form.
The magic is that, knowing the key, you already have the value, so you can use it like a list but without indices, using things like strings instead. Keys are unique, and if I try to create a dictionary with repeated keys they get overwritten and only the last occurrence remains
Let's look at an example
End of explanation
"""
new_d = {0: '0', '0': 0}
print(len(new_d))
# Diccionario vacío
print(len({}))
"""
Explanation: The dictionary is particularly magical and you can use it for a great many things (besides, Python uses it for almost everything internally, so it is very useful to know how to use them!).
The length of a dictionary is the number of keys it has, for example
End of explanation
"""
# Do exercise 5 here
# Uncomment this line and get to work
# print(tu_dict[1] + tu_dict["FIFA"] + tu_dict[(3,4)])
"""
Explanation: Exercise 5
Build a dictionary such that the following code
print(tu_dict[1] + tu_dict["FIFA"] + tu_dict[(3,4)])
prints "Programador, hola mundo!". It can have as many entries as you like, there is no limit to creativity here
End of explanation
"""
print(5 > 4)
print(4 > 5)
print(4 == 5) #La igualdad matemática se escribe con doble ==
print(4 != 5) #La desigualdad matemática se escribe con !=
print(type(4 > 5))
"""
Explanation: Booleans
This type of variable has only two possible values: 1 and 0, or True and False. We will use them essentially so that Python can recognize relations between numbers.
End of explanation
"""
print([1, 2, 3] == [1, 2, 3])
print([1, 2, 3] == [1, 3, 2])
"""
Explanation: We can also compare lists, where all the entries should be equal
End of explanation
"""
print((0, 1) == (0, 1))
print((1, 3) == (0, 3))
"""
Explanation: The same goes for tuples (and it applies to dictionaries too)
End of explanation
"""
a = 5
b = a
print(id(a) == id(b))
a = 12 # Reutilizamos la variable, con un nuevo valor
b = 12
print(id(a) == id(b)) # Python cachea números de 16bits
a = 66000
b = 66000
print(id(a) == id(b))
# No cachea listas, ni strings
a = [1, 2, 3]
b = [1, 2, 3]
print(id(a) == id(b))
a = "Python es lo más"
b = "Python es lo más"
print(id(a) == id(b))
"""
Explanation: With the id() function we can see whether two variables point to the same memory address, that is, whether two variables are exactly the same object (it may sound philosophical, but in Python the difference matters)
End of explanation
"""
nueva_l = [0, 42, 3]
nueva_t = (2.3, 4.2);
nuevo_d = {"0": -4, (0, 1): "tupla"}
# La frase es
# >>> x in collection
# donde collection es una tupla, lista o diccionario. Parece inglés escrito no?
print(42 in nueva_l)
print(3 in nueva_t)
print((0,1) in nuevo_d)
"""
Explanation: Lists, tuples and dictionaries can also return booleans when asked whether or not they contain some element. Dictionaries work on their keys, and lists/tuples on their values
End of explanation
"""
# Do exercise 6 here
"""
Explanation: Exercise 6
Find out the result of 4!=5==1. Where would you put parentheses so that the result is different?
End of explanation
"""
parametro = 5
if parametro > 0: # un if inaugura un nuevo bloque indentado
print('Tu parametro es {} y es mayor a cero'.format(parametro))
print('Gracias')
else: # el else inaugura otro bloque indentado
print('Tu parametro es {} y es menor o igual a cero'.format(parametro))
print('Gracias')
print('Vuelva pronto')
print(' ')
parametro = -5
if parametro > 0: # un if inaugura un nuevo bloque indentado
print('Tu parametro es {} y es mayor a cero'.format(parametro))
print('Gracias')
else: # el else inaugura otro bloque indentado
print('Tu parametro es {} y es menor o igual a cero'.format(parametro))
print('Gracias')
print('Vuelva pronto')
print(' ')
"""
Explanation: Control flow: conditionals and iterations (if and for, for friends)
If, at bottom, a program is a series of algorithms the computer must follow, then a fundamental skill for programming is knowing how to ask the computer to perform some operations when a condition holds and others when it does not. This will let us write much more complex programs. So let's see how to apply an if.
End of explanation
"""
# Do exercise 7 here
"""
Explanation: Exercise 7
Write a program with an if that prints the sum of two numbers if a third one is positive, and prints their difference if the third one is negative.
End of explanation
"""
nueva_lista = ['nada',1,2,'tres', 'cuatro', 7-2, 2*3, 7/1, 2**3, 3**2]
for i in range(10): # i es una variable que inventamos en el for, y que tomará los valores de la
print(nueva_lista[i]) #lista que se genere con range(10)
"""
Explanation: To make Python repeat the same action n times, we will use the for structure. At each step we can make use of the "iteration number" as a variable. That will be enough for most cases.
End of explanation
"""
# Do exercise 8 here
"""
Explanation: Exercise 8
Make another list with 16 elements, and write a program that prints only the first 7 of them using a for
Modify the previous for so that it prints only the even elements of your list
End of explanation
"""
i = 1
while i < 10: # tener cuidado con los while que se cumplen siempre. Eso daría lugar a los loops infinitos.
i = i+1
print(i)
"""
Explanation: The while structure is not much recommended in Python, but it is important to know it exists: it consists of repeating a step while a condition holds. It is like a for mixed with an if.
End of explanation
"""
# Do exercise 9 here
"""
Explanation: Exercise 9
Compute the factorial of N, with N being the only variable received (it can be done using for or using while).
Compute the sum of the elements of a list.
End of explanation
"""
f = lambda x: x**2 - 5*x + 6
print(f(3), f(2), f(0))
"""
Explanation: Functions
But if we want to define our own way of computing something, or if we want to group a series of instructions under a single name, we can define our own functions, taking as many arguments as we like.
We will use lambda functions (also called anonymous functions) mostly for mathematical functions, although they have other uses too. Let's define the polynomial $f(x) = x^2 - 5x + 6$, whose roots are $x = 3$ and $x = 2$.
End of explanation
"""
def promedio(a,b,c):
N = a + b + c # Es importante que toda la función tenga su contenido indentado
N = N/3.0
return N
mipromedio = promedio(5,5,7) # Aquí rompimos la indentación
print(mipromedio)
"""
Explanation: Lambda functions are necessarily one-line functions and do not need an explicit return statement; that is why they are good candidates for simple mathematical expressions.
The other, more general functions are called def functions, and they have the following form.
End of explanation
"""
def otra_funcion(a, b):
return a + b * 2
# Es un valor!
otra_f = otra_funcion
print(otra_f)
print(type(otra_f))
print(otra_f(2, 3))
"""
Explanation: Something very interesting and curious is that we can do the following with functions
End of explanation
"""
# Do exercise 10 here
"""
Explanation: Functions can be variables, and this opens the door to many things. If you are curious, ask us, because this is really cool!
Exercise 10
Write a function that computes the average of $n$ elements given in a list.
Hint: use the functions len() and sum() as helpers.
End of explanation
"""
# Do exercise 11 here
"""
Explanation: Exercise 11
Using what we already know about mathematical functions and the branching an if can produce, write a function that receives the coefficients $a, b, c$ of the parabola $f(x) = ax^2 + bx + c$ and computes the roots if they are real (that is, using the discriminant $\Delta = b^2 - 4ac$ as the criterion), and otherwise prints a warning on screen that the computation cannot be done in $\mathbb{R}$.
End of explanation
"""
# Bonus track 1
"""
Explanation: Bonus track 1
Modify the previous function so that it computes the roots anyway, even if they are complex. Python allows complex numbers written in the form 1 + 4j. Do a bit of research
End of explanation
"""
# Do exercise 12 here
"""
Explanation: Exercise 12
Repeat exercise 9, that is:
1. Write a function that computes the factorial of N, with N being the only variable the function receives (it can be done using for or using while).
* Write a function that computes the sum of the elements of a list.
Can you think of another way to do the factorial? Think of the mathematical definition, write it in Python, and try computing the factorial of 100 with this new definition
End of explanation
"""
import math # Llamamos a una biblioteca
r1 = math.pow(2,4)
r2 = math.cos(math.pi)
r3 = math.log(100,10)
r4 = math.log(math.e)
print(r1, r2, r3, r4)
"""
Explanation: Packages and modules
But the basic operations of addition, subtraction, multiplication and division are all that a language like Python can do "natively". A power or a sine is non-linear algebra, and to compute one you would have to invent an algorithm (a series of steps), for example to evaluate sin($\pi$). But someone already did that, already thought it through, already wrote it in Python, and now we can all use that algorithm without thinking about it. We only have to tell our Python interpreter where that algorithm is stored. This possibility of using other people's algorithms is fundamental in programming, because it lets our problem reduce to understanding how to call these already-written algorithms instead of having to think them up every time.
So let's call a package (as it is called in Python) named math, which will extend our mathematical possibilities.
End of explanation
"""
# Do exercise 13 here
"""
Explanation: To understand how these functions work, it is important to consult their documentation. The documentation for this particular library can be found at
https://docs.python.org/2/library/math.html
Exercise 13
Use Python as a calculator and find the results of
$\log(\cos(2\pi))$
$\text{atanh}(2^{\cos(e)} -1) $
$\sqrt{x^2+2x+1}$ with $x = 125$
End of explanation
"""
import taller_python # Vean el repositorio!
"""
Explanation: Creating libraries
Well, now that we know how to use libraries, what remains is to learn how to create them. But for that, we need to know what a module is in Python and how it relates to a package.
A module is a Python file, a file with the *.py extension, for example taller_python.py (as some of you may already have made). In this file we add functions, variables, etc., which can then be called from another module using the name without the extension, that is
End of explanation
"""
print(taller_python.func(5, 6))
# Veamos la documentación
help(taller_python.func)
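# A quick illustration of the module search path mentioned in the explanation below:
import sys
print(sys.path)  # directories where Python looks for modules and packages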
"""
Explanation: To look up these modules, Python checks whether the imported module (via the import command) is present in the same folder as the importing one, and then in a series of standard Python locations (which can be inspected and modified using sys.path, after importing the sys package). If it finds the module it imports it and you can use its functions; if it cannot, an exception is raised
End of explanation
"""
# Do exercise 14 here
"""
Explanation: Try to import the function __func_oculta. It can be done, but it is a Python hack and the idea is that you should not know about it. It is a way of hiding and encapsulating code, which is one of the principles of object-oriented programming.
Finally, a package like math is a set of modules arranged in a folder named math, with a special __init__.py file that makes the folder behave like a module. Python imports whatever it sees in the __init__.py file, and additionally allows importing the modules inside (or submodules) if their names do not start with underscores.
It is usually not advisable to put work into the __init__.py itself, unless there is a very compelling reason (or plain laziness)
Exercise 14
Create a library called mi_taller_python and add two functions, one that returns the result of $\sqrt{x^2+2x+1}$ for any x and another that returns the result of $(x^2+2x+1)^{y}$, for any x and y. Make as many hidden functions as you need (although we always recommend minimizing them)
End of explanation
"""
# Bonus track 2 goes here, to get a taste of the next class
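# One possible sketch (a hypothetical solution, shown only as an illustration):
from scipy.optimize import newton
import math
root = newton(lambda x: 1.0/x - math.log(x), 2.0)  # 2.0 is just an initial guess
print(root)  # should be close to 1.763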
"""
Explanation: Bonus track 2
Now that we dare to look for new libraries and define functions, look up the newton() function from the scipy.optimize library to find $x$ satisfying the following non-linear equation $$\frac{1}{x} = \ln(x)$$
End of explanation
"""
|
xpharry/Udacity-DLFoudation | tutorials/batch-norm/.ipynb_checkpoints/Batch_Normalization_Lesson-checkpoint.ipynb | mit | # Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
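# A minimal sketch of the batch normalization transform discussed below (not the
# tf.layers implementation used later in this notebook): normalize a mini-batch
# with its own mean and variance, then scale and shift with the learned
# parameters gamma and beta.
example_batch = np.random.normal(size=(60, 100)).astype(np.float32)  # hypothetical layer outputs
epsilon = 0.001
gamma = np.ones(100, dtype=np.float32)   # learned scale, initialized to 1
beta = np.zeros(100, dtype=np.float32)   # learned shift, initialized to 0
batch_mean = example_batch.mean(axis=0)
batch_var = example_batch.var(axis=0)
normalized = (example_batch - batch_mean) / np.sqrt(batch_var + epsilon)
bn_output = gamma * normalized + beta
print(bn_output.mean(axis=0)[:3], bn_output.std(axis=0)[:3])  # roughly 0 and 1 per feature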
"""
Explanation: Batch Normalization – Lesson
What is it?
What are it's benefits?
How do we add it to a network?
Let's see it work!
What are you hiding?
What is Batch Normalization?<a id='theory'></a>
Batch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called "batch" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.
Why might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.
For example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network.
Likewise, the output of layer 2 can be thought of as the input to a single layer network, consistng only of layer 3.
When you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).
Beyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning a book you can read online written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.
Benefits of Batch Normalization<a id="benefits"></a>
Batch normalization optimizes network training. It has been shown to have several benefits:
1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall.
2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train.
3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.
4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearlities that don't seem to work well in deep networks actually become viable again.
5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.
6. Provides a bit of regularlization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network.
7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.
Batch Normalization in TensorFlow<a id="implementation_1"></a>
This section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow.
The following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.
End of explanation
"""
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
"""
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
"""
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
"""
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
"""
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
"""
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
"""
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report accuracy against the validation data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
"""
Tests a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
"""
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calucated while training the model.
pred, corr = session.run([tf.arg_max(self.output_layer,1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
"""
Explanation: Neural network classes for testing
The following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.
About the code:
This class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.
It's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.
End of explanation
"""
def plot_training_accuracies(*args, **kwargs):
"""
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train`
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
"""
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
"""
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
"""
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
"""
Explanation: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
We add batch normalization to layers inside the fully_connected function. Here are some important points about that code:
1. Layers with batch normalization do not include a bias term.
2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)
3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.
4. We add the normalization before calling the activation function.
In addition to that code, the training step is wrapped in the following with statement:
python
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
This line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.
Finally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
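Putting those pieces together, a single batch-normalized layer has roughly the following shape. This is a simplified, hypothetical sketch (assuming TensorFlow 1.x, an is_training placeholder like the one above, and a ReLU non-linearity), not the notebook's actual fully_connected implementation:
python
def batch_norm_layer_sketch(layer_in, initial_weights, is_training):
    # No bias term: batch normalization's learned beta plays that role
    weights = tf.Variable(initial_weights)
    linear_output = tf.matmul(layer_in, weights)
    # Normalize before the activation; `training` switches between batch and population statistics
    normalized = tf.layers.batch_normalization(linear_output, training=is_training)
    return tf.nn.relu(normalized)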
We'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.
Batch Normalization Demos<a id='demos'></a>
This section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier.
We'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.
Code to support testing
The following two functions support the demos we run in the notebook.
The first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.
The second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.
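As a rough illustration, the plotting helper boils down to something like the following sketch (a hypothetical stand-in, assuming only that each NeuralNet stores its sampled accuracies in a training_accuracies list; the notebook's real function may differ in details):
python
import matplotlib.pyplot as plt

def plot_training_accuracies_sketch(nn, bn, batches_per_sample):
    # Each entry in training_accuracies was sampled every `batches_per_sample` batches
    for net, label in [(nn, 'Without batch normalization'), (bn, 'With batch normalization')]:
        xs = [i * batches_per_sample for i in range(len(net.training_accuracies))]
        plt.plot(xs, net.training_accuracies, label=label)
    plt.xlabel('Training batch')
    plt.ylabel('Validation accuracy')
    plt.legend()
    plt.show()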
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu)
"""
Explanation: Comparisons between identical networks, with and without batch normalization
The next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
"""
Explanation: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
If you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)
The following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.
End of explanation
"""
train_and_test(False, 0.01, tf.nn.sigmoid)
"""
Explanation: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)
In the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
The following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.relu)
"""
Explanation: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
The next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid)
"""
Explanation: In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
"""
Explanation: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
The cell below shows a similar pair of networks trained for only 2000 iterations.
End of explanation
"""
train_and_test(False, 2, tf.nn.relu)
"""
Explanation: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
The following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid)
"""
Explanation: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.
End of explanation
"""
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
"""
Explanation: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
However, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.relu)
"""
Explanation: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
The following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 0.01, tf.nn.sigmoid)
"""
Explanation: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
The following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id="successful_example_lr_1"></a>
End of explanation
"""
train_and_test(True, 1, tf.nn.sigmoid)
"""
Explanation: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
The following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id="successful_example_lr_2"></a>
End of explanation
"""
train_and_test(True, 2, tf.nn.sigmoid)
"""
Explanation: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
The following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
train_and_test(True, 1, tf.nn.relu)
"""
Explanation: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Full Disclosure: Batch Normalization Doesn't Fix Everything
Batch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.
This section includes two examples that show runs when batch normalization did not help at all.
The following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.
End of explanation
"""
train_and_test(True, 2, tf.nn.relu)
"""
Explanation: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
The following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.
End of explanation
"""
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
"""
Explanation: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Note: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.
Batch Normalization: A Detailed Look<a id='implementation_2'></a>
The layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization.
In order to normalize the values, we first need to find the average value for the batch. If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.
We represent the average as $\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$
$$
\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^m x_i
$$
We then need to calculate the variance, or mean squared deviation, represented as $\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\mu_B$), which gives us what's called the "deviation" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.
$$
\sigma_{B}^{2} \leftarrow \frac{1}{m}\sum_{i=1}^m (x_i - \mu_B)^2
$$
Once we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
Above, we said "(almost) standard deviation". That's because the real standard deviation for the batch is calculated by $\sqrt{\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch.
Why increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account.
At this point, we have a normalized value, represented as $\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\gamma$, and then add a beta value, $\beta$. Both $\gamma$ and $\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate.
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.
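To make those four equations concrete, here is a tiny NumPy walk-through using made-up numbers (purely illustrative, not part of the notebook's TensorFlow graph):
python
import numpy as np

linear_output = np.array([[1., 2., 3.],
                          [2., 4., 6.],
                          [0., 2., 4.],
                          [1., 0., 5.]])               # 4 examples, 3 layer outputs (the x_i values)
epsilon = 0.001
mu_B = linear_output.mean(axis=0)                       # batch mean
sigma2_B = ((linear_output - mu_B) ** 2).mean(axis=0)   # batch variance
x_hat = (linear_output - mu_B) / np.sqrt(sigma2_B + epsilon)
gamma, beta = np.ones(3), np.zeros(3)                   # learnable scale and shift
y = gamma * x_hat + beta                                # the batch-normalized output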
In NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
The next section shows you how to implement the math directly.
Batch normalization without the tf.layers package
Our implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.
However, if you would like to implement batch normalization at a lower level, the following code shows you how.
It uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.
1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.
End of explanation
"""
def batch_norm_test(test_training_accuracy):
"""
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
"""
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
"""
Explanation: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:
It explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.
It initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \leftarrow \gamma \hat{x_i} + \beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.
Unlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.
TensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block.
The actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.
tf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.
We use the tf.nn.moments function to calculate the batch mean and variance.
2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. However, it uses these lines to ensure population statistics are updated when using batch normalization:
python
if self.use_batch_norm:
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Our new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:
python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:
python
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)
return gamma * normalized_linear_output + beta
And replace this line in batch_norm_inference:
python
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
with these lines:
python
normalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)
return gamma * normalized_linear_output + beta
As you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\hat{x_i}$:
$$
\hat{x_i} \leftarrow \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}}
$$
And the second line is a direct translation of the following equation:
$$
y_i \leftarrow \gamma \hat{x_i} + \beta
$$
We still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you.
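If you do want to take that last step, one possible replacement (a hedged sketch using standard TensorFlow reductions, computing the same batch statistics as tf.nn.moments over axis 0) would be:
python
batch_mean = tf.reduce_mean(linear_output, axis=0)
batch_variance = tf.reduce_mean(tf.square(linear_output - batch_mean), axis=0)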
Why the difference between training and inference?
In the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:
python
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
And that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:
python
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
If you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?
First, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, 1 input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).
End of explanation
"""
batch_norm_test(True)
"""
Explanation: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
End of explanation
"""
batch_norm_test(False)
"""
Explanation: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
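You can convince yourself of this with a tiny NumPy illustration (toy numbers, unrelated to the network): a batch containing a single example always normalizes to zeros.
python
import numpy as np

single_batch = np.array([[3.2, -1.5, 0.7]])           # a "batch" with one example
mu = single_batch.mean(axis=0)                         # equals the example itself
var = single_batch.var(axis=0)                         # all zeros
print((single_batch - mu) / np.sqrt(var + 0.001))      # zeros, whatever the input was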
Note: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.
To overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it "normalize" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training.
So in the following example, we pass False for test_training_accuracy, which tells the network that we want to perform inference with the population statistics it calculated during training.
End of explanation
"""
|
ingmarschuster/rkhs_demo | RKHS_in_Machine_learning.ipynb | gpl-3.0 | from __future__ import division, print_function, absolute_import
from IPython.display import SVG, display, Image
import numpy as np, scipy as sp, pylab as pl, matplotlib.pyplot as plt, scipy.stats as stats, sklearn, sklearn.datasets
from scipy.spatial.distance import squareform, pdist, cdist
import distributions as dist #commit 480cf98 of https://github.com/ingmarschuster/distributions
"""
Explanation: $\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Nats}{\mathbb{N}}
\newcommand{\PDK}{\mathbf{k}}
\newcommand{\IS}{\mathcal{X}}
\newcommand{\FM}{\Phi}
\newcommand{\Gram}{K}
\newcommand{\RKHS}{\mathcal{H}}
\newcommand{\prodDot}[2]{\left\langle#1,#2\right\rangle}
\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
Reproducing Kernel Hilbert Spaces in Machine Learning
End of explanation
"""
display(Image(filename="monomials.jpg", width=200))
"""
Explanation: Motivation: Feature engineering in Machine Learning
In ML, one classic way to handle nonlinear relations in data (non-numerical data) with linear methods is to map the data to so called features using a nonlinear function $\FM$ (a function mapping from the data to a vector space).
End of explanation
"""
data = np.vstack([stats.multivariate_normal(np.array([-2,2]), np.eye(2)*1.5).rvs(100),
stats.multivariate_normal(np.ones(2)*2, np.eye(2)*1.5).rvs(100)])
distr_idx = np.r_[[0]*100, [1]*100]
for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]:
pl.scatter(*data[distr_idx==idx,:].T, c=c, alpha=0.4, marker=marker)
pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0), head_width=0.2, head_length=0.2, fc=c, ec=c)
pl.show()
"""
Explanation: In the Feature Space (the codomain of $\FM$), we can then use linear algebra, such as angles, norms and inner products, inducing nonlinear operations on the Input Space (the domain of $\FM$). The central thing we need, apart from the feature space being a vector space, is an inner product, as it induces norms and a way to measure angles.
Simple classification algorithm using only inner products
Say we are given data points from the mixture of two distributions with densities $p_0,p_1$:
$$x_i \sim w_0 p_0 + w_1 p_1$$ and labels $l_i = 0$ if $x_i$ was actually generated by $p_0$, $l_i = 1$ otherwise. A very simple classification algorithm would be to compute the mean in feature space $\mu_c = \frac{1}{N_c} \sum_{l_i = c} \FM(x_i)$ for $c \in \{0,1\}$ and assign a test point to the class which is the most similar in terms of inner product. In other words, the decision function
$f_d:\IS\to\{0,1\}$ is defined by
$$f_d(x) = \argmax_{c\in\{0,1\}} \prodDot{\FM(x)}{\mu_c}$$
End of explanation
"""
class Kernel(object):
def mean_emb(self, samps):
return lambda Y: self.k(samps, Y).sum()/len(samps)
def mean_emb_len(self, samps):
return self.k(samps, samps).sum()/(len(samps)**2)
def k(self, X, Y):
raise NotImplementedError()
class FeatMapKernel(Kernel):
def __init__(self, feat_map):
self.features = feat_map
def features_mean(self, samps):
return self.features(samps).mean(0)
def mean_emb_len(self, samps):
feature_space_mean = self.features_mean(samps)
return feature_space_mean.dot(feature_space_mean)
def mean_emb(self, samps):
feature_space_mean = self.features(samps).mean(0)
return lambda Y: self.features(Y).dot(feature_space_mean)
def k(self, X, Y):
gram = self.features(X).dot(self.features(Y).T)
return gram
class LinearKernel(FeatMapKernel):
def __init__(self):
FeatMapKernel.__init__(self, lambda x: x)
class GaussianKernel(Kernel):
def __init__(self, sigma):
self.width = sigma
def k(self, X, Y=None):
assert(len(np.shape(X))==2)
# if X=Y, use more efficient pdist call which exploits symmetry
if Y is None:
sq_dists = squareform(pdist(X, 'sqeuclidean'))
else:
assert(len(np.shape(Y))==2)
assert(np.shape(X)[1]==np.shape(Y)[1])
sq_dists = cdist(X, Y, 'sqeuclidean')
K = np.exp(-0.5 * sq_dists / self.width ** 2)
return K
class StudentKernel(Kernel):
def __init__(self, s2, df):
self.dens = dist.mvt(0,s2,df)
def k(self, X,Y=None):
if Y is None:
sq_dists = squareform(pdist(X, 'sqeuclidean'))
else:
assert(len(np.shape(Y))==2)
assert(np.shape(X)[1]==np.shape(Y)[1])
sq_dists = cdist(X, Y, 'sqeuclidean')
dists = np.sqrt(sq_dists)
return np.exp(self.dens.logpdf(dists.flatten())).reshape(dists.shape)
def kernel_mean_inner_prod_classification(samps1, samps2, kernel):
mean1 = kernel.mean_emb(samps1)
norm_mean1 = kernel.mean_emb_len(samps1)
mean2 = kernel.mean_emb(samps2)
norm_mean2 = kernel.mean_emb_len(samps2)
def sim(test):
return (mean1(test) - mean2(test))
def decision(test):
if sim(test) >= 0:
return 1
else:
return 0
return sim, decision
def apply_to_mg(func, *mg):
#apply a function to points on a meshgrid
x = np.vstack([e.flat for e in mg]).T
return np.array([func(i.reshape((1,2))) for i in x]).reshape(mg[0].shape)
def plot_with_contour(samps, data_idx, cont_func, method_name, delta = 0.025, pl = pl):
x = np.arange(samps.T[0].min()-delta, samps.T[1].max()+delta, delta)
y = np.arange(samps.T[1].min()-delta, samps.T[1].max()+delta, delta)
X, Y = np.meshgrid(x, y)
Z = apply_to_mg(cont_func, X,Y)
Z = Z.reshape(X.shape)
# contour labels can be placed manually by providing list of positions
# (in data coordinate). See ginput_manual_clabel.py for interactive
# placement.
fig = pl.figure()
pl.pcolormesh(X, Y, Z > 0, cmap=pl.cm.Pastel2)
pl.contour(X, Y, Z, colors=['k', 'k', 'k'],
linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
pl.title('Decision for '+method_name)
#plt.clabel(CS, inline=1, fontsize=10)
for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]:
pl.scatter(*data[distr_idx==idx,:].T, c=c, alpha=0.7, marker=marker)
pl.show()
for (kern_name, kern) in [("Linear", LinearKernel()),
("Student-t", StudentKernel(0.1,10)),
("Gauss", GaussianKernel(0.1))
]:
(sim, dec) = kernel_mean_inner_prod_classification(data[distr_idx==1,:], data[distr_idx==0,:], kern)
plot_with_contour(data, distr_idx, sim, 'Inner Product classif. '+kern_name, pl = plt)
"""
Explanation: Remarkably, all positive definite functions are inner products in some feature space.
Theorem Let $\IS$ be a nonempty set and let $\PDK:\IS\times\IS \to \Reals$, called a kernel. The following two conditions are equivalent:
* $\PDK$ is symmetric and positive semi-definite (psd), i.e. for all $x_1, \dots, x_m \in \IS$ the matrix $\Gram$ with entries $\Gram_{i,j} = \PDK(x_i, x_j)$ is symmetric psd
* there exists a map $\FM: \IS \to \RKHS_\FM$ to a Hilbert space $\RKHS_\FM$ such that $$\PDK(x_i, x_j) = \prodDot{\FM(x_i)}{\FM(x_j)}_\RKHS$$
$\FM$ is called the Feature Map and $\RKHS_\FM$ the feature space.
In other words, $\PDK$ computes the inner product in some $\RKHS_\FM$. We furthermore endow the space with the norm induced by the dot product $\|\cdot\|_\PDK$. From the second condition, it is easy to construct $\PDK$ given $\FM$. A general construction for $\FM$ given $\PDK$ is not as trivial but still elementary.
Construction of the canonical feature map (Aronszajn map)
We give the canonical construction of $\FM$ from $\PDK$, together with a definition of the inner product in the new space. In particular, the feature for each $x \in \IS$ will be a function from $\IS$ to $\Reals$.
$$\FM:\IS \to \Reals^\IS\\
\FM(x) = \PDK(\cdot, x)$$
Thus for the linear kernel $\PDK(x,y)=\prodDot{x}{y}$ we have $\FM(x) = \prodDot{\cdot}{x}$ and for the gaussian kernel $\PDK(x,y)=\exp\left(-0.5{\|x-y\|^2}/{\sigma^2}\right)$ we have $\FM(x) = \exp\left(-0.5{\|\cdot -x \|^2}/{\sigma^2}\right)$.
Now $\RKHS$ is the closure of $\FM(\IS)$ wrt. linear combinations of its elements:
$$\RKHS = \left\{f: f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i) \right\} = \text{span}(\FM(\IS))$$
where $m \in \Nats, a_i \in \Reals, x_i \in \IS$. This makes $\RKHS$ a vector space over $\Reals$.
For $f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i)$ and $g(\cdot)=\sum_{j=1}^{m'} b_j \PDK(\cdot, x'_j)$ we define the inner product in $\RKHS$ as
$$\prodDot{f}{g} = \sum_{i=1}^m \sum_{j=1}^{m'} a_i b_j \PDK(x_i, x'_j)$$
In particular, for $f(\cdot) = \PDK(\cdot,x), g(\cdot) = \PDK(\cdot,x')$, we have $\prodDot{f}{g} = \prodDot{ \PDK(\cdot,x)}{ \PDK(\cdot,x')}=\PDK(x,x')$. This is called the reproducing property of the kernel of this particular $\RKHS$.
Obviously $\RKHS$ with this inner product satisfies all conditions for a hilbert space: the inner product is
* positive definite
* linear in its first argument
* symmetric
which is why $\RKHS$ is called a Reproducing Kernel Hilbert Space (RKHS).
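As a small standalone illustration of this construction (plain NumPy, independent of the kernel classes above), the canonical feature map and the reproducing property can be written directly as functions:
python
import numpy as np

def gauss_k(x, y, sigma=1.0):
    return np.exp(-0.5 * np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)

def feature_map(x, k=gauss_k):
    # Aronszajn map: Phi(x) = k(., x), i.e. a real-valued function on the input space
    return lambda t: k(t, x)

x, xp = np.array([0.0, 1.0]), np.array([1.0, -1.0])
g = feature_map(xp)
# Reproducing property: evaluating Phi(x') at x gives <Phi(x), Phi(x')> = k(x, x')
print(g(x), gauss_k(x, xp))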
Inner product classification algorithm is equivalent to a classification with KDEs
The naive classification algorithm we outlined earlier is actually equivalent to a simple classification algorithm using KDEs. For concreteness, let $\PDK(x,x') = (2\pi)^{-N/2}\left|\Sigma\right|^{-1/2}\exp\left(-0.5(x-x')^{\top}\Sigma^{-1}(x-x')\right)$.
Then the mean in feature space of data from distribution $c$ with the canonical feature map is
$$\mu_c = \frac{1}{N_c} \sum_{l_i = c} \FM(x_i) = \frac{1}{N_c} \sum_{l_i = c} \PDK(x_i, \cdot) = \frac{1}{N_c} \sum_{l_i = c} (2\pi)^{-N/2}\left|\Sigma\right|^{-1/2}\exp\left(-0.5(\cdot-x_i)^{\top}\Sigma^{-1}(\cdot-x_i)\right)$$
which is just a KDE of the density $p_c$ using Gaussian kernels with parameter $\Sigma$. For a test point $y$ that we want to classify, its feature is just $\PDK(y,\cdot) = (2\pi)^{-N/2}\left|\Sigma\right|^{-1/2}\exp\left(-0.5(y-\cdot)^{\top}\Sigma^{-1}(y-\cdot)\right)$. Its inner product with the class mean is just the evaluation of the KDE at $y$ (because of the reproducing property). Thus each point is classified as belonging to the class for which the KDE estimate assigns the highest probability to $y$.
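This equivalence is easy to check numerically with a small standalone sketch (plain NumPy, using $\Sigma = \sigma^2 I$ in two dimensions):
python
import numpy as np

def gauss_dens(X, y, sigma=1.0):
    # N(y; x_i, sigma^2 I) evaluated for every row x_i of X
    d = X - y
    norm_const = (2 * np.pi * sigma ** 2) ** (-X.shape[1] / 2)
    return norm_const * np.exp(-0.5 * np.sum(d * d, axis=1) / sigma ** 2)

samps = np.random.randn(50, 2)           # points from one class
y = np.array([0.3, -0.2])                # test point to classify
# <Phi(y), mu_c> = (1/N_c) sum_i k(x_i, y), which is exactly the KDE of p_c evaluated at y
print(gauss_dens(samps, y).mean())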
End of explanation
"""
data, distr_idx = sklearn.datasets.make_circles(n_samples=400, factor=.3, noise=.05)
for (kern_name, kern) in [("Linear", LinearKernel()),
("Stud", StudentKernel(0.1,10)),
("Gauss1", GaussianKernel(0.1)),
]:
(sim, dec) = kernel_mean_inner_prod_classification(data[distr_idx==1,:], data[distr_idx==0,:], kern)
plot_with_contour(data, distr_idx, sim, 'Inner Product classif. '+kern_name, pl = plt)
"""
Explanation: Obviously, the linear kernel might be enough already for this simple dataset. Another interesting observation however is that the Student-t based kernel is more robust to outliers of the datasets and yields a lower variance classification algorithm as compared to using a Gaussian kernel. This is to be expected, given the fatter tails of the Student-t. Now lets look at a dataset that is not linearly separable.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/a179627fc73cce931ace004638e9685c/read_inverse.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator
from mne.viz import set_3d_view
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
inv_fname = data_path
inv_fname += '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(inv_fname)
print("Method: %s" % inv['methods'])
print("fMRI prior: %s" % inv['fmri_prior'])
print("Number of sources: %s" % inv['nsource'])
print("Number of channels: %s" % inv['nchan'])
src = inv['src'] # get the source space
# Get access to the triangulation of the cortex
print("Number of vertices on the left hemisphere: %d" % len(src[0]['rr']))
print("Number of triangles on left hemisphere: %d" % len(src[0]['use_tris']))
print("Number of vertices on the right hemisphere: %d" % len(src[1]['rr']))
print("Number of triangles on right hemisphere: %d" % len(src[1]['use_tris']))
"""
Explanation: Reading an inverse operator
The inverse operator's source space is shown in 3D.
End of explanation
"""
fig = mne.viz.plot_alignment(subject='sample', subjects_dir=subjects_dir,
trans=fname_trans, surfaces='white', src=src)
set_3d_view(fig, focalpoint=(0., 0., 0.06))
"""
Explanation: Show result on 3D source space
End of explanation
"""
|
weikang9009/pysal | notebooks/explore/pointpats/marks.ipynb | bsd-3-clause | from pysal.explore.pointpats import PoissonPointProcess, PoissonClusterPointProcess, Window, poly_from_bbox, PointPattern
import pysal.lib as ps
from pysal.lib.cg import shapely_ext
%matplotlib inline
import matplotlib.pyplot as plt
# open the virginia polygon shapefile
va = ps.io.open(ps.examples.get_path("virginia.shp"))
polys = [shp for shp in va]
# Create the exterior polygons for VA from the union of the county shapes
state = shapely_ext.cascaded_union(polys)
# create window from virginia state boundary
window = Window(state.parts)
window.bbox
window.centroid
samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=False)
csr = PointPattern(samples.realizations[0])
cx, cy = window.centroid
cx
cy
west = csr.points.x < cx
south = csr.points.y < cy
east = 1 - west
north = 1 - south
"""
Explanation: Marked Point Pattern
In addition to the unmarked point pattern, non-binary attributes might be associated with each point, leading to the so-called marked point pattern. The characteristics of a marked point pattern are:
Location pattern of the events are of interest
Stochastic attribute attached to the events is of interest
An unmarked point pattern can be converted into a marked point pattern using the method add_marks, while the method explode decomposes a marked point pattern into a sequence of unmarked point patterns. Both methods belong to the class PointPattern.
End of explanation
"""
quad = 1 * east * north + 2 * west * north + 3 * west * south + 4 * east * south
type(quad)
quad
"""
Explanation: Create an attribute named quad which has a value for each event.
End of explanation
"""
csr.add_marks([quad], mark_names=['quad'])
csr.df
"""
Explanation: Attach the attribute quad to the point pattern
End of explanation
"""
csr_q = csr.explode('quad')
len(csr_q)
csr
csr.summary()
"""
Explanation: Explode a marked point pattern into a sequence of individual point patterns. Since the mark quad has 4 unique values, the sequence will be of length 4.
End of explanation
"""
plt.xlim?
plt.xlim()
for ppn in csr_q:
ppn.plot()
"""
Explanation: Plot the 4 individual sequences
End of explanation
"""
x0, y0, x1, y1 = csr.mbb
ylim = (y0, y1)
xlim = (x0, x1)
for ppn in csr_q:
ppn.plot()
plt.xlim(xlim)
plt.ylim(ylim)
"""
Explanation: Plot the 4 unmarked point patterns using the same axes for a convenient comparison of locations
End of explanation
"""
|
tobsecret/Golub_dataset_Class_Prediction_Python | Golub_dataset_Class_Prediction.ipynb | mit | test = pd.read_csv('data_set_ALL_AML_independent.tsv', sep='\t', header=0, index_col=0)
train = pd.read_csv('data_set_ALL_AML_train.tsv', sep='\t', header=0, index_col=0)
train.drop(train.columns[len(train.columns)-1], axis=1, inplace=True)
#The sample_table contains the labels AML/ ALL
sample_table = pd.read_csv('table_ALL_AML_samples.txt', sep='\t')
sample_table['Idealized'] = 0
sample_table['Idealized'].loc[sample_table['ALL/AML']=='ALL'] = 1
"""
Explanation: Reading in the datasets
We'll read the test dataset into the variable test and the training dataset into the variable train.
End of explanation
"""
norm_train = train.iloc[:,1::2].apply(lambda x: (x - np.min(x))/(np.max(x)-np.min(x)), axis=1)
norm_test = test.iloc[:,1::2].apply(lambda x: (x - np.min(x))/(np.max(x)-np.min(x)), axis=1)
norm_test = norm_test.reindex_axis(sorted(norm_test.columns, key=int), axis=1)
# Sorting the columns numerically
norm_train = norm_train.reindex_axis(sorted(list(norm_train.columns.values), key=int), axis=1)
"""
Explanation: Normalizing the Data
We normalize each element x by subtracting the row minimum and dividing by (row maximum minus row minimum).
x = (x - row_min) / (row_max - row_min)
End of explanation
"""
norm_train_transp = norm_train.T
neigh = KNeighborsClassifier()
neigh.fit(norm_train_transp, sample_table.iloc[:38,1:2].values.reshape(38))
"""
Explanation: K-nearest neighbours
We will try out various different selections of the data and train a separate model for each.
Let's start with the whole data set.
End of explanation
"""
neigh.score(norm_train.T, sample_table.iloc[:38,1:2].values.reshape(38))
"""
Explanation: Let's score on the training data set to see whether our model is reasonably trained.
End of explanation
"""
neigh.score(norm_test.T, sample_table.iloc[38:, 1:2].values.reshape(34))
results = results.append(pd.Series({'Classifier':'K-nearest Neighbours', 'Dataset':'Normalized',
'Training_Error':neigh.score(norm_train.T, sample_table.iloc[:38,1:2].values.reshape(38)),
'Testing_Error':neigh.score(norm_test.T, sample_table.iloc[38:, 1:2].values.reshape(34))}),
ignore_index=True)
"""
Explanation: Let's score on the test set:
End of explanation
"""
bagging_neigh = BaggingClassifier(KNeighborsClassifier(),
max_samples=0.5, max_features=0.5)
bagging_neigh.fit(norm_train.T, sample_table.iloc[:38,1:2].values.reshape(38))
#
results = results.append(pd.Series({'Classifier':'Bagging Ensemble of K-nearest Neighbours', 'Dataset':'Normalized',
'Training_Error':bagging_neigh.score(norm_train.T, sample_table.iloc[:38,1:2].values.reshape(38)),
'Testing_Error':bagging_neigh.score(norm_test.T, sample_table.iloc[38:, 1:2].values.reshape(34))})
,ignore_index=True)
"""
Explanation: K-nearest Neighbours Bagging Ensemble
In the following lines we'll try out the BaggingClassifier from scikit-learn and we will populate it with a K-nearest Neighbours Classifier.
End of explanation
"""
non_negative_train = train.iloc[:,1::2].clip_lower(1)
non_negative_train
non_negative_test = test.iloc[:, 1::2].clip_lower(1)
"""
Explanation: Non-normalized Data
Let's try the same things with only a subset of genes and no normalization besides clipping all values below 1 up to 1.
End of explanation
"""
#The following line computes the independent t-tests between the AML and ALL samples for every gene.
non_negative_train_ttest_pvals = non_negative_train.apply(lambda x: 1-ttest_ind(x.iloc[:26].values.reshape(26,1), x.iloc[26:].values.reshape(12,1), equal_var=False).pvalue , axis=1).iloc[:,:1]
"""
Explanation: To separate the rubbish genes from the more predictive genes, we are doing independent sample Welch's t-tests for all genes, comparing the value for each gene in the AML samples to that value for the same gene in the ALL samples.
End of explanation
"""
sns.violinplot( y="1", data=non_negative_train_ttest_pvals);
"""
Explanation: The violin plot below shows the distribution of the results of the t-tests. The values plotted are 1-pvalue for all the genes.
One can see that most genes actually have pretty good p-values.
End of explanation
"""
top3000_ttest_mask = non_negative_train_ttest_pvals.sort_values('1', axis=0, ascending=False).head(3000).index
top3000_non_negative_train = non_negative_train.loc[top3000_ttest_mask]
top3000_non_negative_test = non_negative_test.loc[top3000_ttest_mask]
topneigh = KNeighborsClassifier()
topneigh.fit(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38))
print('Training Prediction Performance: ',
topneigh.score(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38)),
' Testing Prediction Performance: ',
topneigh.score(top3000_non_negative_test.T, sample_table.iloc[38:, 1:2].values.reshape(34)))
results = results.append(pd.Series({'Classifier':'K-nearest Neighbours', 'Dataset':'Non-negative',
'Training_Error':topneigh.score(top3000_non_negative_train.T, sample_table.iloc[:38, 1:2].values.reshape(38)),
'Testing_Error' :topneigh.score(top3000_non_negative_test.T, sample_table.iloc[38:, 1:2].values.reshape(34))}),
ignore_index=True)
top_bagging_neigh = BaggingClassifier(KNeighborsClassifier(),
max_samples=0.5, max_features=0.5)
top_bagging_neigh.fit(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38))
print('Training Prediction Performance: ',
top_bagging_neigh.score(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38)),
' Testing Prediction Performance: ',
top_bagging_neigh.score(top3000_non_negative_test.T, sample_table.iloc[38:, 1:2].values.reshape(34)))
results = results.append(pd.Series({'Classifier':'Bagging Ensemble of K-nearest Neighbours', 'Dataset':'Non-negative',
'Training_Error':top_bagging_neigh.score(top3000_non_negative_train.T, sample_table.iloc[:38, 1:2].values.reshape(38)),
'Testing_Error' :top_bagging_neigh.score(top3000_non_negative_test.T, sample_table.iloc[38:, 1:2].values.reshape(34))}),
ignore_index=True)
top_bagging_tree = BaggingClassifier(max_samples=0.5, max_features=0.5)
top_bagging_tree.fit(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38))
print('Training Prediction Performance: ',
top_bagging_tree.score(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38)),
' Testing Prediction Performance: ',
top_bagging_tree.score(top3000_non_negative_test.T, sample_table.iloc[38:, 1:2].values.reshape(34)))
results = results.append(pd.Series({'Classifier':'Bagging Ensemble of Decision Trees', 'Dataset':'Non-negative',
'Training_Error': top_bagging_tree.score(top3000_non_negative_train.T, sample_table.iloc[:38,1:2].values.reshape(38)),
'Testing_Error' :top_bagging_tree.score(top3000_non_negative_test.T, sample_table.iloc[38:, 1:2].values.reshape(34))}),
ignore_index=True)
results = results.iloc[1:,:]
results['Training_Error'] = 1- results['Training_Error']
results['Testing_Error'] = 1-results['Testing_Error']
results
"""
Explanation: Now we'll select only the top 3000 genes based on the 1-pvalue scores, the lowest genes in this selection having a value of around 0.8
End of explanation
"""
ax = sns.factorplot(x='Dataset', y='Training_Error', kind='bar', hue='Classifier', data=results)
ax = sns.factorplot(x='Dataset', y='Testing_Error', kind='bar', hue='Classifier', data=results)
"""
Explanation: Plotting the Results
Below are plots of the different combinations of data sets and classifiers.
The data sets are non-negative, i.e. negative expression values are set to 1 and only the top 3000 genes were considered, and normalized, i.e. for each expression value x:
x --> (x - row_min) / (row_max - row_min)
End of explanation
"""
|
sorig/shogun | doc/ipython-notebooks/clustering/GMM.ipynb | bsd-3-clause | %pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all Shogun classes
from shogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
"""
Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance
"""
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
"""
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - heiko.strathmann@gmail.com - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the GMM framework of the Google summer of code 2011 project of Alesis Novik - https://github.com/alesis
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replaced by more intuitive and efficient ones. This notebook contains some python magic at some places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbitrary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ being generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x|s=i)=\sum_{i=1}^k \pi_i p(x|s=i)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:=\{\boldsymbol{\mu}_i, \Sigma_i\}_{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(\boldsymbol{\mu}_i,\Sigma_i)
$$
Note that any set of probability distributions on the same domain can be combined into such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
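Before turning to EM and Shogun, the GMM density above is simple to evaluate directly; here is a tiny NumPy/SciPy sketch with made-up parameters (purely illustrative, independent of the Shogun model built below):
python
import numpy as np
from scipy.stats import multivariate_normal

pis = [0.5, 0.3, 0.2]
mus = [np.array([-5., -4.]), np.array([7., 3.]), np.zeros(2)]
Sigmas = [np.eye(2), np.eye(2) * 2., np.eye(2) * 0.5]

def mixture_density(x):
    # p(x|theta) = sum_i pi_i N(x | mu_i, Sigma_i)
    return sum(pi * multivariate_normal(mu, S).pdf(x) for pi, mu, S in zip(pis, mus, Sigmas))

print(mixture_density(np.array([0., 0.])))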
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent models and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with with some tricks as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the parameters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood and that its stationary points (i.e. points where neither the E-step nor the M-step produces changes) correspond to local maxima of the model's likelihood. See the references for more details on the procedure and on how to obtain a lower bound on the log-likelihood. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as a step size, which is both good and bad, since convergence can be slow.
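To make the two steps concrete, here is a minimal NumPy/SciPy sketch of EM for a one-dimensional GMM. It is purely illustrative and independent of Shogun; the data, the number of components and the initialisation are made-up assumptions.
```
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# made-up 1-D data drawn from two clusters
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(2, 0.5, 300)])

k = 2
pi = np.full(k, 1.0 / k)                    # mixture weights
mu = rng.choice(data, k, replace=False)     # initial means
sigma = np.full(k, data.std())              # initial standard deviations

for _ in range(50):
    # E-step: posterior responsibilities of each component for each point
    resp = np.stack([p * norm.pdf(data, m, s) for p, m, s in zip(pi, mu, sigma)], axis=1)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: maximum likelihood updates, holding the responsibilities fixed
    nk = resp.sum(axis=0)
    pi = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi, mu, sigma)
```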
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference between knowing and not knowing the latent variable that indicates the component.
End of explanation
"""
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[.6, 3]])
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.put('m_coefficients', weights)
"""
Explanation: Set up the model in Shogun
End of explanation
"""
# now sample from each component separately first, then from the joint model
hold(True)
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.put('m_coefficients', w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% ellipse for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
hold(False)
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# restore the original mixture coefficients since we used a hack to sample from each component
gmm.put('m_coefficients', weights)
"""
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> class in Shogun allows sampling from it (if implemented).
End of explanation
"""
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
"""
Explanation: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> interface, including the mixture.
End of explanation
"""
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
"""
Explanation: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: Someone gives you a bunch of data with no labels attached to it at all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
"""
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
feat=features(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(feat)
# learn GMM
gmm_est.train_em()
return gmm_est
"""
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
"""
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
"""
Explanation: So far so good; now let's plot the density of this GMM using the code from above.
End of explanation
"""
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color="blue")
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
"""
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three components give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html">KMeans</a> is used to initialise the cluster centres if not done by hand, using a random cluster initialisation) that the upper two Gaussians are grouped together; re-run a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We will do this below, where it might well happen that all runs produce the same or different results - no guarantees.
Note that it is easily possible to initialise EM by specifying the parameters of the mixture components, as was done to create the original model above.
One way to decide which of multiple converged EM instances to use is to simply compute many of them (with different initialisations) and then choose the one with the largest likelihood. WARNING: Do not select the number of components like this, as the model will overfit.
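A rough sketch of this restart strategy, reusing the estimate_gmm helper defined in this notebook. As in the plotting code above, train_em is called once more only to read off the likelihood, which is inefficient (see the TODO there); this is illustrative only.
```
# pick the best of several random EM restarts by likelihood (illustrative sketch)
best_gmm, best_likelihood = None, float("-inf")
for run in range(5):
    candidate = estimate_gmm(X, num_components)
    likelihood = candidate.train_em()   # hack to obtain the likelihood, see TODO above
    if likelihood > best_likelihood:
        best_gmm, best_likelihood = candidate, likelihood
print("best likelihood over restarts: %.2f" % best_likelihood)
```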
End of explanation
"""
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=asarray([argmax(gmm_est.cluster(x)[:gmm.get_num_components()]) for x in X])
# visualise points by cluster
hold(True)
for i in range(gmm.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
hold(False)
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
"""
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
"""
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
hold(True)
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
hold(False)
title("Data coloured by likelihood for component %d" % comp_idx)
"""
Explanation: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen. This doesn't allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into account "draws" in the decision, i.e. cases where the probabilities for two different clusters are (almost) equally large.
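For example, a soft assignment for a single point can be sketched as follows. This assumes, as in the argmax used above, that the first num_components entries returned by cluster() are per-component log-likelihood scores; it is a sketch, not part of the official interface.
```
# illustrative sketch: turn per-component scores for one point into soft assignment weights
x = X[0]
scores = asarray(gmm.cluster(x)[:gmm.get_num_components()])
resp = exp(scores - scores.max())
resp /= resp.sum()
print(resp)   # two nearly equal entries indicate a "draw" between clusters
```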
Below we plot all points, coloured by their likelihood under each component.
End of explanation
"""
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
hold(True)
pcolor(Xs,Ys,D_est)
hold(False)
"""
Explanation: Note how the lower-left and middle clusters overlap in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation
"""
|
jseabold/statsmodels | examples/notebooks/metaanalysis1.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, optimize
from statsmodels.regression.linear_model import WLS
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.stats.meta_analysis import (
effectsize_smd, effectsize_2proportions, combine_effects,
_fit_tau_iterative, _fit_tau_mm, _fit_tau_iter_mm)
# increase line length for pandas
pd.set_option('display.width', 100)
"""
Explanation: Meta-Analysis in statsmodels
Statsmodels includes basic methods for meta-analysis. This notebook illustrates the current usage.
Status: The results have been verified against R meta and metafor packages. However, the API is still experimental and will still change. Some options for additional methods that are available in R meta and metafor are missing.
The support for meta-analysis has 3 parts:
effect size functions: this currently includes
effectsize_smd computes effect sizes and their standard errors for the standardized mean difference,
effectsize_2proportions computes effect sizes for comparing two independent proportions using risk difference, (log) risk ratio, (log) odds-ratio or arcsine square root transformation
The combine_effects function computes fixed and random effects estimates for the overall mean or effect. The returned results instance includes a forest plot function.
helper functions to estimate the random effect variance, tau-squared
The estimate of the overall effect size in combine_effects can also be performed using WLS or GLM with var_weights.
Finally, the meta-analysis functions currently do not include the Mantel-Haenszel method. However, the fixed effects results can be computed directly using StratifiedTable as illustrated below.
End of explanation
"""
data = [
["Carroll", 94, 22,60,92, 20,60],
["Grant", 98, 21,65, 92,22, 65],
["Peck", 98, 28, 40,88 ,26, 40],
["Donat", 94,19, 200, 82,17, 200],
["Stewart", 98, 21,50, 88,22 , 45],
["Young", 96,21,85, 92 ,22, 85]]
colnames = ["study","mean_t","sd_t","n_t","mean_c","sd_c","n_c"]
rownames = [i[0] for i in data]
dframe1 = pd.DataFrame(data, columns=colnames)
rownames
mean2, sd2, nobs2, mean1, sd1, nobs1 = np.asarray(dframe1[["mean_t","sd_t","n_t","mean_c","sd_c","n_c"]]).T
rownames = dframe1["study"]
rownames.tolist()
np.array(nobs1 + nobs2)
"""
Explanation: Example
End of explanation
"""
eff, var_eff = effectsize_smd(mean2, sd2, nobs2, mean1, sd1, nobs1)
"""
Explanation: Estimate the effect size: standardized mean difference
End of explanation
"""
res3 = combine_effects(eff, var_eff, method_re="chi2", use_t=True, row_names=rownames)
# TODO: we still need better information about conf_int of individual samples
# We don't have enough information in the model for individual confidence intervals
# if those are not based on normal distribution.
res3.conf_int_samples(nobs=np.array(nobs1 + nobs2))
print(res3.summary_frame())
res3.cache_ci
res3.method_re
fig = res3.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
res3 = combine_effects(eff, var_eff, method_re="chi2", use_t=False, row_names=rownames)
# TODO: we still need better information about conf_int of individual samples
# We don't have enough information in the model for individual confidence intervals
# if those are not based on normal distribution.
res3.conf_int_samples(nobs=np.array(nobs1 + nobs2))
print(res3.summary_frame())
"""
Explanation: Using one-step chi2, DerSimonian-Laird estimate for random effects variance tau
Method option for random effect method_re="chi2" or method_re="dl", both names are accepted.
This is commonly referred to as the DerSimonian-Laird method; it is a moment estimator based on the Pearson chi2 statistic from the fixed effects estimate.
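For reference, the moment estimator can be sketched in a few lines of NumPy. This mirrors the textbook DerSimonian-Laird formula and is not the statsmodels implementation; use combine_effects or _fit_tau_mm (imported above) for real work.
```
import numpy as np

def tau2_dersimonian_laird(eff, var_eff):
    # moment (DerSimonian-Laird) estimate of the between-study variance tau^2
    w = 1.0 / np.asarray(var_eff, dtype=float)
    eff = np.asarray(eff, dtype=float)
    mean_fe = np.sum(w * eff) / np.sum(w)      # fixed-effects (inverse-variance) mean
    q = np.sum(w * (eff - mean_fe) ** 2)       # Cochran's Q (Pearson chi2 of the FE fit)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(eff) - 1)) / denom)

tau2_dersimonian_laird(eff, var_eff)
```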
End of explanation
"""
res4 = combine_effects(eff, var_eff, method_re="iterated", use_t=False, row_names=rownames)
res4_df = res4.summary_frame()
print("method RE:", res4.method_re)
print(res4.summary_frame())
fig = res4.plot_forest()
"""
Explanation: Using iterated, Paule-Mandel estimate for random effects variance tau
The method commonly referred to as the Paule-Mandel estimate is a method-of-moments estimate for the random effects variance that iterates between the mean and variance estimates until convergence.
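A rough sketch of that idea, solving for the tau^2 at which the generalised Q statistic equals its expectation k - 1. This is illustrative only; the upper bracket is an arbitrary assumption, and statsmodels' _fit_tau_iter_mm (imported above) is the real implementation.
```
import numpy as np
from scipy import optimize

def tau2_paule_mandel(eff, var_eff):
    eff = np.asarray(eff, dtype=float)
    var_eff = np.asarray(var_eff, dtype=float)
    k = len(eff)

    def excess_q(tau2):
        w = 1.0 / (var_eff + tau2)
        mean_re = np.sum(w * eff) / np.sum(w)   # weighted mean given the current tau^2
        return np.sum(w * (eff - mean_re) ** 2) - (k - 1)

    if excess_q(0.0) <= 0:                      # no excess heterogeneity
        return 0.0
    return optimize.brentq(excess_q, 0.0, 100 * var_eff.max())

tau2_paule_mandel(eff, var_eff)
```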
End of explanation
"""
eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70,
62.84, 65.90])
var_eff = np.array([0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625,
0.0676, 0.0225, 1.8225])
rownames = ['PTB', 'NMi', 'NIMC', 'KRISS', 'LGC', 'NRC', 'IRMM', 'NIST', 'LNE']
res2_DL = combine_effects(eff, var_eff, method_re="dl", use_t=True, row_names=rownames)
print("method RE:", res2_DL.method_re)
print(res2_DL.summary_frame())
fig = res2_DL.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
res2_PM = combine_effects(eff, var_eff, method_re="pm", use_t=True, row_names=rownames)
print("method RE:", res2_PM.method_re)
print(res2_PM.summary_frame())
fig = res2_PM.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
"""
Explanation: Example Kacker interlaboratory mean
In this example the effect size is the mean of measurements in a lab. We combine the estimates from several labs to estimate an overall average.
End of explanation
"""
import io
ss = """\
study,nei,nci,e1i,c1i,e2i,c2i,e3i,c3i,e4i,c4i
1,19,22,16.0,20.0,11,12,4.0,8.0,4,3
2,34,35,22.0,22.0,18,12,15.0,8.0,15,6
3,72,68,44.0,40.0,21,15,10.0,3.0,3,0
4,22,20,19.0,12.0,14,5,5.0,4.0,2,3
5,70,32,62.0,27.0,42,13,26.0,6.0,15,5
6,183,94,130.0,65.0,80,33,47.0,14.0,30,11
7,26,50,24.0,30.0,13,18,5.0,10.0,3,9
8,61,55,51.0,44.0,37,30,19.0,19.0,11,15
9,36,25,30.0,17.0,23,12,13.0,4.0,10,4
10,45,35,43.0,35.0,19,14,8.0,4.0,6,0
11,246,208,169.0,139.0,106,76,67.0,42.0,51,35
12,386,141,279.0,97.0,170,46,97.0,21.0,73,8
13,59,32,56.0,30.0,34,17,21.0,9.0,20,7
14,45,15,42.0,10.0,18,3,9.0,1.0,9,1
15,14,18,14.0,18.0,13,14,12.0,13.0,9,12
16,26,19,21.0,15.0,12,10,6.0,4.0,5,1
17,74,75,,,42,40,,,23,30"""
df3 = pd.read_csv(io.StringIO(ss))
df_12y = df3[["e2i", "nei", "c2i", "nci"]]
# TODO: currently 1 is reference, switch labels
count1, nobs1, count2, nobs2 = df_12y.values.T
dta = df_12y.values.T
eff, var_eff = effectsize_2proportions(*dta, statistic="rd")
eff, var_eff
res5 = combine_effects(eff, var_eff, method_re="iterated", use_t=False)#, row_names=rownames)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print("RE variance tau2:", res5.tau2)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
"""
Explanation: Meta-analysis of proportions
In the following example the random effect variance tau is estimated to be zero.
I then change two counts in the data, so the second example has random effects variance greater than zero.
End of explanation
"""
dta_c = dta.copy()
dta_c.T[0, 0] = 18
dta_c.T[1, 0] = 22
dta_c.T
eff, var_eff = effectsize_2proportions(*dta_c, statistic="rd")
res5 = combine_effects(eff, var_eff, method_re="iterated", use_t=False)#, row_names=rownames)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
res5 = combine_effects(eff, var_eff, method_re="chi2", use_t=False)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
"""
Explanation: changing data to have positive random effects variance
End of explanation
"""
from statsmodels.genmod.generalized_linear_model import GLM
eff, var_eff = effectsize_2proportions(*dta_c, statistic="or")
res = combine_effects(eff, var_eff, method_re="chi2", use_t=False)
res_frame = res.summary_frame()
print(res_frame.iloc[-4:])
"""
Explanation: Replicate fixed effect analysis using GLM with var_weights
combine_effects computes weighted average estimates which can be replicated using GLM with var_weights or with WLS.
The scale option in GLM.fit can be used to replicate the fixed effects meta-analysis, either with fixed scale or with HKSJ/WLS scale.
End of explanation
"""
weights = 1 / var_eff
mod_glm = GLM(eff, np.ones(len(eff)),
var_weights=weights)
res_glm = mod_glm.fit(scale=1.)
print(res_glm.summary().tables[1])
# check results
res_glm.scale, res_glm.conf_int() - res_frame.loc["fixed effect", ["ci_low", "ci_upp"]].values
"""
Explanation: We need to fix scale=1 in order to replicate standard errors for the usual meta-analysis.
End of explanation
"""
res_glm = mod_glm.fit(scale="x2")
print(res_glm.summary().tables[1])
# check results
res_glm.scale, res_glm.conf_int() - res_frame.loc["fixed effect", ["ci_low", "ci_upp"]].values
"""
Explanation: Using the HKSJ variance adjustment in meta-analysis is equivalent to estimating the scale using Pearson chi2, which is also the default for the Gaussian family.
End of explanation
"""
t, nt, c, nc = dta_c
counts = np.column_stack([t, nt - t, c, nc - c])
ctables = counts.T.reshape(2, 2, -1)
ctables[:, :, 0]
counts[0]
dta_c.T[0]
import statsmodels.stats.api as smstats
st = smstats.StratifiedTable(ctables.astype(np.float64))
"""
Explanation: Mantel-Haenszel odds-ratio using contingency tables
The fixed effect for the log-odds-ratio using the Mantel-Haenszel method can be directly computed using StratifiedTable.
We need to create a 2 x 2 x k contingency table to be used with StratifiedTable.
End of explanation
"""
st.logodds_pooled, st.logodds_pooled - 0.4428186730553189 # R meta
st.logodds_pooled_se, st.logodds_pooled_se - 0.08928560091027186 # R meta
st.logodds_pooled_confint()
print(st.test_equal_odds())
print(st.test_null_odds())
"""
Explanation: compare pooled log-odds-ratio and standard error to R meta package
End of explanation
"""
ctables.sum(1)
nt, nc
"""
Explanation: check conversion to stratified contingency table
Row sums of each table are the sample sizes for treatment and control experiments
End of explanation
"""
print(st.summary())
"""
Explanation: Results from R meta package
```
res_mb_hk = metabin(e2i, nei, c2i, nci, data=dat2, sm="OR", Q.Cochrane=FALSE, method="MH", method.tau="DL", hakn=FALSE, backtransf=FALSE)
res_mb_hk
logOR 95%-CI %W(fixed) %W(random)
1 2.7081 [ 0.5265; 4.8896] 0.3 0.7
2 1.2567 [ 0.2658; 2.2476] 2.1 3.2
3 0.3749 [-0.3911; 1.1410] 5.4 5.4
4 1.6582 [ 0.3245; 2.9920] 0.9 1.8
5 0.7850 [-0.0673; 1.6372] 3.5 4.4
6 0.3617 [-0.1528; 0.8762] 12.1 11.8
7 0.5754 [-0.3861; 1.5368] 3.0 3.4
8 0.2505 [-0.4881; 0.9892] 6.1 5.8
9 0.6506 [-0.3877; 1.6889] 2.5 3.0
10 0.0918 [-0.8067; 0.9903] 4.5 3.9
11 0.2739 [-0.1047; 0.6525] 23.1 21.4
12 0.4858 [ 0.0804; 0.8911] 18.6 18.8
13 0.1823 [-0.6830; 1.0476] 4.6 4.2
14 0.9808 [-0.4178; 2.3795] 1.3 1.6
15 1.3122 [-1.0055; 3.6299] 0.4 0.6
16 -0.2595 [-1.4450; 0.9260] 3.1 2.3
17 0.1384 [-0.5076; 0.7844] 8.5 7.6
Number of studies combined: k = 17
logOR 95%-CI z p-value
Fixed effect model 0.4428 [0.2678; 0.6178] 4.96 < 0.0001
Random effects model 0.4295 [0.2504; 0.6086] 4.70 < 0.0001
Quantifying heterogeneity:
tau^2 = 0.0017 [0.0000; 0.4589]; tau = 0.0410 [0.0000; 0.6774];
I^2 = 1.1% [0.0%; 51.6%]; H = 1.01 [1.00; 1.44]
Test of heterogeneity:
Q d.f. p-value
16.18 16 0.4404
Details on meta-analytical method:
- Mantel-Haenszel method
- DerSimonian-Laird estimator for tau^2
- Jackson method for confidence interval of tau^2 and tau
res_mb_hk$TE.fixed
[1] 0.4428186730553189
res_mb_hk$seTE.fixed
[1] 0.08928560091027186
c(res_mb_hk$lower.fixed, res_mb_hk$upper.fixed)
[1] 0.2678221109331694 0.6178152351774684
```
End of explanation
"""
|
gkc1000/pyscf | pyscf/nao/notebook/AWS/example-ase-siesta-pyscf-ch4-eels.ipynb | apache-2.0 | # import libraries and set up the molecule geometry
from ase.units import Ry, eV, Ha
from ase.calculators.siesta import Siesta
from ase import Atoms
import numpy as np
import matplotlib.pyplot as plt
from ase.build import molecule
CH4 = molecule("CH4")
# visualization of the molecule
from ase.visualize import view
view(CH4, viewer='x3d')
"""
Explanation: Easy Ab initio calculation with ASE-Siesta-Pyscf
No installation necessary: just download a ready-to-go container for any system, or run it in the cloud.
We first import the necessary libraries and define the system using ASE
End of explanation
"""
# enter siesta input and run siesta
siesta = Siesta(
mesh_cutoff=150 * Ry,
basis_set='DZP',
pseudo_qualifier='lda',
energy_shift=(10 * 10**-3) * eV,
fdf_arguments={
'SCFMustConverge': False,
'COOP.Write': True,
'WriteDenchar': True,
'PAO.BasisType': 'split',
'DM.Tolerance': 1e-4,
'DM.MixingWeight': 0.1,
'MaxSCFIterations': 300,
'DM.NumberPulay': 4,
'XML.Write': True})
CH4.set_calculator(siesta)
e = CH4.get_potential_energy()
"""
Explanation: We can then run the DFT calculation using Siesta
End of explanation
"""
# compute the EELS spectra using pyscf-nao
siesta.pyscf_tddft_eels(label="siesta", jcutoff=7, iter_broadening=0.15/Ha,
xc_code='LDA,PZ', tol_loc=1e-6, tol_biloc=1e-7, freq = np.arange(0.0, 15.0, 0.05),
velec=np.array([50.0, 0.0, 0.0]), b = np.array([0.0, 0.0, 5.0]))
# plot the EELS spectra with matplotlib
%matplotlib inline
fig = plt.figure(1)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(siesta.results["freq range"], siesta.results["eel spectra nonin"].imag)
ax2.plot(siesta.results["freq range"], siesta.results["eel spectra inter"].imag)
ax1.set_xlabel(r"$\omega$ (eV)")
ax2.set_xlabel(r"$\omega$ (eV)")
ax1.set_ylabel(r"$\Gamma_{nonin}$ (au)")
ax2.set_ylabel(r"$\Gamma_{inter}$ (au)")
ax1.set_title(r"Non interacting")
ax2.set_title(r"Interacting")
fig.tight_layout()
"""
Explanation: The TDDFT calculations with PySCF-NAO
End of explanation
"""
|
landlab/landlab | notebooks/teaching/geomorphology_exercises/channels_streampower_notebooks/stream_power_channels_class_notebook.ipynb | mit | # Code block 1
import copy
import numpy as np
from matplotlib import pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import (
ChannelProfiler,
ChiFinder,
FlowAccumulator,
SteepnessFinder,
StreamPowerEroder,
)
from landlab.io import write_esri_ascii
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
Quantifying river channel evolution with Landlab
These exercises are based on a project orginally designed by Kelin Whipple at Arizona State University. This notebook was created by Nicole Gasparini at Tulane University.
<hr>
<small>For tutorials on learning Landlab, click here: <a href="https://github.com/landlab/landlab/wiki/Tutorials">https://github.com/landlab/landlab/wiki/Tutorials</a></small>
<hr>
What is this notebook?
This notebook illustrates the evolution of detachment-limited channels in an actively uplifting landscape. The landscape evolves according to the equation:
\begin{equation}
\frac{d z}{d t} = -K_\text{sp} A^{m_{sp}} S^{n_{sp}} + U
\end{equation}
Here, $K_{sp}$ is the erodibility coefficient on fluvial incision, which is thought to be positively correlated with climate wetness, or storminess (this is hard to quantify) and to be negatively correlated with rock strength (again, rock strength is hard to quantify). $m_{sp}$ and $n_{sp}$ are positive exponents, usually thought to have a ratio, $m_{sp}/n_{sp} \approx 0.5$. $A$ is drainage area and $S$ is the slope of steepest descent ($-\frac{dz}{dx}$) where $x$ is horizontal distance (positive in the downslope direction) and $z$ is elevation. (If slope is negative there is no fluvial erosion.) $U$ is an externally-applied rock uplift field.
The fluvial erosion term is also known as the stream power equation. Before using this notebook you should be familiar with this equation from class lectures and reading.
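Setting $\frac{dz}{dt} = 0$ in the equation above gives the predicted steady-state channel slope, $S = (U/K_{sp})^{1/n_{sp}} A^{-m_{sp}/n_{sp}}$. A quick sketch of that prediction follows; the parameter values are just the defaults used later in this notebook, and the drainage areas are arbitrary example values.
```
import numpy as np

# steady-state slope predicted by the stream power equation (dz/dt = 0)
K_sp, m_sp, n_sp, U = 1e-5, 0.5, 1.0, 1e-4   # default values used later in this notebook
A = np.logspace(4, 8, 5)                     # example drainage areas [m^2]
S = (U / K_sp) ** (1.0 / n_sp) * A ** (-m_sp / n_sp)
print(np.column_stack([A, S]))
```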
For a great overview of the stream power equation, see:
Whipple and Tucker, 1999, Dynamics of the stream-power river incision model: Implications for height limits of mountain ranges, landscape response timescales, and research needs, Journal of Geophysical Research.
For some great illustrations of modeling with the sream power equation, see:
Tucker and Whipple, 2002, Topographic outcomes predicted by stream erosion models: Sensitivity analysis and intermodel comparison, Journal of Geophysical Research.
Helpful background on landscape sensitivity to rock uplift rates and patterns can be found here:
Kirby and Whipple, 2012, Expression of active tectonics in erosional landscapes, Journal of Structural Geology.
What will you do?
In this exercise you will modify the code to get a better understanding of how rock uplift rates and patterns and the erodibility coefficient control fluvial channel form.
Start at the top by reading each block of text and sequentially running each code block (shift - enter OR go to the Cell pulldown menu at the top and choose Run Cells).
If you just change one code block and rerun only that code block, only the parts of the code in that code block will be updated. (E.g. if you change parameters but don't reset the code blocks that initialize run time or topography, then these values will not be reset.)
STUDENTS: Questions to answer before starting this assignment.
Answer these questions before running the notebook.
What do you think will happen to total relief (defined as the maximum minus the minimum elevation, here area is fixed) and channel slope at steady state if $K_{sp}$ is uniformly increased?
What do you think will happen to total relief and channel slope at steady state if $U$ is uniformly increased?
How do you think a steady-state landscape with a uniform low rock uplift rate will respond if rock uplift is uniformly increased (relative to a steady base level)? How will channel slopes change through time?
Now on to the code...
First we have to import the parts of Python and Landlab that are needed to run this code. You should not have to change this first code block.
End of explanation
"""
# Code Block 2
number_of_rows = 50 # number of raster cells in vertical direction (y)
number_of_columns = 100 # number of raster cells in horizontal direction (x)
dxy = 200 # side length of a raster model cell, or resolution [m]
# Below is a raster (square cells) grid, with equal width and height
mg1 = RasterModelGrid((number_of_rows, number_of_columns), dxy)
# Set boundary conditions - only the south side of the grid is open.
# Boolean parameters are sent to function in order of
# east, north, west, south.
mg1.set_closed_boundaries_at_grid_edges(True, True, True, False)
"""
Explanation: Make a grid and set boundary conditions.
End of explanation
"""
# Code Block 3
np.random.seed(35) # seed set so our figures are reproducible
mg1_noise = (
np.random.rand(mg1.number_of_nodes) / 1000.0
) # initial noise on elevation grid
# set up the elevation on the grid
z1 = mg1.add_zeros("topographic__elevation", at="node")
z1 += mg1_noise
"""
Explanation: Here we make the initial elevation grid of zeros, with a very small amount of noise added to make a more pleasing network.
End of explanation
"""
# Code Block 4
tmax = 5e5 # time for the model to run [yr] (Original value was 5E5 yr)
dt = 1000 # time step [yr] (Original value was 100 yr)
total_time = 0 # amount of time the landscape has evolved [yr]
# total_time will increase as you keep running the code.
t = np.arange(0, tmax, dt) # each of the time steps that the code will run
"""
Explanation: Set parameters related to time.
End of explanation
"""
# Code Block 5
# Original K_sp value is 1e-5
K_sp = 1.0e-5 # units vary depending on m_sp and n_sp
m_sp = 0.5 # exponent on drainage area in stream power equation
n_sp = 1.0 # exponent on slope in stream power equation
frr = FlowAccumulator(mg1, flow_director="FlowDirectorD8") # initializing flow routing
spr = StreamPowerEroder(
mg1, K_sp=K_sp, m_sp=m_sp, n_sp=n_sp, threshold_sp=0.0
) # initializing stream power incision
theta = m_sp / n_sp
# initialize the component that will calculate channel steepness
sf = SteepnessFinder(mg1, reference_concavity=theta, min_drainage_area=1000.0)
# initialize the component that will calculate the chi index
cf = ChiFinder(
mg1, min_drainage_area=1000.0, reference_concavity=theta, use_true_dx=True
)
"""
Explanation: Set parameters for incision and initialize all of the process components that do the work. We also initialize tools for quantifying the landscape.
End of explanation
"""
# Code Block 6
# uplift_rate [m/yr] (Original value is 0.0001 m/yr)
uplift_rate = np.ones(mg1.number_of_nodes) * 0.0001
"""
Explanation: Initialize rock uplift rate. This will need to be changed later.
End of explanation
"""
# Code Block 7
for ti in t:
z1[mg1.core_nodes] += uplift_rate[mg1.core_nodes] * dt # uplift the landscape
frr.run_one_step() # route flow
spr.run_one_step(dt) # fluvial incision
total_time += dt # update time keeper
print(total_time)
"""
Explanation: Now for the code loop.
Note that you can rerun Code Block 7 many times, and as long as you don't reset the elevation field (Code Block 3), it will take the already evolved landscape and evolve it even more. If you want to change parameters in other code blocks (e.g. Code Block 5 or 6), you can do that too, and as long as you don't reset the elevation field (Code Block 3) the new parameters will apply on the already evolved topography.
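One simple diagnostic for approaching steady state is to compare elevations before and after a single step, since at steady state uplift is balanced by erosion and the elevation field stops changing. The sketch below is not part of the original exercise, and note that running it advances the model by one extra step of dt.
```
# sketch of a steady-state diagnostic using the variables defined in the code blocks above
z_before = z1.copy()
z1[mg1.core_nodes] += uplift_rate[mg1.core_nodes] * dt
frr.run_one_step()
spr.run_one_step(dt)
print("max elevation change over one step:", np.max(np.abs(z1 - z_before)), "m")
```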
End of explanation
"""
# Code Block 8
imshow_grid(
mg1, "topographic__elevation", grid_units=("m", "m"), var_name="Elevation (m)"
)
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
plt.title(title_text)
max_elev = np.max(z1)
print("Maximum elevation is ", np.max(z1))
"""
Explanation: Plot the topography.
End of explanation
"""
# Code Block 9
plt.loglog(
mg1.at_node["drainage_area"][mg1.core_nodes],
mg1.at_node["topographic__steepest_slope"][mg1.core_nodes],
"b.",
)
plt.ylabel("Topographic slope")
plt.xlabel("Drainage area (m^2)")
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
plt.title(title_text)
"""
Explanation: Plot the slope and area data at each point on the landscape (in log-log space). We will only plot the core nodes because the boundary nodes have slopes that are influenced by the boundary conditions.
End of explanation
"""
# Code Block 10
# profile the largest channels, set initially to find the mainstem channel in the three biggest watersheds
# you can change the number of watersheds, or choose to plot all the channel segments in the watershed that
# have drainage area below the threshold (here we have set the threshold to the area of a grid cell).
prf = ChannelProfiler(
mg1,
number_of_watersheds=3,
main_channel_only=True,
minimum_channel_threshold=dxy ** 2,
)
prf.run_one_step()
# plot the elevation as a function of distance upstream
plt.figure(1)
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
prf.plot_profiles(
xlabel="distance upstream (m)", ylabel="elevation (m)", title=title_text
)
# plot the location of the channels in map view
plt.figure(2)
prf.plot_profiles_in_map_view()
# slope-area data in just the profiled channels
plt.figure(3)
for i, outlet_id in enumerate(prf.data_structure):
for j, segment_id in enumerate(prf.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.loglog(
mg1.at_node["drainage_area"][profile_ids],
mg1.at_node["topographic__steepest_slope"][profile_ids],
".",
color=color,
label=label,
)
plt.legend(loc="lower left")
plt.xlabel("drainage area (m^2)")
plt.ylabel("channel slope [m/m]")
title_text = f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m"
plt.title(title_text)
"""
Explanation: It is slightly easier to interpret slope-area data when we look at a single channel, rather than the entire landscape. Below we plot the profile and slope-area data for the three largest channels on the landscape.
End of explanation
"""
# Code Block 11
# calculate the chi index
cf.calculate_chi()
# chi-elevation plots in the profiled channels
plt.figure(4)
for i, outlet_id in enumerate(prf.data_structure):
for j, segment_id in enumerate(prf.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.plot(
mg1.at_node["channel__chi_index"][profile_ids],
mg1.at_node["topographic__elevation"][profile_ids],
color=color,
label=label,
)
plt.xlabel("chi index (m)")
plt.ylabel("elevation (m)")
plt.legend(loc="lower right")
title_text = (
f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}"
)
plt.title(title_text)
# chi map
plt.figure(5)
imshow_grid(
mg1,
"channel__chi_index",
grid_units=("m", "m"),
var_name="Chi index (m)",
cmap="jet",
)
title_text = (
f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}"
)
plt.title(title_text)
"""
Explanation: The chi index is a useful way to quantitatively interpret fluvial channels. Below we plot the chi index in the three largest channels and also a chi map across the entire landscape.
End of explanation
"""
# Code Block 12
# calculate channel steepness
sf.calculate_steepnesses()
# plots of steepness vs. distance upstream in the profiled channels
plt.figure(6)
for i, outlet_id in enumerate(prf.data_structure):
for j, segment_id in enumerate(prf.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
distance_upstream = segment["distances"]
color = segment["color"]
plt.plot(
distance_upstream,
mg1.at_node["channel__steepness_index"][profile_ids],
"x",
color=color,
label=label,
)
plt.xlabel("distance upstream (m)")
plt.ylabel("steepness index")
plt.legend(loc="upper left")
plt.title(f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}")
# channel steepness map
plt.figure(7)
imshow_grid(
mg1,
"channel__steepness_index",
grid_units=("m", "m"),
var_name="Steepness index ",
cmap="jet",
)
plt.title(f"$K_{{sp}}$={K_sp}; $time$={total_time} yr; $dx$={dxy} m; concavity={theta}")
"""
Explanation: The channel steepness index is another useful index to quantify fluvial channels. Below we plot the steepness index in the same three largest channels, and also plot steepness index across the grid.
End of explanation
"""
# Code Block 13
## Below has the name of the file that data will be written to.
## You need to change the name of the file every time that you want
## to write data, otherwise you will get an error.
## This will write to the directory that you are running the code in.
# write_file_name = 'data_file.txt'
## Below is writing elevation data in the ESRI ascii format so that it can
## easily be read into Arc GIS or back into Landlab.
# write_esri_ascii(write_file_name, mg1, 'topographic__elevation')
"""
Explanation: If you have a grid that you want to export, uncomment and edit the appropriate lines below and run the code block.
End of explanation
"""
# Code Block 14
number_of_rows = 50 # number of raster cells in vertical direction (y)
number_of_columns = 100 # number of raster cells in horizontal direction (x)
dxy2 = 200 # side length of a raster model cell, or resolution [m]
# Below is a raster (square cells) grid, with equal width and height
mg2 = RasterModelGrid((number_of_rows, number_of_columns), dxy2)
# Set boundary conditions - only the south side of the grid is open.
# Boolean parameters are sent to function in order of
# east, north, west, south.
mg2.set_closed_boundaries_at_grid_edges(True, True, True, False)
z2 = copy.copy(z1) # initialize the elevations with the steady state
# topography produced for question 1
z2 = mg2.add_field("topographic__elevation", z2, at="node")
# K_sp value for base landscape is 1e-5
K_sp2 = 1e-5 # units vary depending on m_sp and n_sp
m_sp2 = 0.5 # exponent on drainage area in stream power equation
n_sp2 = 1.0 # exponent on slope in stream power equation
frr2 = FlowAccumulator(mg2, flow_director="FlowDirectorD8") # initializing flow routing
spr2 = StreamPowerEroder(
mg2, K_sp=K_sp2, m_sp=m_sp2, n_sp=n_sp2, threshold_sp=0.0
) # initializing stream power incision
theta2 = m_sp2 / n_sp2
# initialize the component that will calculate channel steepness
sf2 = SteepnessFinder(mg2, reference_concavity=theta2, min_drainage_area=1000.0)
# initialize the component that will calculate the chi index
cf2 = ChiFinder(
mg2, min_drainage_area=1000.0, reference_concavity=theta2, use_true_dx=True
)
# Code Block 15
tmax = 1e5 # time for the model to run [yr] (Original value was 5E5 yr)
dt = 500 # time step [yr] (Original value was 500 yr)
total_time = 0 # amount of time the landscape has evolved [yr]
# total_time will increase as you keep running the code.
t = np.arange(0, tmax, dt) # each of the time steps that the code will run
# Code Block 16
# uplift_rate [m/yr] (value was 0.0001 m/yr for base landscape)
uplift_rate = np.ones(mg2.number_of_nodes) * 0.0001
## If you want to add a one-time event that uplifts only part of the
## landscape, uncomment the 3 lines below
# fault_location = 4000 # [m]
# uplift_amount = 10 # [m]
# z2[np.nonzero(mg2.node_y>fault_location)] += uplift_amount
## IMPORTANT! To use the below fault generator, comment the one-time
## uplift event above if it isn't already commented out.
## Code below creates a fault horizontally across the grid.
## Uplift rates are greater where y values > fault location.
## To use, uncomment the 5 code lines below and edit to your values
# fault_location = 4000 # [m]
# low_uplift_rate = 0.0001 # [m/yr]
# high_uplift_rate = 0.0004 # [m/yr]
# uplift_rate[np.nonzero(mg2.node_y<fault_location)] = low_uplift_rate
# uplift_rate[np.nonzero(mg2.node_y>fault_location)] = high_uplift_rate
## IMPORTANT! To use below rock uplift gradient, comment the two
## uplift options above if they aren't already commented out.
## If you want a linear gradient in uplift rate
## (increasing uplift into the range),
## uncomment the 4 code lines below and edit to your values.
# low_uplift_rate = 0.0001 # [m/yr]
# high_uplift_rate = 0.0004 # [m/yr]
## below is uplift gradient per node row index, NOT row value in meters
# uplift_rate_gradient = (high_uplift_rate - low_uplift_rate)/(number_of_rows-3)
# uplift_rate = low_uplift_rate + ((mg2.node_y / dxy)-1) * uplift_rate_gradient
# Code Block 17
for ti in t:
z2[mg2.core_nodes] += uplift_rate[mg2.core_nodes] * dt # uplift the landscape
frr2.run_one_step() # route flow
spr2.run_one_step(dt) # fluvial incision
total_time += dt # update time keeper
print(total_time)
# Code Block 18
# Plot topography
plt.figure(8)
imshow_grid(
mg2, "topographic__elevation", grid_units=("m", "m"), var_name="Elevation (m)"
)
plt.title(f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m")
max_elev = np.max(z2)
print("Maximum elevation is ", np.max(z2))
# Code Block 19
# Plot Channel Profiles and slope-area data along the channels
prf2 = ChannelProfiler(
mg2,
number_of_watersheds=3,
main_channel_only=True,
minimum_channel_threshold=dxy ** 2,
)
prf2.run_one_step()
# plot the elevation as a function of distance upstream
plt.figure(9)
title_text = f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy} m"
prf2.plot_profiles(
xlabel="distance upstream (m)", ylabel="elevation (m)", title=title_text
)
# plot the location of the channels in map view
plt.figure(10)
prf2.plot_profiles_in_map_view()
# slope-area data in just the profiled channels
plt.figure(11)
for i, outlet_id in enumerate(prf2.data_structure):
for j, segment_id in enumerate(prf2.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf2.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.loglog(
mg2.at_node["drainage_area"][profile_ids],
mg2.at_node["topographic__steepest_slope"][profile_ids],
".",
color=color,
label=label,
)
plt.legend(loc="lower left")
plt.xlabel("drainage area (m^2)")
plt.ylabel("channel slope [m/m]")
title_text = f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m"
plt.title(title_text)
# Code Block 20
# Chi Plots
# calculate the chi index
cf2.calculate_chi()
# chi-elevation plots in the profiled channels
plt.figure(12)
for i, outlet_id in enumerate(prf2.data_structure):
for j, segment_id in enumerate(prf2.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf2.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
color = segment["color"]
plt.plot(
mg2.at_node["channel__chi_index"][profile_ids],
mg2.at_node["topographic__elevation"][profile_ids],
color=color,
label=label,
)
plt.xlabel("chi index (m)")
plt.ylabel("elevation (m)")
plt.legend(loc="lower right")
title_text = (
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
plt.title(title_text)
# chi map
plt.figure(13)
imshow_grid(
mg2,
"channel__chi_index",
grid_units=("m", "m"),
var_name="Chi index (m)",
cmap="jet",
)
plt.title(
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
# Code Block 21
# Plot channel steepness along profiles and across the landscape
# calculate channel steepness
sf2.calculate_steepnesses()
# plots of steepness vs. distance upstream in the profiled channels
plt.figure(14)
for i, outlet_id in enumerate(prf2.data_structure):
for j, segment_id in enumerate(prf2.data_structure[outlet_id]):
if j == 0:
label = "channel {i}".format(i=i + 1)
else:
label = "_nolegend_"
segment = prf2.data_structure[outlet_id][segment_id]
profile_ids = segment["ids"]
distance_upstream = segment["distances"]
color = segment["color"]
plt.plot(
distance_upstream,
mg2.at_node["channel__steepness_index"][profile_ids],
"x",
color=color,
label=label,
)
plt.xlabel("distance upstream (m)")
plt.ylabel("steepness index")
plt.legend(loc="upper left")
plt.title(
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
# channel steepness map
plt.figure(15)
imshow_grid(
mg2,
"channel__steepness_index",
grid_units=("m", "m"),
var_name="Steepness index ",
cmap="jet",
)
plt.title(
f"$K_{{sp}}$={K_sp2}; $time$={total_time} yr; $dx$={dxy2} m; concavity={theta2}"
)
"""
Explanation: After running every code block once, has the landscape reached steady state? Answer: NO! How do you know? After you think about this, you are ready to complete this project.
Answer the following questions using the code above and below. All answers should be typed, and supporting figures (produced using the code) should be embedded in one document that you hand in. Code Blocks 8-12 and 18-21 produce different figures that you may find useful. You can use any or all of these different figures to help you with the questions below. (Download or screenshot the figures.)
Anything with a question mark should be answered in the document that you hand in. Make sure you write in full sentences and proofread the document that you hand in.
Steady state with low uplift rate. Using the parameters provided in the initial notebook, run the landscape to steady state. (Note that you can keep running the main evolution loop - Code Block 7 - and the different plotting blocks without running the code blocks above them. You may also want to change $tmax$ in Code Block 4.) How did you know that the landscape reached steady state? Note the approximate time that it took to reach steady state for your own reference. (This will be useful for later questions.) Include appropriate plots. (If you want to analyze these landscapes outside of Landlab or save for later, make sure you save the elevation data to a text file (Code Block 13).)
NOTE, For the rest of the questions you should use Code Blocks 14 - 21. These will allow you to use the steady-state landscape created for question 1 - referred to here as the 'base landscape' - as the initial condition. Start by editing what you need to in Code Blocks 14 - 16. Run these each once, sequentially. You can run Code Block 17, the time loop, as many times as you need to, along with Code Blocks 18-21, which produce plots.
Transient landscape responding to an increase in rock uplift. Use the base landscape and increase rock uplift uniformly by a factor of 4 to 0.0004 m/yr. Make sure you update the rock uplift rate (Code Block 16) and ensure that $tmax$ is 1e5 yrs and $dt$ is 500 yrs (Code Block 15). Run this until the maximum elevation in the grid is ~ 170 m and observe how the landscape gets to this elevation, i.e. plot intermediate steps. What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a landscape response to uniform increase in rock uplift rate? (You may need to answer this after completing all of the questions.)
Steady-state landscape with increased rock uplift. Now run the landscape from question 2 until it reaches steady state. (I.e. run the time loop, Code Block 17, a bunch of times. You can increase $tmax$ and $dt$ to make this run faster.) Provide a plot that illustrates that the landscape is in steady state. What aspects of the landscape have changed in comparison with the base landscape from question 1?
Increase erodibility. Start again from the base landscape, but this time increase $K_{sp}$ to 2E-5 (Code Block 14). Make sure rock uplift rate is set to the original value of 0.0001 m/yr (Code Block 16). Set $tmax$ to 1e5 yrs (Code Block 15). Run for 1e5 yrs and save the plots that you think are diagnostic. Run for another 1e5 yrs and save plots again. Now run for 5e5 yrs and save plots again. Quantitatively describe how the landscape evolves in response to the increase in erodibility and provide supporting plots. What could cause a uniform increase in erodibility?
Spatially variable uplift - discrete, massive earthquake. Start again from the base landscape, and make sure that $K_{sp}$ = 1E-5 (Code Block 14). Now add a seismic event to this steady state landscape - a fault that runs horizontally across the landscape at y = 4000 m, and instantaneously uplifts half the landscape by 10 meters (Code Block 16). In this case, we will keep background uplift uniform at 0.0001 m/yr. Set $tmax$ to 1e5 yrs and $dt$ to 500 yrs (Code Block 15) before evolving the landscape after the fault. Now run the time loop four times and look at the different plots after each loop. How does the landscape respond to this fault? What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a channel response to an earthquake? (You may need to answer this after completing all of the questions.)
Spatially Variable Rock Uplift - discrete fault with two different uplift rates. Start again from the base landscape, and make sure that $K_{sp}$ = 1E-5 (Code Block 14). Now we will add a fault (at y = 4000 m) to this landscape. In this case the uplift rate on the footwall is higher (0.0004 m/yr) than on the hanging wall (uplift rate = 0.0001 m/yr). (Edit Code Block 16.) Set $tmax$ to 1e5 yrs and $dt$ to 500 yrs (Code Block 15). Now run the time loop four separate times and look at the different plots after each loop. How does the landscape respond to this fault? What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a channel response to this type of gradient in rock uplift rates? (You may need to answer this after completing all of the questions.)
Spatially Variable Rock Uplift - gradient in uplift across the range. Start again from the base landscape, and make sure that $K_{sp}$ = 1E-5 (Code Block 14). Now we will add a linear gradient in uplift rate across the entire range (edit Code Block 16). The maximum uplift rate will be 0.0004 m/yr at the core of the range, and 0.0001 m/yr at the front of the range. Set $tmax$ to 1e5 yrs and $dt$ to 500 yrs (Code Block 15) before you start running the time loop with the rock uplift gradient. Now run the time loop four separate times and look at the different plots after each loop. How does the landscape respond to this gradient in uplift rate? What patterns do you see in the supporting plots that illustrate this type of transient? Which patterns, if any, are diagnostic of a channel response to this type of gradient in rock uplift rates? (You may need to answer this after completing all of the questions.)
Final Reflection. Was your initial insight into how parameters would affect the landscape correct? Discuss in 6 sentences or less.
End of explanation
"""
|
matthias-k/pysaliency | notebooks/Demo_Saliency_Maps.ipynb | mit | %matplotlib inline
# os, numpy and matplotlib are used further down in this notebook
import os
import numpy as np
import matplotlib.pyplot as plt
import pysaliency
import pysaliency.external_datasets
data_location = 'test_datasets'
mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=data_location)
index = 0
plt.imshow(mit_stimuli.stimuli[index])
f = mit_fixations[mit_fixations.n == index]
plt.scatter(f.x, f.y, color='r')
_ = plt.axis('off')
"""
Explanation: Pysaliency
Saliency Map Models
pysaliency comes with a variety of features to evaluate saliency map models. This notebook demonstrates these features.
First we load the MIT1003 dataset:
End of explanation
"""
cutoff = 10
short_stimuli = pysaliency.FileStimuli(filenames=mit_stimuli.filenames[:cutoff])
short_fixations = mit_fixations[mit_fixations.n < cutoff]
"""
Explanation: As some evaluation methods can take quite a long time to run, we prepare a smaller dataset consisting of only the first 10 stimuli:
End of explanation
"""
aim = pysaliency.AIM(location='test_models', cache_location=os.path.join('model_caches', 'AIM'))
smap = aim.saliency_map(mit_stimuli[10])
plt.imshow(-smap)
plt.axis('off');
"""
Explanation: We will use the saliency model AIM by Bruce and Tsotsos.
End of explanation
"""
aim.AUC(short_stimuli, short_fixations, nonfixations='uniform', verbose=True)
"""
Explanation: Evaluating Saliency Map Models
Pysaliency provides a variety of evaluation methods for saliency models, both saliency map based models and probabilistic models. Here we demonstrate the evaluation of saliency map models.
We can evaluate area under the curve with respect to a uniform nonfixation distribution:
End of explanation
"""
aim.AUC(short_stimuli, short_fixations, nonfixations='shuffled', verbose=True)
"""
Explanation: By setting nonfixations='shuffled' the fixations from all other stimuli will be used:
End of explanation
"""
aim.AUC(short_stimuli, short_fixations, nonfixations=short_fixations, verbose=True)
"""
Explanation: Also, you can hand over arbitrary Fixations instances as nonfixations:
End of explanation
"""
perf = aim.fixation_based_KL_divergence(short_stimuli, short_fixations, nonfixations='uniform')
print('Fixation based KL-divergence wrt. uniform nonfixations: {:.02f}'.format(perf))
perf = aim.fixation_based_KL_divergence(short_stimuli, short_fixations, nonfixations='shuffled')
print('Fixation based KL-divergence wrt. shuffled nonfixations: {:.02f}'.format(perf))
perf = aim.fixation_based_KL_divergence(short_stimuli, short_fixations, nonfixations=short_fixations)
print('Fixation based KL-divergence wrt. identical nonfixations: {:.02f}'.format(perf))
"""
Explanation: Another popular saliency metric is the fixation based KL-Divergence as introduced by Itti. Usually it is just called KL-Divergence which creates confusion as there is also another completely different saliency metric called KL-Divergence (here called image based KL-Divergence, see below).
Like AUC, fixation based KL-Divergence needs a nonfixation distribution to compare to. Again, you can use uniform, shuffled or any Fixations instance for this.
End of explanation
"""
gold_standard = pysaliency.FixationMap(short_stimuli, short_fixations, kernel_size=30)
perf = aim.image_based_kl_divergence(short_stimuli, gold_standard)
print("Image based KL-divergence: {} bit".format(perf / np.log(2)))
"""
Explanation: The image based KL-Divergence can be calculated, too. Unlike all previous metrics, it needs a gold standard to compare to. Here we use a fixation map that has been blurred with a Gaussian kernel of size 30px. Often a kernel size of one degree of visual angle is used.
End of explanation
"""
gold_standard.image_based_kl_divergence(short_stimuli, gold_standard, minimum_value=1e-20)
"""
Explanation: The gold standard is assumed to be the real distribution, hence it has an image based KL divergence of zero:
End of explanation
"""
class MySaliencyMapModel(pysaliency.SaliencyMapModel):
def _saliency_map(self, stimulus):
return np.ones((stimulus.shape[0], stimulus.shape[1]))
msmm = MySaliencyMapModel()
"""
Explanation: To implement your own saliency map model, inherit from pysaliency.SaliencyMapModel and implement the _saliency_map method.
End of explanation
"""
|
thalesians/tsa | src/jupyter/python/foundations/statistical-inference-and-estimation-theory.ipynb | apache-2.0 | # Copyright (c) Thalesians Ltd, 2018-2019. All rights reserved
# Copyright (c) Paul Alexander Bilokon, 2018-2019. All rights reserved
# Author: Paul Alexander Bilokon <paul@thalesians.com>
# Version: 1.1 (2019.01.24)
# Previous versions: 1.0 (2018.08.31)
# Email: paul@thalesians.com
# Platform: Tested on Windows 10 with Python 3.6
"""
Explanation:
End of explanation
"""
%matplotlib inline
"""
Explanation: Statistical inference and Estimation Theory
Motivation
Much of data science and machine learning (ML) is concerned with estimation: what is the optimal neural net for such and such a task? This question can be rephrased: what are the optimal weights and biases (subject to some sensible criteria) for a neural net for such and such a task?
At best we end up with an estimate of these quantities (weights, biases, linear regression coefficients, etc.). Then it is our task to work out how good they are.
Thus we have to rely on statistical inference and estimation theory, which dates back to the work of Gauss and Legendre.
In this Jupyter lab we shall demonstrate how stochastic simulation can be used to verify theoretical ideas from statistical inference and estimation theory.
Objectives
To show how Python's random can be used to generate simulated data.
To introduce the use of histograms to verify statistical properties of simulated data.
To introduce various estimators for means, variances, and standard deviation.
To perform numerical experiments to study the bias, variance, and consistency of these estimators.
The following is needed to enable inlining of Matplotlib graphs in this Jupyter notebook:
End of explanation
"""
import random as rnd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 16,10
import seaborn as sns
"""
Explanation: Now let us import some Python libraries and set some settings.
End of explanation
"""
population_size = 500000
a = 0; b = 100
population = [rnd.randint(a, b) for _ in range(population_size)]
"""
Explanation: Let us generate (simulate) a population of 500000. We shall use a single feature drawn from a discrete uniform distribution on the integers 0 to 100:
End of explanation
"""
plt.hist(population, bins=10);
"""
Explanation: Of course, there is the usual caveat: Garbage In, Garbage Out. Our inferences can only be as good as the (pseudo-) random variates that we generate. In this study we take these random variates as our gold standard, and we compare things against them. In practice, the random number generators are never perfect.
Most functions in Python's random module depend on random.random() under the hood, which uses a threadsafe C implementation of the Mersenne Twister algorithm as the core generator. It produces 53-bit precision floats and has a period of 2**19937-1. Mersenne Twister is a completely deterministic generator of pseudo-random numbers.
For a discussion of its quality, we refer the reader to https://stackoverflow.com/questions/12164280/is-pythons-random-randint-statistically-random
The statistical properties of the variates generated are good enough for our purposes. It is still a good idea to check that our population "looks like" it has a discrete uniform distribution, as it should. Matplotlib's histogram should be good enough for that:
End of explanation
"""
sns.distplot(population, bins=10);
"""
Explanation: Although seaborn's distplot looks more elegant:
End of explanation
"""
def mean(x): return sum(x) / len(x)
"""
Explanation: Let's introduce an implementation of the sample mean estimator:
End of explanation
"""
mean = lambda x: sum(x) / len(x)
"""
Explanation: Python lambda fans will probably prefer this variant, although there is little benefit from using a lambda here as it is not passed to a function as an argument:
End of explanation
"""
def var(x, m=None):
m = m or mean(x)
return sum([(xe - m)**2 for xe in x]) / len(x)
"""
Explanation: We'll also introduce an implementation of uncorrected sample variance:
End of explanation
"""
population_mean = mean(population)
population_var = var(population, population_mean)
print("population mean: {:.2f}, population variance: {:.2f}".format(population_mean, population_var))
"""
Explanation: What estimates do we get if we use the entire population as our sample?
End of explanation
"""
true_mean = .5 * (a + b); true_mean
true_var = ((b - a + 1.)**2 - 1) / 12.; true_var
"""
Explanation: The true values for the discrete uniform random variable are given by $$\mathbb{E}[X] = \frac{a+b}{2}$$ and $$\text{Var}[X] = \frac{(b - a + 1)^2 - 1}{12}.$$
End of explanation
"""
sample = rnd.sample(population, 10)
sample_mean = mean(sample)
sample_var = var(sample)
print("sample mean: {:.2f}, sample variance: {:.2f}".format(sample_mean, sample_var))
"""
Explanation: Our estimates above are pretty close. What if we now pick a smaller sample?
End of explanation
"""
sample_means, sample_vars = [], []
for _ in range(100000):
sample = rnd.sample(population, 10)
m = mean(sample)
sample_means.append(m)
sample_vars.append(var(sample, m))
"""
Explanation: Not quite as good! Let us stick with this small sample size, 10, and compute the sampling distribution of our estimators by computing them for many samples of the same size.
End of explanation
"""
plt.hist(sample_means, 50)
plt.axvline(true_mean, color='red');
"""
Explanation: Let's use a histogram to visualize the sampling distribution for the sample mean estimator:
End of explanation
"""
plt.hist(sample_vars, 50);
plt.axvline(true_var, color='red');
"""
Explanation: This looks pretty centred around the true value, so the estimator appears unbiased. What about the sampling distribution of the uncorrected sample variance estimator?
End of explanation
"""
sample_sizes, sample_vars = [], []
for ss in range(1, 50):
sample_sizes.append(ss)
sample_vars.append(mean([var(rnd.sample(population, ss)) for _ in range(1000)]))
plt.plot(sample_sizes, sample_vars, 'o')
plt.axhline(true_var, color='red')
plt.plot(sample_sizes, [(n-1)/n * true_var for n in sample_sizes])
plt.xlabel('sample size')
plt.ylabel('estimate of population variance');
"""
Explanation: This time the distribution is to the left of the true population variance, so it appears that we are systematically underestimating it: the uncorrected sample variance estimator is biased.
How does its value change as we increase the sample size?
End of explanation
"""
def sample_var(x, m=None):
m = m or mean(x)
n = len(x)
return n/(n-1) * var(x, m)
"""
Explanation: Looks like it gets closer to the true value, although a certain bias remains visible on the plot.
Will the (corrected) sample variance estimator do better? Let's implement it...
End of explanation
"""
sample_sizes, sample_vars, sample_vars1 = [], [], []
for ss in range(2, 50):
sample_sizes.append(ss)
sample_vars.append(mean([var(rnd.sample(population, ss)) for _ in range(1000)]))
sample_vars1.append(mean([sample_var(rnd.sample(population, ss)) for _ in range(1000)]))
plt.plot(sample_sizes, sample_vars, 'o', label='uncorrected')
plt.plot(sample_sizes, sample_vars1, 'o', label='corrected')
plt.axhline(true_var, color='red')
plt.plot(sample_sizes, [(n-1)/n * true_var for n in sample_sizes])
plt.xlabel('sample size')
plt.ylabel('estimate of population variance')
plt.legend();
"""
Explanation: ...and compare the uncorrected and corrected sample variance:
End of explanation
"""
sample_means, sample_vars = [], []
for _ in range(100000):
sample = rnd.sample(population, 10)
m = mean(sample)
sample_vars.append(sample_var(sample, m))
plt.hist(sample_vars, 50)
plt.axvline(true_var, color='red');
"""
Explanation: The (corrected) sample variance is clearly unbiased. Let us confirm this by visualising its sampling distribution with a histogram:
End of explanation
"""
def mean1(x): return (sum(x) + 10.) / len(x)
def mean2(x): return x[0]
"""
Explanation: Now let us demonstrate the concept of consistency. We introduce two more estimators of the mean. The first will be consistent but biased, the second unbiased but inconsistent:
End of explanation
"""
sample_sizes, sample_means, sample_means1, sample_means2 = [], [], [], []
for ss in range(1, 50):
sample_sizes.append(ss)
sample_means.append(mean([mean(rnd.sample(population, ss)) for _ in range(1000)]))
sample_means1.append(mean([mean1(rnd.sample(population, ss)) for _ in range(1000)]))
sample_means2.append(mean([mean2(rnd.sample(population, ss)) for _ in range(1000)]))
plt.plot(sample_sizes, sample_means, 'o-', label='sample mean')
plt.plot(sample_sizes, sample_means1, 'o-', label='consistent but biased')
plt.plot(sample_sizes, sample_means2, 'o-', label='unbiased but inconsistent')
plt.axhline(true_mean, color='red')
plt.xlabel('sample size')
plt.ylabel('estimate of population mean')
plt.legend();
"""
Explanation: Let's see how these estimators perform:
End of explanation
"""
import math
sample_sizes, sample_sds, sample_sds1 = [], [], []
for ss in range(2, 50):
sample_sizes.append(ss)
sample_sds.append(mean([math.sqrt(var(rnd.sample(population, ss))) for _ in range(1000)]))
sample_sds1.append(mean([math.sqrt(sample_var(rnd.sample(population, ss))) for _ in range(1000)]))
plt.plot(sample_sizes, sample_sds, 'o', label='uncorrected')
plt.plot(sample_sizes, sample_sds1, 'o', label='corrected')
plt.axhline(math.sqrt(true_var), color='red')
plt.xlabel('sample size')
plt.ylabel('estimate of population standard deviation')
plt.legend();
"""
Explanation: Now let's see how well the square roots of the uncorrected and corrected sample variance estimate the standard deviation:
End of explanation
"""
|
lknelson/DH-Institute-2017 | 01-Intro to NLP/Intro to NLP.ipynb | bsd-2-clause | print("For me it has to do with the work that gets done at the crossroads of digital media and traditional humanistic study. And that happens in two different ways. On the one hand, it's bringing the tools and techniques of digital media to bear on traditional humanistic questions; on the other, it's also bringing humanistic modes of inquiry to bear on digital media.")
# Assign the quote to a variable, so we can refer back to it later
# We get to make up the name of our variable, so let's give it a descriptive label: "sentence"
sentence = "For me it has to do with the work that gets done at the crossroads of digital media and traditional humanistic study. And that happens in two different ways. On the one hand, it's bringing the tools and techniques of digital media to bear on traditional humanistic questions; on the other, it's also bringing humanistic modes of inquiry to bear on digital media."
# Oh, also: anything on a line starting with a hashtag is called a comment,
# and is meant to clarify code for human readers. The computer ignores these lines.
# Print the contents of the variable 'sentence'
print(sentence)
"""
Explanation: Introduction to Natural Language Processing (NLP)
Generally speaking, <i>Computational Text Analysis</i> is a set of interpretive methods which seek to understand patterns in human discourse, in part through statistics. More familiar methods, such as close reading, are exceptionally well-suited to the analysis of individual texts; however, our research questions typically compel us to look for relationships across texts, sometimes counting in the thousands or even millions. We have to zoom out, in order to perform so-called <i>distant reading</i>. Fortunately for us, computers are well-suited to identify the kinds of textual relationships that exist at scale.
We will spend the week exploring research questions that computational methods can help to answer and thinking about how these complement -- rather than displace -- other interpretive methods. Before moving to that conceptual level, however, we will familiarize ourselves with the basic tools of the trade.
<i>Natural Language Processing</i> is an umbrella term for the methods by which a computer handles human language text. This includes transforming the text into a numerical form that the computer manipulates natively, as well as the measurements that researchers often perform. In the parlance, <i>natural language</i> refers to a language spoken by humans, as opposed to a <i>formal language</i>, such as Python, which comprises a set of logical operations.
The goal of this lesson is to jump right in to text analysis and natural language processing. Rather than starting with the nitty gritty of programming in Python, this lesson will demonstrate some neat things you can do with a minimal amount of coding. Today, we aim to build intuition about how computers read human text and learn some of the basic operations we'll perform with them.
Lesson Outline
Jargon
Text in Python
Tokenization & Term Frequency
Pre-Processing:
Changing words to lowercase
Removing stop words
Removing punctuation
Part-of-Speech Tagging
Tagging tokens
Counting tagged tokens
Demonstration: Guess the Novel!
Concordance
0. Key Jargon
General
programming (or coding)
A program is a sequence of instructions given to the computer, in order to perform a specific task. Those instructions are written in a specific programming language, in our case, Python. Writing these instructions can be an art as much as a science.
Python
A general-use programming language that is popular for NLP and statistics.
script
A block of executable code.
Jupyter Notebook
Jupyter is a popular interface in which Python scripts can be written and executed. Stand-alone scripts are saved in Notebooks. The script can be sub-divided into units called <i>cells</i> and executed individually. Cells can also contain discursive text and html formatting (such as in this cell!)
package (or module)
Python offers a basic set of functions that can be used off-the-shelf. However, we often wish to go beyond the basics. To that end, <i>packages</i> are collections of python files that contain pre-made functions. These functions are made available to our program when we <i>import</i> the package that contains them.
Anaconda
Anaconda is a <i>platform</i> for programming in Python. A platform constitutes a closed environment on your computer that has been standardized for functionality. For example, Anaconda contains common packages and programming interfaces for Python, and its developers ensure compatibility among the moving parts.
When Programming
variable
A variable is a generic container that stores a value, such as a number or series of letters. This is not like a variable from high-school algebra, which had a single "correct" value that must be solved. Rather, the user <i>assigns</i> values to the variable in order to perform operations on it later.
string
A type of object consisting of a single sequence of alpha-numeric characters. In Python, a string is indicated by quotation marks around the sequence.
list
A type of object that consists of a sequence of elements.
Natural Language Processing
pre-processing
Transforming a human language text into computer-manipulable format. A typical pre-processing workflow includes <i>stop-word</i> removal, setting text in lower case, and <i>term frequency</i> counting.
token
An individual word unit within a sentence.
stop words
The function words in a natural langauge, such as <i>the</i>, <i>of</i>, <i>it</i>, etc. These are typically the most common words.
term frequency
The number of times a term appears in a given text. This is either reported as a raw tally or it is <i>normalized</i> by dividing by the total number of words in a text.
POS tagging
One common task in NLP is the determination of a word's part-of-speech (POS). The label that describes a word's POS is called its <i>tag</i>. Specialized functions that make these determinations are called <i>POS Taggers</i>.
concordance
Index of instances of a given word (or other linguistic feature) in a text. Typically, each instance is presented within a contextual window for human readability.
NLTK (Natural Language Tool Kit)
A common Python package that contains many NLP-related functions
Further Resources:
Check out the full range of techniques included in Python's nltk package here: http://www.nltk.org/book/
1. Text in Python
First, a quote about what digital humanities means, from digital humanist Kathleen Fitzpatrick. Source: "On Scholarly Communication and the Digital Humanities: An Interview with Kathleen Fitzpatrick", In the Library with the Lead Pipe
End of explanation
"""
# Import the NLTK (Natural Language Tool Kit) package
import nltk
# Download NLTK language models
nltk_data = ["punkt", "words", "stopwords", "averaged_perceptron_tagger", "maxent_ne_chunker", 'wordnet']
nltk.download(nltk_data)
# Tokenize our sentence!
nltk.word_tokenize(sentence)
# Create new variable that contains our tokenized sentence
sentence_tokens = nltk.word_tokenize(sentence)
# Inspect our new variable
# Note the square braces at the beginning and end that indicate we are looking at a list-type object
print(sentence_tokens)
"""
Explanation: 2. Tokenizing Text and Counting Words
The above output is how a human would read that sentence. Next we look the main way in which a computer "reads", or parses, that sentence.
The first step is typically to <i>tokenize</i> it, or to change it into a series of <i>tokens</i>. Each token roughly corresponds to either a word or punctuation mark. These smaller units are more straight-forward for the computer to handle for tasks like counting.
End of explanation
"""
# How many tokens are in our list?
len(sentence_tokens)
# How often does each token appear in our list?
import collections
collections.Counter(sentence_tokens)
# Assign those token counts to a variable
token_frequency = collections.Counter(sentence_tokens)
# Get an ordered list of the most frequent tokens
token_frequency.most_common(10)
"""
Explanation: Note on Tokenization
While seemingly simple, tokenization is a non-trivial task.
For example, notice how the tokenizer has handled contractions: a contracted word is divided into two separate tokens! What do you think is the motivation for this? How else might you tokenize them?
Also notice each token is either a word or punctuation mark. In practice, it is sometimes useful to remove punctuation marks and at other times to include them, depending on the situation.
In the coming days, we will see other tokenizers and have opportunities to explore their reasoning. For now, we will look at a few examples of NLP tasks that tokenization enables.
End of explanation
"""
# Let's revisit our original sentence
sentence
# And now transform it to lower case, all at once
sentence.lower()
# Okay, let's set our list of tokens to lower case, one at a time
# The syntax of the line below is tricky. Don't worry about it for now.
# We'll spend plenty of time on it tomorrow!
lower_case_tokens = [ word.lower() for word in sentence_tokens ]
# Inspect
print(lower_case_tokens)
"""
Explanation: Note on Term Frequency
Some of the most frequent words appear to summarize the sentence: in particular the words "humanistic", "digital", and "media". However, most of these terms seem to add noise to the summary: "the", "it", "to", ".", etc.
There are many strategies for identifying the most important words in a text, and we will cover the most popular ones in the next week. Today, we will look at two of them. In the first, we will simply remove the noisy tokens. In the second, we will identify important words using their parts of speech.
3. Pre-Processing: Lower Case, Remove Stop Words and Punctuation
Typically, a text goes through a number of pre-processing steps before beginning to the actual analysis. We have already seen the tokenization step. Typically, pre-processing includes transforming tokens to lower case and removing stop words and punctuation marks.
Again, pre-processing is a non-trivial process that can have large impacts on the analysis that follows. For instance, what will be the most common token in our example sentence, once we set all tokens to lower case?
Lower Case
End of explanation
"""
# Import the stopwords list
from nltk.corpus import stopwords
# Take a look at what stop words are included
print(stopwords.words('english'))
# Try another language
print(stopwords.words('spanish'))
# Create a new variable that contains the sentence tokens but NOT the stopwords
tokens_nostops = [ word for word in lower_case_tokens if word not in stopwords.words('english') ]
# Inspect
print(tokens_nostops)
"""
Explanation: Stop Words
End of explanation
"""
# Import a list of punctuation marks
import string
# Inspect
string.punctuation
# Remove punctuation marks from token list
tokens_clean = [word for word in tokens_nostops if word not in string.punctuation]
# See what's left
print(tokens_clean)
"""
Explanation: Punctuation
End of explanation
"""
# Count the new token list
word_frequency_clean = collections.Counter(tokens_clean)
# Most common words
word_frequency_clean.most_common(10)
"""
Explanation: Re-count the Most Frequent Words
End of explanation
"""
# Let's revisit our original list of tokens
print(sentence_tokens)
# Use the NLTK POS tagger
nltk.pos_tag(sentence_tokens)
# Assign POS-tagged list to a variable
tagged_tokens = nltk.pos_tag(sentence_tokens)
"""
Explanation: Better! The ten most frequent words now give us a pretty good sense of the substance of this sentence. But we still have problems. For example, the token "'s" sneaked in there. One solution is to keep adding stop words to our list, but this could go on forever and is not a good solution when processing lots of text.
There's another way of identifying content words, and it involves identifying the part of speech of each word.
4. Part-of-Speech Tagging
You may have noticed that stop words are typically short function words, like conjunctions and prepositions. Intuitively, if we could identify the part of speech of a word, we would have another way of identifying which words contribute to the text's subject matter. NLTK can do that too!
NLTK has a <i>POS Tagger</i>, which identifies and labels the part-of-speech (POS) for every token in a text. The particular labels that NLTK uses come from the Penn Treebank corpus, a major resource from corpus linguistics.
You can find a list of all Penn POS tags here: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
Note that, from this point on, the code is going to get a little more complex. Don't worry about the particularities of each line. For now, we will focus on the NLP tasks themselves and the textual patterns they identify.
End of explanation
"""
# We'll tread lightly here, and just say that we're counting POS tags
tag_frequency = collections.Counter( [ tag for (word, tag) in tagged_tokens ])
# POS Tags sorted by frequency
tag_frequency.most_common()
"""
Explanation: Most Frequent POS Tags
End of explanation
"""
# Let's filter our list, so it only keeps adjectives
adjectives = [word for word,pos in tagged_tokens if pos == 'JJ' or pos=='JJR' or pos=='JJS']
# Inspect
print( adjectives )
# Tally the frequency of each adjective
adj_frequency = collections.Counter(adjectives)
# Most frequent adjectives
adj_frequency.most_common(5)
# Let's do the same for nouns.
nouns = [word for word,pos in tagged_tokens if pos=='NN' or pos=='NNS']
# Inspect
print(nouns)
# Tally the frequency of the nouns
noun_frequency = collections.Counter(nouns)
# Most Frequent Nouns
print(noun_frequency.most_common(5))
"""
Explanation: Now it's getting interesting
The "IN" tag refers to prepositions, so it's no surprise that it should be the most common. However, we can see at a glance now that the sentence contains a lot of adjectives, "JJ". This feels like it tells us something about the rhetorical style or structure of the sentence: certain qualifiers seem to be important to the meaning of the sentence.
Let's dig in to see what those adjectives are.
End of explanation
"""
# And we'll do the verbs in one fell swoop
verbs = [word for word,pos in tagged_tokens if pos == 'VB' or pos=='VBD' or pos=='VBG' or pos=='VBN' or pos=='VBP' or pos=='VBZ']
verb_frequency = collections.Counter(verbs)
print(verb_frequency.most_common(5))
# If we bring all of this together we get a pretty good summary of the sentence
print(adj_frequency.most_common(3))
print(noun_frequency.most_common(3))
print(verb_frequency.most_common(3))
"""
Explanation: And now verbs.
End of explanation
"""
# Read the two text files from your hard drive
# Assign first mystery text to variable 'text1' and second to 'text2'
text1 = open('text1.txt').read()
text2 = open('text2.txt').read()
# Tokenize both texts
text1_tokens = nltk.word_tokenize(text1)
text2_tokens = nltk.word_tokenize(text2)
# Set to lower case
text1_tokens_lc = [word.lower() for word in text1_tokens]
text2_tokens_lc = [word.lower() for word in text2_tokens]
# Remove stopwords
text1_tokens_nostops = [word for word in text1_tokens_lc if word not in stopwords.words('english')]
text2_tokens_nostops = [word for word in text2_tokens_lc if word not in stopwords.words('english')]
# Remove punctuation using the list of punctuation from the string package
text1_tokens_clean = [word for word in text1_tokens_nostops if word not in string.punctuation]
text2_tokens_clean = [word for word in text2_tokens_nostops if word not in string.punctuation]
# Frequency distribution
text1_word_frequency = collections.Counter(text1_tokens_clean)
text2_word_frequency = collections.Counter(text2_tokens_clean)
# Guess the novel!
text1_word_frequency.most_common(20)
# Guess the novel!
text2_word_frequency.most_common(20)
"""
Explanation: 5. Demonstration: Guess the Novel
To illustrate this process on a slightly larger scale, we will do exactly what we did above, but on two unknown novels. Your challenge: guess the novels from the most frequent words.
We will do this in one chunk of code, so another challenge for you during breaks or the next few weeks is to see how much of the following code you can follow (or, in computer science terms, how much of the code you can parse). If the answer is none, not to worry! Tomorrow we will take a step back and work on the nitty gritty of programming.
End of explanation
"""
# Transform our raw token lists in NLTK Text-objects
text1_nltk = nltk.Text(text1_tokens)
text2_nltk = nltk.Text(text2_tokens)
# Really they're no different from the raw text, but they have additional useful functions
print(text1_nltk)
print(text2_nltk)
# Like a concordancer!
text1_nltk.concordance("monstrous")
text2_nltk.concordance("monstrous")
"""
Explanation: Computational Text Analysis is not simply the processing of texts through computers, but involves reflection on the part of human interpreters. How were you able to tell what each novel was? Do you notice any differences between each novel's list of frequent words?
The patterns that we notice in our computational model often enrich and extend our research questions -- sometimes in surprising ways! What next steps would you take to investigate these novels?
6. Concordances and Similar Words using NLTK
Tallying word frequencies gives us a bird's-eye-view of our text but we lose one important aspect: context. As the dictum goes: "You shall know a word by the company it keeps."
Concordances show us every occurrence of a given word in a text, inside a window of context words that appear before and after it. This is helpful for close reading to get at a word's meaning by seeing how it is used. We can also use the logic of shared context in order to identify which words have similar meanings. To illustrate this, we can compare the way the word "monstrous" is used in our two novels.
Concordance
End of explanation
"""
# Get words that appear in a similar context to "monstrous"
text1_nltk.similar("monstrous")
text2_nltk.similar("monstrous")
"""
Explanation: Contextual Similarity
End of explanation
"""
|
Trevortds/Etymachine | Prototyping semi-supervised.ipynb | gpl-2.0 | import tsvopener
import pandas as pd
import numpy as np
from nltk import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import csr_matrix, vstack
from sklearn.semi_supervised import LabelPropagation, LabelSpreading
regex_categorized = tsvopener.open_tsv("categorized.tsv")
human_categorized = tsvopener.open_tsv("human_categorized.tsv")
# Accuracy Check
#
# match = 0
# no_match = 0
# for key in human_categorized:
# if human_categorized[key] == regex_categorized[key]:
# match += 1
# else:
# no_match += 1
#
# print("accuracy of regex data in {} human-categorized words".format(
# len(human_categorized)))
# print(match/(match+no_match))
#
# accuracy of regex data in 350 human-categorized words
# 0.7857142857142857
"""
Explanation: Setup
End of explanation
"""
# set up targets for the human-categorized data
targets = pd.DataFrame.from_dict(human_categorized, 'index')
targets[0] = pd.Categorical(targets[0])
targets['code'] = targets[0].cat.codes
# form: | word (label) | language | code (1-5)
tmp_dict = {}
for key in human_categorized:
tmp_dict[key] = tsvopener.etymdict[key]
supervised_sents = pd.DataFrame.from_dict(tmp_dict, 'index')
all_sents = pd.DataFrame.from_dict(tsvopener.etymdict, 'index')
vectorizer = CountVectorizer(stop_words='english', max_features=10000)
all_sents.index.get_loc("anyways (adv.)")
# vectorize the unsupervised vectors.
vectors = vectorizer.fit_transform(all_sents.values[:,0])
print(vectors.shape)
# supervised_vectors = vectorizer.fit_transform(supervised_data.values[:,0])
# add labels
# initialize to -1
all_sents['code'] = -1
supervised_vectors = csr_matrix((len(human_categorized),
vectors.shape[1]),
dtype=vectors.dtype)
j = 0
for key in supervised_sents.index:
all_sents.loc[key]['code'] = targets.loc[key]['code']
i = all_sents.index.get_loc(key)
supervised_vectors[j] = vectors[i]
j += 1
# supervised_vectors = csr_matrix((len(human_categorized),
# unsupervised_vectors.shape[1]),
# dtype=unsupervised_vectors.dtype)
# j = 0
# for key in supervised_data.index:
# i = unsupervised_data.index.get_loc(key)
# supervised_vectors[j] = unsupervised_vectors[i]
# j += 1
all_sents.loc['dicky (n.)']
"""
Explanation: Prepare Vectors
End of explanation
"""
num_points = 1000
num_test = 50
x = vstack([vectors[:num_points], supervised_vectors]).toarray()
t = all_sents['code'][:num_points].append(targets['code'])
x_test = x[-num_test:]
t_test = t[-num_test:]
x = x[:-num_test]
t = t[:-num_test]
label_prop_model = LabelSpreading(kernel='knn')
from time import time
print("fitting model")
timer_start = time()
label_prop_model.fit(x, t)
print("runtime: %0.3fs" % (time()-timer_start))
print("done!")
# unsupervised_data['code'].iloc[:1000]
import pickle
# with open("classifiers/labelspreading_knn_all_but_100.pkl", 'bw') as writefile:
# pickle.dump(label_prop_model, writefile)
import smtplib
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login("trevortds3@gmail.com", "Picardy3")
msg = "Job's done!"
server.sendmail("trevortds3@gmail.com", "trevortds@gmail.com", msg)
server.quit()
targets
"""
Explanation: Use Scikit's semisupervised learning
scikit-learn provides two semi-supervised methods: Label Propagation and Label Spreading. The difference between them lies in how they regularize.
End of explanation
"""
from sklearn.metrics import precision_score, accuracy_score, f1_score, recall_score
t_pred = label_prop_model.predict(x_test)
print("Metrics based on 50 hold-out points")
print("Macro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='macro'))
print("recall: %f" % recall_score(t_test, t_pred, average='macro'))
print("f1: %f" % f1_score(t_test, t_pred, average='macro'))
print("\n\nMicro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='micro'))
print("recall: %f" % recall_score(t_test, t_pred, average='micro'))
print("f1: %f" % f1_score(t_test, t_pred, average='micro'))
from sklearn import metrics
import matplotlib.pyplot as pl
labels = ["English", "French", "Greek", "Latin","Norse", "Other"]
labels_digits = [0, 1, 2, 3, 4, 5]
cm = metrics.confusion_matrix(t_test, t_pred, labels_digits)
fig = pl.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
pl.title("Label Spreading with KNN kernel (k=7)")
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
pl.xlabel('Predicted')
pl.ylabel('True')
pl.show()
"""
Explanation: Measuring effectiveness.
End of explanation
"""
supervised_vectors
import matplotlib.pyplot as pl
u, s, v = np.linalg.svd(supervised_vectors.toarray())
pca = np.dot(u[:,0:2], np.diag(s[0:2]))
english = np.empty((0,2))
french = np.empty((0,2))
greek = np.empty((0,2))
latin = np.empty((0,2))
norse = np.empty((0,2))
other = np.empty((0,2))
for i in range(pca.shape[0]):
if targets[0].iloc[i] == "English":
english = np.vstack((english, pca[i]))
elif targets[0].iloc[i] == "French":
french = np.vstack((french, pca[i]))
elif targets[0].iloc[i] == "Greek":
greek = np.vstack((greek, pca[i]))
elif targets[0].iloc[i] == "Latin":
latin = np.vstack((latin, pca[i]))
elif targets[0].iloc[i] == "Norse":
norse = np.vstack((norse, pca[i]))
elif targets[0].iloc[i] == "Other":
other = np.vstack((other, pca[i]))
pl.plot( english[:,0], english[:,1], "ro",
french[:,0], french[:,1], "bs",
greek[:,0], greek[:,1], "g+",
latin[:,0], latin[:,1], "c^",
norse[:,0], norse[:,1], "mD",
other[:,0], other[:,1], "kx")
pl.axis([-5,0,-2, 5])
pl.show()
print (s)
"""
Explanation: PCA: Let's see what it looks like
Performing PCA
End of explanation
"""
|
DAInamite/programming-humanoid-robot-in-python | joint_control/scikit-learn-intro.ipynb | gpl-2.0 | from sklearn import datasets
digits = datasets.load_digits()
%pylab inline
digits.data
digits.data.shape # n_samples, n_features
"""
Explanation: Introduction to scikit-learn
Classification of Handwritten Digits: the task is to predict, given an image, which digit it represents. We are given samples of each of the 10 possible classes (the digits zero through nine) on which we fit an estimator to be able to predict the classes to which unseen samples belong.
1. Data collection
2. Data preprocessing
A dataset is a dictionary-like object that holds all the data and some metadata about the data.
End of explanation
"""
digits.target
digits.target.shape
# show images
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary)
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
"""
Explanation: digits.images.shape
End of explanation
"""
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100.)
"""
Explanation: 3. Build a model on training data
In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T).
An example of an estimator is the class sklearn.svm.SVC that implements support vector classification.
End of explanation
"""
clf.fit(digits.data[:-500], digits.target[:-500])
"""
Explanation: learning
End of explanation
"""
clf.predict(digits.data[-1:]), digits.target[-1:]
"""
Explanation: predicting
End of explanation
"""
(clf.predict(digits.data[:-500]) == digits.target[:-500]).sum() / float(len(digits.target[:-500]))
"""
Explanation: 4. Evaluate the model on the test data
learning dataset
End of explanation
"""
(clf.predict(digits.data[-500:]) == digits.target[-500:]).sum() / 500.0
"""
Explanation: test dataset
End of explanation
"""
from sklearn import metrics
def evaluate(expected, predicted):
print("Classification report:\n%s\n" % metrics.classification_report(expected, predicted))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
predicted = clf.predict(digits.data[-500:])
evaluate(digits.target[-500:], predicted)
"""
Explanation: evaluation metrics
End of explanation
"""
import pickle
s = pickle.dumps(clf)
clf2 = pickle.loads(s)
clf2.predict(digits.data[-1:]), digits.target[-1:]
"""
Explanation: 5. Deploy to the real system
End of explanation
"""
|
GoogleCloudPlatform/healthcare | imaging/ml/ml_codelab/breast_density_auto_ml.ipynb | apache-2.0 | %%bash
pip3 install git+https://github.com/GoogleCloudPlatform/healthcare.git#subdirectory=imaging/ml/toolkit
pip3 install dicomweb-client
pip3 install pydicom
"""
Explanation: Copyright 2018 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
This tutorial is for educational purposes only and is not intended for use in clinical diagnosis or clinical decision-making or for any other clinical use.
Training/Inference on Breast Density Classification Model on AutoML Vision
The goal of this tutorial is to train, deploy and run inference on a breast density classification model. Breast density is thought to be a risk factor for breast cancer. This will emphasize using the Cloud Healthcare API in order to store, retrieve and transcode medical images (in DICOM format) in a managed and scalable way. This tutorial will focus on using Cloud AutoML Vision to scalably train and serve the model.
Note: This is the AutoML version of the Cloud ML Engine Codelab found here.
Requirements
A Google Cloud project.
Project has Cloud Healthcare API enabled.
Project has Cloud AutoML API enabled.
Project has Cloud Build API enabled.
Project has Kubernetes engine API enabled.
Project has Cloud Resource Manager API enabled.
Notebook dependencies
We will need to install the hcls_imaging_ml_toolkit package found here. This toolkit helps make working with DICOM objects and the Cloud Healthcare API easier.
In addition, we will install dicomweb-client to help us interact with the DICOMweb API and pydicom, which is used to help us construct DICOM objects.
End of explanation
"""
project_id = "MY_PROJECT" # @param
location = "us-central1"
dataset_id = "MY_DATASET" # @param
dicom_store_id = "MY_DICOM_STORE" # @param
# Input data used by AutoML must be in a bucket with the following format.
automl_bucket_name = "gs://" + project_id + "-vcm"
%%bash -s {project_id} {location} {automl_bucket_name}
# Create bucket.
gsutil -q mb -c regional -l $2 $3
# Allow Cloud Healthcare API to write to bucket.
PROJECT_NUMBER=`gcloud projects describe $1 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
COMPUTE_ENGINE_SERVICE_ACCOUNT="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
gsutil -q iam ch serviceAccount:${SERVICE_ACCOUNT}:objectAdmin $3
gsutil -q iam ch serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT}:objectAdmin $3
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${SERVICE_ACCOUNT} --role=roles/pubsub.publisher
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/pubsub.admin
# Allow compute service account to create datasets and dicomStores.
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.dicomStoreAdmin
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.datasetAdmin
import json
import os
import google.auth
from google.auth.transport.requests import AuthorizedSession
from hcls_imaging_ml_toolkit import dicom_path
credentials, project = google.auth.default()
authed_session = AuthorizedSession(credentials)
# Path to Cloud Healthcare API.
HEALTHCARE_API_URL = 'https://healthcare.googleapis.com/v1'
# Create Cloud Healthcare API dataset.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets?dataset_id=' + dataset_id)
headers = {'Content-Type': 'application/json'}
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Create Cloud Healthcare API DICOM store.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets', dataset_id, 'dicomStores?dicom_store_id=' + dicom_store_id)
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating DICOM store, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
dicom_store_path = dicom_path.Path(project_id, location, dataset_id, dicom_store_id)
"""
Explanation: Input Dataset
The dataset that will be used for training is the TCIA CBIS-DDSM dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a BI-RADS breast density score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (scattered density) and "3" (heterogeneously dense). These are the two most common and variably assigned scores. In the literature, this is said to be particularly difficult for radiologists to consistently distinguish.
End of explanation
"""
# Store DICOM instances in Cloud Healthcare API.
path = 'https://healthcare.googleapis.com/v1/{}:import'.format(dicom_store_path)
headers = {'Content-Type': 'application/json'}
body = {
'gcsSource': {
'uri': 'gs://gcs-public-data--healthcare-tcia-cbis-ddsm/dicom/**'
}
}
resp = authed_session.post(path, headers=headers, json=body)
assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
response = json.loads(resp.text)
operation_name = response['name']
import time
def wait_for_operation_completion(path, timeout, sleep_time=30):
success = False
while time.time() < timeout:
print('Waiting for operation completion...')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error polling for Operation results, code: {0}, response: {1}'.format(resp.status_code, resp.text)
response = json.loads(resp.text)
if 'done' in response:
if response['done'] == True and 'error' not in response:
success = True;
break
time.sleep(sleep_time)
print('Full response:\n{0}'.format(resp.text))
assert success, "operation did not complete successfully in time limit"
print('Success!')
return response
path = os.path.join(HEALTHCARE_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)
"""
Explanation: Next, we are going to transfer the DICOM instances to the Cloud Healthcare API.
Note: We are transfering >100GB of data so this will take some time to complete
End of explanation
"""
num_of_studies_to_print = 2 # @param
path = os.path.join(HEALTHCARE_API_URL, dicom_store_path.dicomweb_path_str, 'studies')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error querying Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
response = json.loads(resp.text)
print(json.dumps(response[:num_of_studies_to_print], indent=2))
"""
Explanation: Explore the Cloud Healthcare DICOM dataset (optional)
This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply list the studies that we have loaded into the Cloud Healthcare API. You can modify the num_of_studies_to_print parameter to print as many studies as desired.
End of explanation
"""
# Folder to store input images for AutoML Vision.
jpeg_folder = automl_bucket_name + "/images/"
"""
Explanation: Convert DICOM to JPEG
The ML model that we will build requires that the dataset be in JPEG. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG.
First we will create a Google Cloud Storage bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs.
End of explanation
"""
%%bash -s {jpeg_folder} {project_id} {location} {dataset_id} {dicom_store_id}
gcloud beta healthcare --project $2 dicom-stores export gcs $5 --location=$3 --dataset=$4 --mime-type="image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50" --gcs-uri-prefix=$1
"""
Explanation: Next we will convert the DICOMs to JPEGs using the ExportDicomData.
End of explanation
"""
# tensorflow==1.15.0 to have same versions in all environments - dataflow, automl, ai-platform
!pip install tensorflow==1.15.0 --ignore-installed
# CSV to hold (IMAGE_PATH, LABEL) list.
input_data_csv = automl_bucket_name + "/input.csv"
import csv
import os
import re
from tensorflow.python.lib.io import file_io
import scripts.tcia_utils as tcia_utils
# Get map of study_uid -> file paths.
path_list = file_io.get_matching_files(os.path.join(jpeg_folder, '*/*/*'))
study_uid_to_file_paths = {}
pattern = r'^{0}(?P<study_uid>[^/]+)/(?P<series_uid>[^/]+)/(?P<instance_uid>.*)'.format(jpeg_folder)
for path in path_list:
match = re.search(pattern, path)
study_uid_to_file_paths[match.group('study_uid')] = path
# Get map of study_uid -> labels.
study_uid_to_labels = tcia_utils.GetStudyUIDToLabelMap()
# Join the two maps, output results to CSV in Google Cloud Storage.
with file_io.FileIO(input_data_csv, 'w') as f:
writer = csv.writer(f, delimiter=',')
for study_uid, label in study_uid_to_labels.items():
if study_uid in study_uid_to_file_paths:
writer.writerow([study_uid_to_file_paths[study_uid], label])
"""
Explanation: Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket.
Next, we will join the training data stored in Google Cloud Storage with the labels in the TCIA website. The output of this step is a CSV file that is input to AutoML. This CSV contains a list of pairs of (IMAGE_PATH, LABEL).
End of explanation
"""
automl_dataset_display_name = "MY_AUTOML_DATASET" # @param
import json
import os
# Path to AutoML API.
AUTOML_API_URL = 'https://automl.googleapis.com/v1beta1'
# Path to request creation of AutoML dataset.
path = os.path.join(AUTOML_API_URL, 'projects', project_id, 'locations', location, 'datasets')
# Headers (request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
config = {'display_name': automl_dataset_display_name, 'image_classification_dataset_metadata': {'classification_type': 'MULTICLASS'}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'creating AutoML dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record the AutoML dataset name.
response = json.loads(resp.text)
automl_dataset_name = response['name']
"""
Explanation: Training
This section will focus on using AutoML through its API. AutoML can also be used through the user interface found here. The below steps in this section can all be done through the web UI.
We will use AutoML Vision to train the classification model. AutoML provides a fully managed solution for training the model. All we will do is input the list of input images and labels. The trained model in AutoML will be able to classify the mammography images as either "2" (scattered density) or "3" (heterogeneously dense).
As a first step, we will create an AutoML dataset.
End of explanation
"""
# Path to request import into AutoML dataset.
path = os.path.join(AUTOML_API_URL, automl_dataset_name + ':importData')
# Body (encoded in JSON format).
config = {'input_config': {'gcs_source': {'input_uris': [input_data_csv]}}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error importing AutoML dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record operation_name so we can poll for it later.
response = json.loads(resp.text)
operation_name = response['name']
"""
Explanation: Next, we will import the CSV that contains the (IMAGE_PATH, LABEL) list into AutoML. Please ignore errors regarding an existing ground truth.
End of explanation
"""
path = os.path.join(AUTOML_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)
"""
Explanation: The output of the previous step is an operation whose status we will need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete, so we will wait until completion.
End of explanation
"""
# Name of the model.
model_display_name = "MY_MODEL_NAME" # @param
# Training budget (1 hr).
training_budget = 1 # @param
# Path to request import into AutoML dataset.
path = os.path.join(AUTOML_API_URL, 'projects', project_id, 'locations', location, 'models')
# Headers (request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
automl_dataset_id = automl_dataset_name.split('/')[-1]
config = {'display_name': model_display_name, 'dataset_id': automl_dataset_id, 'image_classification_model_metadata': {'train_budget': training_budget}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error creating AutoML model, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record operation_name so we can poll for it later.
response = json.loads(resp.text)
operation_name = response['name']
"""
Explanation: Next, we will train the model to perform classification. We will set the training budget to be a maximum of 1hr (but this can be modified below). The cost of using AutoML can be found here. Typically, the longer the model is trained for, the more accurate it will be.
End of explanation
"""
path = os.path.join(AUTOML_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
sleep_time = 5*60 # Update each 5 minutes.
response = wait_for_operation_completion(path, timeout, sleep_time)
full_model_name = response['response']['name']
# google.cloud.automl to make api calls to Cloud AutoML
!pip install google-cloud-automl
from google.cloud import automl_v1
client = automl_v1.AutoMlClient()
response = client.deploy_model(full_model_name)
print(u'Model deployment finished. {}'.format(response.result()))
"""
Explanation: The output of the previous step is also an operation whose status we will need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete.
End of explanation
"""
# Path to request to get model accuracy metrics.
path = os.path.join(AUTOML_API_URL, full_model_name, 'modelEvaluations')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error getting AutoML model evaluations, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
"""
Explanation: Next, we will check out the accuracy metrics for the trained model. The following command will return the AUC (ROC), precision and recall for the model, for various ML classification thresholds.
End of explanation
"""
# Pubsub config.
pubsub_topic_id = "MY_PUBSUB_TOPIC_ID" # @param
pubsub_subscription_id = "MY_PUBSUB_SUBSCRIPTION_ID" # @param
# DICOM Store for store DICOM used for inference.
inference_dicom_store_id = "MY_INFERENCE_DICOM_STORE" # @param
pubsub_subscription_name = "projects/" + project_id + "/subscriptions/" + pubsub_subscription_id
inference_dicom_store_path = dicom_path.FromPath(dicom_store_path, store_id=inference_dicom_store_id)
%%bash -s {pubsub_topic_id} {pubsub_subscription_id} {project_id} {location} {dataset_id} {inference_dicom_store_id}
# Create Pubsub channel.
gcloud beta pubsub topics create $1
gcloud beta pubsub subscriptions create $2 --topic $1
# Create a Cloud Healthcare DICOM store that published on given Pubsub topic.
TOKEN=`gcloud beta auth application-default print-access-token`
NOTIFICATION_CONFIG="{notification_config: {pubsub_topic: \"projects/$3/topics/$1\"}}"
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${TOKEN}" -d "${NOTIFICATION_CONFIG}" https://healthcare.googleapis.com/v1/projects/$3/locations/$4/datasets/$5/dicomStores?dicom_store_id=$6
# Enable Cloud Healthcare API to publish on given Pubsub topic.
PROJECT_NUMBER=`gcloud projects describe $3 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
gcloud beta pubsub topics add-iam-policy-binding $1 --member="serviceAccount:${SERVICE_ACCOUNT}" --role="roles/pubsub.publisher"
"""
Explanation: Inference
To allow medical imaging ML models to be easily integrated into clinical workflows, an inference module can be used. A standalone modality, a PACS system or a DICOM router can push DICOM instances into Cloud Healthcare DICOM stores, allowing ML models to be triggered for inference. These inference results can then be structured into various DICOM formats (e.g. DICOM structured reports) and stored in the Cloud Healthcare API, which can then be retrieved by the customer.
The inference module is built as a Docker container and deployed using Kubernetes, allowing you to easily scale your deployment. The dataflow for inference can look as follows (see corresponding diagram below):
Client application uses STOW-RS to push a new DICOM instance to the Cloud Healthcare DICOMWeb API.
The insertion of the DICOM instance triggers a Cloud Pubsub message to be published. The inference module will pull incoming Pubsub messages and will receive a message for the previously inserted DICOM instance.
The inference module will retrieve the instance in JPEG format from the Cloud Healthcare API using WADO-RS.
The inference module will send the JPEG bytes to the model hosted on AutoML.
AutoML will return the prediction back to the inference module.
The inference module will package the prediction into a DICOM instance. This can potentially be a DICOM structured report, presentation state, or even burnt text on the image. In this codelab, we will focus on just DICOM structured reports, specifically Comprehensive Structured Reports. The structured report is then stored back in the Cloud Healthcare API using STOW-RS.
The client application can query for (or retrieve) the structured report by using QIDO-RS or WADO-RS. Pubsub can also be used by the client application to poll for the newly created DICOM structured report instance.
To begin, we will create a new DICOM store that will store our inference source (DICOM mammography instance) and results (DICOM structured report). In order to enable Pubsub notifications to be triggered on inserted instances, we will give the DICOM store a Pubsub channel to publish on.
End of explanation
"""
%%bash -s {project_id}
PROJECT_ID=$1
gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference
"""
Explanation: Next, we will build the inference module using the Cloud Build API. This will create a Docker container that will be stored in Google Container Registry. The inference module code is found in inference.py. The build script used to build the Docker container for this module is cloudbuild.yaml. Progress of the build may be found on the Cloud Build dashboard.
End of explanation
"""
%%bash -s {project_id} {location} {pubsub_subscription_name} {full_model_name} {inference_dicom_store_path}
gcloud container clusters create inference-module --region=$2 --scopes https://www.googleapis.com/auth/cloud-platform --num-nodes=1
PROJECT_ID=$1
SUBSCRIPTION_PATH=$3
MODEL_PATH=$4
INFERENCE_DICOM_STORE_PATH=$5
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: inference-module
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: inference-module
spec:
containers:
- name: inference-module
image: gcr.io/${PROJECT_ID}/inference-module:latest
command:
- "/opt/inference_module/bin/inference_module"
- "--subscription_path=${SUBSCRIPTION_PATH}"
- "--model_path=${MODEL_PATH}"
- "--dicom_store_path=${INFERENCE_DICOM_STORE_PATH}"
- "--prediction_service=AutoML"
EOF
"""
Explanation: Next, we will deploy the inference module to Kubernetes.
To do so, we create a Kubernetes cluster and then a Deployment for the inference module.
End of explanation
"""
# DICOM Study/Series UID of input mammography image that we'll push for inference.
input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009"
input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992"
input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294"
from google.cloud import storage
from dicomweb_client.api import DICOMwebClient
from dicomweb_client import session_utils
import pydicom
storage_client = storage.Client()
bucket = storage_client.bucket('gcs-public-data--healthcare-tcia-cbis-ddsm', user_project=project_id)
blob = bucket.blob("dicom/{}/{}/{}.dcm".format(input_mammo_study_uid,input_mammo_series_uid,input_mammo_instance_uid))
blob.download_to_filename('example.dcm')
dataset = pydicom.dcmread('example.dcm')
session = session_utils.create_session_from_gcp_credentials()
study_path = dicom_path.FromPath(inference_dicom_store_path, study_uid=input_mammo_study_uid)
dicomweb_url = os.path.join(HEALTHCARE_API_URL, study_path.dicomweb_path_str)
dcm_client = DICOMwebClient(dicomweb_url, session)
dcm_client.store_instances(datasets=[dataset])
"""
Explanation: Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the inference module.
End of explanation
"""
!kubectl logs -l app=inference-module
"""
Explanation: You should be able to observe the inference module's logs by running the following command. In the logs, you should see that the inference module successfully received the Pubsub message and ran inference on the DICOM instance. The logs should also include the inference results. It can take a few minutes for the Kubernetes deployment to start up, so you may need to run this a few times.
End of explanation
"""
dcm_client.search_for_instances(study_path.study_uid, fields=['all'])
"""
Explanation: You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag "0040A730".
You can optionally also use WADO-RS to retrieve the instance (e.g. for viewing).
End of explanation
"""
|
GoogleCloudPlatform/data-science-on-gcp | 07_sparkml/logistic_regression.ipynb | apache-2.0 | BUCKET='ai-analytics-solutions-dsongcp' # CHANGE ME
import os
os.environ['BUCKET'] = BUCKET
# Create spark session
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression w/ Spark ML") \
.getOrCreate()
print(spark)
print(sc)
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint
"""
Explanation: <h1> Logistic Regression using Spark ML </h1>
Set up bucket
End of explanation
"""
traindays = spark.read \
.option("header", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
spark.sql("SELECT * from traindays LIMIT 5").show()
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read.json(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
"""
Explanation: <h2> Read dataset </h2>
End of explanation
"""
trainquery = """
SELECT
f.*
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
print(traindata.head(2)) # if this is empty, try changing the shard you are using.
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
"""
Explanation: <h2> Clean up </h2>
End of explanation
"""
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.dep_delay IS NOT NULL AND
f.arr_delay IS NOT NULL
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.CANCELLED == 'False' AND
f.DIVERTED == 'False'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
def to_example(fields):
return LabeledPoint(\
float(fields['ARR_DELAY'] < 15), #ontime? \
[ \
fields['DEP_DELAY'], \
fields['TAXI_OUT'], \
fields['DISTANCE'], \
])
examples = traindata.rdd.map(to_example)
lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
print(lrmodel.weights,lrmodel.intercept)
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
lrmodel.clearThreshold()
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
"""
Explanation: Note that the counts for the various columns are all different; we have to remove NULLs in the delay variables (these correspond to canceled or diverted flights).
<h2> Logistic regression </h2>
End of explanation
"""
!gsutil -m rm -r gs://$BUCKET/flights/sparkmloutput/model
MODEL_FILE='gs://' + BUCKET + '/flights/sparkmloutput/model'
lrmodel.save(sc, MODEL_FILE)
print('{} saved'.format(MODEL_FILE))
lrmodel = 0
print(lrmodel)
"""
Explanation: <h2> Predict with the model </h2>
First save the model
End of explanation
"""
from pyspark.mllib.classification import LogisticRegressionModel
lrmodel = LogisticRegressionModel.load(sc, MODEL_FILE)
lrmodel.setThreshold(0.7)
print(lrmodel.predict([36.0,12.0,594.0]))
print(lrmodel.predict([8.0,4.0,594.0]))
"""
Explanation: Now retrieve the model
End of explanation
"""
lrmodel.clearThreshold() # to make the model produce probabilities
print(lrmodel.predict([20, 10, 500]))
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dist = np.arange(10, 2000, 10)
prob = [lrmodel.predict([20, 10, d]) for d in dist]
sns.set_style("whitegrid")
ax = plt.plot(dist, prob)
plt.xlabel('distance (miles)')
plt.ylabel('probability of ontime arrival')
delay = np.arange(-20, 60, 1)
prob = [lrmodel.predict([d, 10, 500]) for d in delay]
ax = plt.plot(delay, prob)
plt.xlabel('departure delay (minutes)')
plt.ylabel('probability of ontime arrival')
"""
Explanation: <h2> Examine the model behavior </h2>
For dep_delay=20 and taxiout=10, how does the distance affect prediction?
End of explanation
"""
inputs = 'gs://{}/flights/tzcorr/all_flights-00001-*'.format(BUCKET) # 1/30th
flights = spark.read.json(inputs)
flights.createOrReplaceTempView('flights')
testquery = trainquery.replace("t.is_train_day == 'True'","t.is_train_day == 'False'")
print(testquery)
testdata = spark.sql(testquery)
examples = testdata.rdd.map(to_example)
testdata.describe().show() # if this is empty, change the shard you are using
def eval(labelpred):
'''
data = (label, pred)
data[0] = label
data[1] = pred
'''
cancel = labelpred.filter(lambda data: data[1] < 0.7)
nocancel = labelpred.filter(lambda data: data[1] >= 0.7)
corr_cancel = cancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count()
corr_nocancel = nocancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count()
cancel_denom = cancel.count()
nocancel_denom = nocancel.count()
if cancel_denom == 0:
cancel_denom = 1
if nocancel_denom == 0:
nocancel_denom = 1
return {'total_cancel': cancel.count(), \
'correct_cancel': float(corr_cancel)/cancel_denom, \
'total_noncancel': nocancel.count(), \
'correct_noncancel': float(corr_nocancel)/nocancel_denom \
}
# Evaluate model
lrmodel.clearThreshold() # so it returns probabilities
labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features)))
print('All flights:')
print(eval(labelpred))
# keep only those examples near the decision threshold
print('Flights near decision threshold:')
labelpred = labelpred.filter(lambda data: data[1] > 0.65 and data[1] < 0.75)
print(eval(labelpred))
"""
Explanation: <h2> Evaluate model </h2>
Evaluate on the test data
End of explanation
"""
|
kristianfoerster/melodist | examples/precip5min_example.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
import melodist
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: MELODIST 5min precipitation example
In this notebook the usage of MELODIST for working with highly resolved precipitation utilizing the cascade model is demonstrated.
For this purpose, we use a subset of the station data for Rosenthal-Willershausen, which is located in the file examples/testdata_precip5min.csv.gz. The approach differs to some degree from the examples involving hourly data, because the design of the station object is (up to now) limited to hourly data and its statistics. Here we call functions of the MELODIST package directly.
Import all relevant packages:
End of explanation
"""
file = 'testdata_precip5min.csv.gz'
data = pd.read_csv(file, names=['time','precip'], parse_dates=[0], \
index_col=0)
"""
Explanation: First, we load time series data which includes 7 years of highly resolved rainfall data (i.e. 5 minutes temporal resolution):
End of explanation
"""
np.unique(data['precip'])
"""
Explanation: Before we start running MELODIST features, a first look at the data reveals some interesting things to keep in mind. We briefly explore the data in terms of unique values:
End of explanation
"""
median = np.percentile(data['precip'][data['precip']>0.],50)
p90 = np.percentile(data['precip'][data['precip']>0.],90)
print(median)
print(p90)
"""
Explanation: This table reveals that the time series consists of multiples of 0.2 mm/5min. This is a technical limitation of the rain gauge used here. It is a tipping bucket rain gauge which needs to collect at least 0.2 mm of water (in fact, it's a personal weather station of one of the authors; for this purpose the accuracy is sufficient ;-) ). This value is the minimum amount required to send a signal to the data logger. Hence, a continuous series is separated into 0.2 mm volumes or multiples of that value. This limitation in accuracy has some implications for modelling. Since a lot of smaller events below this threshold are not captured, a single value of 0.2 mm/5min might cover several small events. What about the distribution of values?
End of explanation
"""
cascopt = melodist.build_casc(data,hourly=False,level=9, percentile=90)
"""
Explanation: Since the median is equal to the minimum value, using the median to separate below- and above-average conditions is not feasible. Only the 90th percentile does not coincide with the minimum value. Therefore, we use the 90th percentile to compute the statistics for below- and above-average conditions, respectively.
Please note: the following line can take noticeably longer to compute (up to one minute):
End of explanation
"""
# derive daily values first
precip_daily = data.resample('D').sum()
n = 4
list_res = list()
for ii in range(n):
disag = melodist.disagg_prec_cascade(precip_daily['precip'],cascopt[0], \
hourly=False,level=9)
list_res.append(disag)
"""
Explanation: n.b.: Running with the median will result in warnings.
Once the statistics have been derived, you can run the disaggregation function accordingly. Please note that an independent period is often used (split-sample test). Due to the limited length of the time series, we apply the same period for both calibration and disaggregation. However, we involve four different realisations to highlight the stochastic nature of the methodology:
End of explanation
"""
def plot(cumsum=False):
plt.figure(figsize=(9,9))
for ii in range(n):
disag = list_res[ii]
results = pd.DataFrame(data={'Observed':data['precip'], 'Disaggregated':disag})
ax = plt.subplot(2, 2, ii+1)
if cumsum:
np.cumsum(results['20110904']).plot(ax=ax)
ytext = 'Rainfall total [mm]'
else:
results['20110904'].plot(ax=ax)
ytext = 'Rainfall intensity [mm/5min]'
if ii > 1:
ax.set_xlabel('Time')
else:
ax.set_xlabel('')
if ii == 0 or ii == 2:
ax.set_ylabel(ytext)
ax.set_title('Realisation #%i' % (ii+1))
plt.tight_layout()
"""
Explanation: For displaying the results we define a function that allows displaying both intensities and cumulative series. As an example we focus on 04 Sep 2011. Please feel free to change the date.
End of explanation
"""
plot(cumsum=False)
"""
Explanation: Now we can run the function twice in order to visualize intensities and the mass curves:
End of explanation
"""
plot(cumsum=True)
"""
Explanation: This plot demonstrates that completely different solutions are possible on that day. However, major characteristics of the multi-year period should be preserved. In order to show that the model preserves the precipitation total on each day, we run the same function using cumsum=True:
End of explanation
"""
|
usantamaria/iwi131 | ipynb/21-EjerciciosDeCertamen/Certamen2_2014_1S_CC.ipynb | cc0-1.0 | r,s = (2014,3,12),(2014,1,1)
t = (2014,2,1)
print r > s and s < t
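# Expected output: True   (r > s since 3 > 1, and s < t since 1 < 2)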
# DIGRESSION: COMPARING TUPLES OF THE SAME LENGTH
# Elements are compared in order.
# The first element that is greater decides the result.
t1 = (0,1,2,3,4)
t2 = (10,0,0,0)
print t1<t2
# DIGRESSION: COMPARING TUPLES OF DIFFERENT LENGTHS
# Elements are compared in order; the first element that is greater decides the result.
# If one tuple is a prefix of the other, the shorter tuple is considered smaller.
t1 = (0,1,2,3)
t2 = (0,1,2,3,4,5)
print t1<t2
w = {'uno':[1,3],'dos':[2,4],
'tres':[3,6]}
print w['uno'] + w['tres']
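# Expected output: [1, 3, 3, 6]   (list concatenation of w['uno'] and w['tres'])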
def funcion1(a):
a.reverse()
return a
x = {1:[1, 0], 0:[0, 1]}
r = funcion1(x[0])[1]
print r
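# Expected output: 0   (funcion1 reverses x[0] in place to [1, 0] and returns it; index [1] is 0)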
def funcion2(x):
if len(x) == 1:
return x
else:
return x[-1] + funcion2(x[:-1])
print funcion2('FTW')
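# Expected output: WTF   (the recursion reverses the string: 'W' + 'T' + 'F')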
"""
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
http://progra.usm.cl/
https://www.github.com/usantamaria/iwi131
Solutions to Certamen 2 (Exam 2), 1st semester 2014, Casa Central
Question 1 [25%]
(a) Trace the execution of the following programs and indicate what they print.
Each time the value of a variable changes, write it in a new row of the table.
Remember that if a variable is of type string, its value must be written between single quotes ' '.
If a variable stores a function, write the function's name as the value (without quotes).
<img src="images/2014.png" alt="" align="middle"/>
Question 1.b: Printed output
Indicate what the following programs print.
End of explanation
"""
# LOAD THE DATA
terroristas = {
2352: [('Stanfox', '2010-05-02'),
('Hardyard', '2010-06-07'),
('Yon Jopkins', '2010-05-02')],
1352: [('Stanfox', '2010-05-02'),
('Stanfox', '2011-06-08')],
352: [('Hardyard', '2009-03-03')],
22: [('Yon Jopkins', '2012-11-16')]}
experticias = { 2352:'TNT', 1352:'TNT',
352:'rayos laser', 22:'teletransportacion'}
"""
Explanation: Question 2 [35%]
UTFSM's intelligence service has detected an imminent threat against its
facilities by a terrorist group that wants to prevent the university from becoming the best educational institution in the world. Given the severity of this threat, the "IWI-131" agent division has been asked to analyze the data gathered by the infiltrators that the intelligence service keeps at other universities.
The data to work with are stored in a dictionary called terroristas (a global variable) whose keys are the identifier of each terrorist and whose values are lists of tuples indicating the universities where that terrorist has been seen, together with the corresponding date.
Python
terroristas = {
2352: [('Stanfox', '2010-05-02'),
('Hardyard', '2010-06-07'),
('Yon Jopkins', '2010-05-02')],
1352: [('Stanfox', '2010-05-02'),
('Stanfox', '2011-06-08')],
352: [('Hardyard', '2009-03-03')],
22: [('Yon Jopkins', '2012-11-16')]}
A dictionary called experticias (a global variable) whose keys are the identifier of each terrorist and whose values are that terrorist's expertise.
Python
experticias = { 2352:'TNT', 1352:'TNT',
352:'rayos laser', 22:'teletransportacion'}
End of explanation
"""
def terroristas_se_conocen(terrorista1, terrorista2):
lugares1 = set( terroristas[terrorista1] )
lugares2 = set( terroristas[terrorista2] )
return len(lugares1 & lugares2) > 0
print terroristas_se_conocen(2352, 1352)
print terroristas_se_conocen(2352, 352)
"""
Explanation: Question 2.a
Write the function terroristas_se_conocen(terrorista1, terrorista2) that receives
the identifiers of two terrorists as parameters and returns True if they know each other, or False if they do not. Two terrorists know each other if both have been seen at the same place on the same date.
```Python
terroristas_se_conocen(2352, 1352)
True
terroristas_se_conocen(2352, 352)
False
```
Solution strategy:
* What structure do the input data have?
* What structure must the output data have?
* How do I process the inputs to produce the desired output?
End of explanation
"""
def terroristas_que_han_estado_en(universidad):
terroristas_a_la_vista = set()
for terrorista_id, terrorista_lugares in terroristas.items():
for lugar, fecha in terrorista_lugares:
if lugar==universidad:
terroristas_a_la_vista.add(terrorista_id)
return terroristas_a_la_vista
print terroristas_que_han_estado_en('Stanfox')
print terroristas_que_han_estado_en('Prinxton')
"""
Explanation: Question 2.b
Write the function terroristas_que_han_estado_en(universidad) that receives a
university name as a parameter and returns a set made up of the identifiers of the terrorists that have been seen at the university given as a parameter.
```Python
terroristas_que_han_estado_en('Stanfox')
set([1352, 2352])
terroristas_que_han_estado_en('Prinxton')
set([])
```
Solution strategy:
* What structure do the input data have?
* What structure must the output data have?
* How do I process the inputs to produce the desired output?
End of explanation
"""
def sleeper_call(terrorista_id):
for tid in terroristas:
if tid!=terrorista_id:
if terroristas_se_conocen(tid, terrorista_id):
return False
return True
def terroristas_clave():
    # Find the key terrorists
experticias_inv = {}
for tid, exp in experticias.items():
if exp not in experticias_inv:
experticias_inv[exp] = []
experticias_inv[exp].append(tid)
lista_terroristas_claves = []
for exp, lista_id in experticias_inv.items():
if len(lista_id)==1:
lista_terroristas_claves.append(lista_id[0])
# Sleeper Call
clave_y_sleeper_call = []
for tid in lista_terroristas_claves:
clave_y_sleeper_call.append( (tid, sleeper_call(tid)))
# Return value
return clave_y_sleeper_call
terroristas_clave()
"""
Explanation: Question 2.c
Write the function terroristas_clave() that returns a list of tuples with the identifiers of the key terrorists, reporting whether each of them belongs (True) or not (False) to a "sleeper cell".
A terrorist is considered key if he or she is the only one
with a given expertise.
A terrorist is considered to belong to a "sleeper cell" if he or she does not know any
other terrorist.
```Python
terroristas_clave()
[(22, True), (352, True)]
```
Solution strategy:
* What structure do the input data have?
* What structure must the output data have?
* How do I process the inputs to produce the desired output?
End of explanation
"""
# LOAD DATA
inscritos = { #num_corredor: (rut,nombre,apellido,id_categoria,edad)
1001: ('1111111-2', 'Carlos', 'Caszely', 2, 55),
1002: ('223244-4', 'Marcelo', 'Rios', 3, 45),
2129: ('3838292-1', 'Ivan', 'Zamorano', 4, 38),
4738: ('5940301-2', 'Erika', 'Olivera', 5, 48),
8883: ('3843993-1', 'Condor', 'ito', 3, 22),
231: ('9492922-2', 'Pepe', 'Antartico', 3, 30)
}
categorias = { # id_categoria: (distancia, premio)
1: ('1k', 10000),
2: ('5k', 20000),
3: ('10k', 450000),
4: ('21k', 100000),
5: ('42k', 250000)
}
resultados = [(1001, '00:30:12'), (1002, '00:55:43'),
(2129, '01:45:23'), (4738, '03:05:09'),
(8883, '00:31:33'), (231, '00:39:45')]
# NOTE
# We currently have the results as a list
resultados = [(1001, '00:30:12'), (1002, '00:55:43'),
(2129, '01:45:23'), (4738, '03:05:09'),
(8883, '00:31:33'), (231, '00:39:45')]
# To look up one particular result (e.g. for runner 4738), we would have to scan the whole list
# But we can convert it into a dictionary of results
resultados_dict = dict(resultados)
print resultados_dict[2129]
print resultados_dict[231]
# This works thanks to a beautiful symmetry in Python
diccio = {"zero":0, "uno":1,"dos":2,"tres":3,"cuatro":4}
print diccio
l = diccio.items()
print l
d = dict(l)
print d
# That is, we can convert any list of the form [(key1, val1), ..., (keyn, valn)]
# into a dictionary {key1:val1, ..., keyn:valn}
"""
Explanation: Question 3 [35%]
The great Chago City marathon is one of the most important races in the world.
Because of the large number of competitors this event gathers, the following structures have been created to help with the organization.
A dictionary of registered runners, storing each runner's number together with that runner's data.
Python
inscritos = { #num_corredor: (rut,nombre,apellido,id_categoria,edad)
1001: ('1111111-2', 'Carlos', 'Caszely', 2, 55),
1002: ('223244-4', 'Marcelo', 'Rios', 3, 45),
2129: ('3838292-1', 'Ivan', 'Zamorano', 4, 38),
4738: ('5940301-2', 'Erika', 'Olivera', 5, 48),
8883: ('3843993-1', 'Condor', 'ito', 3, 22),
231: ('9492922-2', 'Pepe', 'Antartico', 3, 30)
}
A dictionary of categories, storing each category's identifier together with its distance and prize.
Python
categorias = { # id_categoria: (distancia, premio)
1: ('1k', 10000),
2: ('5k', 20000),
3: ('10k', 450000),
4: ('21k', 100000),
5: ('42k', 250000)
}
A list of results, recording each runner's number and the time he or she achieved.
```Python
[ (num_corredor, tiempo) ]
resultados = [(1001, '00:30:12'), (1002, '00:55:43'),
(2129, '01:45:23'), (4738, '03:05:09'),
(8883, '00:31:33'), (231, '00:39:45')]
```
End of explanation
"""
def competidores_edad(inscritos, categorias, min_edad, max_edad):
corredores_en_edad = []
for num_corredor, datos in inscritos.items():
rut, nombre, apellido, id_categoria,edad = datos
if min_edad<=edad<=max_edad:
distancia, premio = categorias[id_categoria]
tupla = (nombre, apellido, distancia)
corredores_en_edad.append(tupla)
return corredores_en_edad
print competidores_edad(inscritos,categorias,25,40)
"""
Explanation: Question 3.a
Write the function competidores_edad(inscritos, categorias, min_edad, max_edad)
that receives the inscritos dictionary, the categorias dictionary, and the integer values min_edad and max_edad (representing the maximum and minimum ages).
The function must return a list of tuples with all competitors whose age lies between the minimum and maximum ages (inclusive), where each tuple contains the person's first name, last name, and the distance to be run.
```Python
competidores_edad(inscritos,categorias,25,40)
[('Pepe', 'Antartico', '10k'), ('Ivan', 'Zamorano', '21k')]
```
Solution strategy:
* What structure do the input data have?
* What structure must the output data have?
* How do I process the inputs to produce the desired output?
End of explanation
"""
def obtener_numero_corredor(inscritos, rut_buscado):
for num_corredor, datos in inscritos.items():
rut, nombre, apellido, id_categoria,edad = datos
if rut==rut_buscado:
return num_corredor
print "Not found"
return ""
def tiempo_competidor(inscritos, resultados, rut):
    # Get the runner's number (num_corredor)
num_corredor = obtener_numero_corredor(inscritos, rut)
    # Convert resultados into a dict
resultados_dict = dict(resultados)
    # Look up the time
return resultados_dict[num_corredor]
print tiempo_competidor(inscritos,resultados,'9492922-2')
"""
Explanation: Question 3.b
Write the function tiempo_competidor(inscritos, resultados, rut) that receives the
inscritos dictionary, the resultados list of tuples, and the string rut.
The function must return the time, as a text string, of a particular competitor.
```Python
tiempo_competidor(inscritos,resultados,'9492922-2')
'00:39:45'
```
Solution strategy:
* What structure do the input data have?
* What structure must the output data have?
* How do I process the inputs to produce the desired output?
End of explanation
"""
def tiempo_en_segundos(tiempo_string):
horas = int(tiempo_string[:2])
minutos = int(tiempo_string[3:5])
segundos = int(tiempo_string[6:])
tiempo = horas*3600+minutos*60+segundos
return tiempo
def ganador_categoria(inscritos, categorias, resultados, distancia):
    # Find the category id for the given distance
id_distancia = 0
premio_distancia = 0
for idc, (dist, premio) in categorias.items():
if dist==distancia:
id_distancia = idc
premio_distancia = premio
    # Convert resultados into a dict
resultados_dict = dict(resultados)
    # Find the smallest time
menor_tiempo = float("inf")
nombre_ganador = ""
apellido_ganador = ""
for num_corredor, datos in inscritos.items():
rut, nombre, apellido, id_categoria,edad = datos
if id_categoria==id_distancia:
tiempo = tiempo_en_segundos(resultados_dict[num_corredor])
if tiempo<menor_tiempo:
menor_tiempo = tiempo
nombre_ganador = nombre
apellido_ganador = apellido
    # Return the winner
tupla_ganador = (nombre_ganador, apellido_ganador, premio_distancia)
return tupla_ganador
ganador_categoria(inscritos, categorias, resultados, '10k')
"""
Explanation: Question 3.c
Write the function ganador_categoria(inscritos, categorias, resultados, distancia) that receives the inscritos dictionary, the categorias dictionary, the resultados list of
tuples, and the string distancia. The function must return a tuple with the winner of the category, indicating the first name, last name, and prize obtained.
```Python
ganador_categoria(inscritos, categorias, resultados, '10k')
('Condor', 'ito', 450000)
```
Solution strategy:
* What structure do the input data have?
* What structure must the output data have?
* How do I process the inputs to produce the desired output?
End of explanation
"""
|
sf-wind/caffe2 | caffe2/python/tutorials/Toy_Regression.ipynb | apache-2.0 | from caffe2.python import core, cnn, net_drawer, workspace, visualize
import numpy as np
from IPython import display
from matplotlib import pyplot
"""
Explanation: Tutorial 2. A Simple Toy Regression
This is a quick example showing how one can use the concepts introduced in Tutorial 1 (Basics) to do a quick toy regression.
The problem we are dealing with is a very simple one, with two-dimensional input x and one-dimensional output y, and a weight vector w=[2.0, 1.5] and bias b=0.5. The equation to generate ground truth is:
y = wx + b
For this tutorial, we will be generating training data using Caffe2 operators as well. Note that this is usually not the case in your daily training jobs: in a real training scenario data is usually loaded from an external source, such as a Caffe DB (i.e. a key-value storage) or a Hive table. We will cover this in the MNIST tutorial.
We will write out every piece of math in Caffe2 operators. This is often an overkill if your algorithm is relatively standard, such as CNN models. In the MNIST tutorial, we will show how to use the CNN model helper to more easily construct CNN models.
End of explanation
"""
init_net = core.Net("init")
# The ground truth parameters.
W_gt = init_net.GivenTensorFill(
[], "W_gt", shape=[1, 2], values=[2.0, 1.5])
B_gt = init_net.GivenTensorFill([], "B_gt", shape=[1], values=[0.5])
# Constant value ONE is used in weighted sum when updating parameters.
ONE = init_net.ConstantFill([], "ONE", shape=[1], value=1.)
# ITER is the iterator count.
ITER = init_net.ConstantFill([], "ITER", shape=[1], value=0, dtype=core.DataType.INT32)
# For the parameters to be learned: we randomly initialize weight
# from [-1, 1] and init bias with 0.0.
W = init_net.UniformFill([], "W", shape=[1, 2], min=-1., max=1.)
B = init_net.ConstantFill([], "B", shape=[1], value=0.0)
print('Created init net.')
"""
Explanation: Declaring the computation graphs
There are two graphs that we declare: one is used to initialize the various parameters and constants that we are going to use in the computation, and another main graph that is used to run stochastic gradient descent.
First, the init net: note that the name does not matter, we basically want to put the initialization code in one net so we can then call RunNetOnce() to execute it. The reason we have a separate init_net is that, these operators do not need to run more than once for the whole training procedure.
End of explanation
"""
train_net = core.Net("train")
# First, we generate random samples of X and create the ground truth.
X = train_net.GaussianFill([], "X", shape=[64, 2], mean=0.0, std=1.0, run_once=0)
Y_gt = X.FC([W_gt, B_gt], "Y_gt")
# We add Gaussian noise to the ground truth
noise = train_net.GaussianFill([], "noise", shape=[64, 1], mean=0.0, std=1.0, run_once=0)
Y_noise = Y_gt.Add(noise, "Y_noise")
# Note that we do not need to propagate the gradients back through Y_noise,
# so we mark StopGradient to notify the auto differentiating algorithm
# to ignore this path.
Y_noise = Y_noise.StopGradient([], "Y_noise")
# Now, for the normal linear regression prediction, this is all we need.
Y_pred = X.FC([W, B], "Y_pred")
# The loss function is computed by a squared L2 distance, and then averaged
# over all items in the minibatch.
dist = train_net.SquaredL2Distance([Y_noise, Y_pred], "dist")
loss = dist.AveragedLoss([], ["loss"])
"""
Explanation: The main training network is defined as follows. We will show the creation in multiple steps:
- The forward pass that generates the loss
- The backward pass that is generated by auto differentiation.
- The parameter update part, which is a standard SGD.
End of explanation
"""
graph = net_drawer.GetPydotGraph(train_net.Proto().op, "train", rankdir="LR")
display.Image(graph.create_png(), width=800)
"""
Explanation: Now, let's take a look at what the whole network looks like. From the graph below, you can find that it is mainly composed of four parts:
Randomly generate X for this batch (GaussianFill that generates X)
Use W_gt, B_gt and the FC operator to generate the ground truth Y_gt
Use the current parameters, W and B, to make predictions.
Compare the outputs and compute the loss.
End of explanation
"""
# Get gradients for all the computations above.
gradient_map = train_net.AddGradientOperators([loss])
graph = net_drawer.GetPydotGraph(train_net.Proto().op, "train", rankdir="LR")
display.Image(graph.create_png(), width=800)
"""
Explanation: Now, similar to all other frameworks, Caffe2 allows us to automatically generate the gradient operators. Let's do so and then look at what the graph becomes.
End of explanation
"""
# Increment the iteration by one.
train_net.Iter(ITER, ITER)
# Compute the learning rate that corresponds to the iteration.
LR = train_net.LearningRate(ITER, "LR", base_lr=-0.1,
policy="step", stepsize=20, gamma=0.9)
# Weighted sum
train_net.WeightedSum([W, ONE, gradient_map[W], LR], W)
train_net.WeightedSum([B, ONE, gradient_map[B], LR], B)
# Let's show the graph again.
graph = net_drawer.GetPydotGraph(train_net.Proto().op, "train", rankdir="LR")
display.Image(graph.create_png(), width=800)
"""
Explanation: Once we get the gradients for the parameters, we will add the SGD part of the graph: get the learning rate of the current step, and then do parameter updates. We are not doing anything fancy in this example: just simple SGDs.
End of explanation
"""
workspace.RunNetOnce(init_net)
workspace.CreateNet(train_net)
"""
Explanation: Now that we have created the networks, let's run them.
End of explanation
"""
print("Before training, W is: {}".format(workspace.FetchBlob("W")))
print("Before training, B is: {}".format(workspace.FetchBlob("B")))
for i in range(100):
workspace.RunNet(train_net.Proto().name)
"""
Explanation: Before we start any training iterations, let's take a look at the parameters.
End of explanation
"""
print("After training, W is: {}".format(workspace.FetchBlob("W")))
print("After training, B is: {}".format(workspace.FetchBlob("B")))
print("Ground truth W is: {}".format(workspace.FetchBlob("W_gt")))
print("Ground truth B is: {}".format(workspace.FetchBlob("B_gt")))
"""
Explanation: Now, let's take a look at the parameters after training.
End of explanation
"""
workspace.RunNetOnce(init_net)
w_history = []
b_history = []
for i in range(50):
workspace.RunNet(train_net.Proto().name)
w_history.append(workspace.FetchBlob("W"))
b_history.append(workspace.FetchBlob("B"))
w_history = np.vstack(w_history)
b_history = np.vstack(b_history)
pyplot.plot(w_history[:, 0], w_history[:, 1], 'r')
pyplot.axis('equal')
pyplot.xlabel('w_0')
pyplot.ylabel('w_1')
pyplot.grid(True)
pyplot.figure()
pyplot.plot(b_history)
pyplot.xlabel('iter')
pyplot.ylabel('b')
pyplot.grid(True)
"""
Explanation: Looks simple enough right? Let's take a closer look at the progression of the parameter updates over the training steps. For this, let's re-initialize the parameters, and look at the change of the parameters over the steps. Remember, we can fetch blobs from the workspace whenever we want.
End of explanation
"""
|
MStefko/STEADIER-SAILOR | src/test/resources/GibsonLanniAlgorithm.ipynb | gpl-3.0 | import sys
%pylab inline
import scipy.special
from scipy.interpolate import interp1d
from scipy.interpolate import RectBivariateSpline
print('Python {}\n'.format(sys.version))
print('NumPy\t\t{}'.format(np.__version__))
print('matplotlib\t{}'.format(matplotlib.__version__))
print('SciPy\t\t{}'.format(scipy.__version__))
"""
Explanation: Generate test data for SASS implementation of the Gibson-Lanni PSF.
This Python algorithm has been verified against the original MATLAB code from the paper Li, J., Xue, F., & Blu, T. (2017). Fast and accurate three-dimensional point spread function computation for fluorescence microscopy. JOSA A, 34(6), 1029-1034.
End of explanation
"""
# Image properties
# Size of the PSF array, pixels
size_x = 256
size_y = 256
size_z = 1
# Precision control
num_basis = 100 # Number of rescaled Bessels that approximate the phase function
num_samples = 1000 # Number of pupil samples along radial direction
oversampling = 2 # Defines the upsampling ratio on the image space grid for computations
# Microscope parameters
NA = 1.4
wavelength = 0.610 # microns
M = 100 # magnification
ns = 1.33 # specimen refractive index (RI)
ng0 = 1.5 # coverslip RI design value
ng = 1.5 # coverslip RI experimental value
ni0 = 1.5 # immersion medium RI design value
ni = 1.5 # immersion medium RI experimental value
ti0 = 150 # microns, working distance (immersion medium thickness) design value
tg0 = 170 # microns, coverslip thickness design value
tg = 170 # microns, coverslip thickness experimental value
resPSF = 0.02 # microns (resPSF in the Java code)
resLateral = 0.1 # microns (resLateral in the Java code)
res_axial = 0.25 # microns
pZ = 2 # microns, particle distance from coverslip
z = [-2] # microns, stage displacement away from best focus
# Scaling factors for the Fourier-Bessel series expansion
min_wavelength = 0.436 # microns
scaling_factor = NA * (3 * np.arange(1, num_basis + 1) - 2) * min_wavelength / wavelength
"""
Explanation: Simulation setup
Define the simulation parameters
End of explanation
"""
# Place the origin at the center of the final PSF array
x0 = (size_x - 1) / 2
y0 = (size_y - 1) / 2
# Find the maximum possible radius coordinate of the PSF array by finding the distance
# from the center of the array to a corner
max_radius = round(sqrt((size_x - x0) * (size_x - x0) + (size_y - y0) * (size_y - y0))) + 1;
# Radial coordinates, image space
r = resPSF * np.arange(0, oversampling * max_radius) / oversampling
# Radial coordinates, pupil space
a = min([NA, ns, ni, ni0, ng, ng0]) / NA
rho = np.linspace(0, a, num_samples)
# Convert z to array
z = np.array(z)
"""
Explanation: Create the coordinate systems
End of explanation
"""
# Define the wavefront aberration
OPDs = pZ * np.sqrt(ns * ns - NA * NA * rho * rho) # OPD in the sample
OPDi = (z.reshape(-1,1) + ti0) * np.sqrt(ni * ni - NA * NA * rho * rho) - ti0 * np.sqrt(ni0 * ni0 - NA * NA * rho * rho) # OPD in the immersion medium
OPDg = tg * np.sqrt(ng * ng - NA * NA * rho * rho) - tg0 * np.sqrt(ng0 * ng0 - NA * NA * rho * rho) # OPD in the coverslip
W = 2 * np.pi / wavelength * (OPDs + OPDi + OPDg)
# Sample the phase
# Shape is (number of z samples by number of rho samples)
phase = np.cos(W) + 1j * np.sin(W)
# Define the basis of Bessel functions
# Shape is (number of basis functions by number of rho samples)
J = scipy.special.jv(0, scaling_factor.reshape(-1, 1) * rho)
# Compute the approximation to the sampled pupil phase by finding the least squares
# solution to the complex coefficients of the Fourier-Bessel expansion.
# Shape of C is (number of basis functions by number of z samples).
# Note the matrix transposes to get the dimensions correct.
C, residuals, _, _ = np.linalg.lstsq(J.T, phase.T)
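# Optional sanity check (not part of the original notebook): confirm the shapes used in
# the least-squares step. J.T is (num_samples, num_basis), phase.T is (num_samples, len(z)),
# so the coefficient matrix C has shape (num_basis, len(z)).
print('J.T: {}, phase.T: {}, C: {}'.format(J.T.shape, phase.T.shape, C.shape))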
"""
Explanation: Step 1: Approximate the pupil phase with a Fourier-Bessel series
z.reshape(-1,1) flips z from a row array to a column array so that it may be broadcast across rho.
The coefficients C are found by a least-squares solution to the equation
\begin{equation}
\mathbf{\phi} \left( \rho , z \right)= \mathbf{J} \left( \rho \right) \mathbf{c} \left( z \right)
\end{equation}
\( \mathbf{c} \) has dimensions num_basis \( \times \) len(z). The J array has dimensions num_basis \( \times \) len(rho) and the phase array has dimensions len(z) \( \times \) len(rho). The J and phase arrays are therefore transposed to get the dimensions right in the call to np.linalg.lstsq.
End of explanation
"""
b = 2 * np.pi * r.reshape(-1, 1) * NA / wavelength
# Convenience functions for J0 and J1 Bessel functions
J0 = lambda x: scipy.special.jv(0, x)
J1 = lambda x: scipy.special.jv(1, x)
# See equation 5 in Li, Xue, and Blu
denom = scaling_factor * scaling_factor - b * b
R = (scaling_factor * J1(scaling_factor * a) * J0(b * a) * a - b * J0(scaling_factor * a) * J1(b * a) * a)
R /= denom
"""
Explanation: Step 2: Compute the PSF
Here, we use the Fourier-Bessel series expansion of the phase function and a Bessel integral identity to compute the approximate PSF. Each coefficient \( c_{m} \left( z \right) \) needs to be multiplied by
\begin{equation}
R \left(r; \mathbf{p} \right) = \frac{\sigma_m J_1 \left( \sigma_m a \right) J_0 \left( \beta a \right)a - \beta J_0 \left( \sigma_m a \right) J_1 \left( \beta a \right)a }{\sigma_m^2 - \beta^2}
\end{equation}
and the resulting products summed over the number of basis functions. \( \mathbf{p} \) is the parameter vector for the Gibson-Lanni model, \( \sigma_m \) is the scaling factor for the argument to the \( m'th \) Bessel basis function, and \( \beta = kr\text{NA} \).
b is defined such that R has dimensions of len(r) \( \times \) len(rho).
End of explanation
"""
# The transpose places the axial direction along the first dimension of the array, i.e. rows
# This is only for convenience.
PSF_rz = (np.abs(R.dot(C))**2).T
"""
Explanation: Now compute the point-spread function via
\begin{equation}
PSF \left( r, z; z_p, \mathbf{p} \right) = \left| \mathbf{R} \left( r; \mathbf{p} \right) \mathbf{c} \left( z \right) \right|^2
\end{equation}
End of explanation
"""
# Create the fleshed-out xy grid of radial distances from the center
xy = np.mgrid[0:size_y, 0:size_x]
r_pixel = np.sqrt((xy[1] - x0) * (xy[1] - x0) + (xy[0] - y0) * (xy[0] - y0)) * resPSF
PSF = np.zeros((size_y, size_x, size_z))
for z_index in range(PSF.shape[2]):
# Interpolate the radial PSF function
PSF_interp = interp1d(r, PSF_rz[z_index, :])
# Evaluate the PSF at each value of r_pixel
PSF[:,:, z_index] = PSF_interp(r_pixel.ravel()).reshape(size_y, size_x)
# Normalize to the area
norm_const = np.sum(np.sum(PSF[:,:,0])) * resPSF**2
PSF /= norm_const
plt.imshow(PSF[:,:,0])
plt.show()
"""
Explanation: Step 3: Resample the PSF onto a rotationally-symmetric Cartesian grid
Here we generate a two dimensional grid where the value at each grid point is the distance of the point from the center of the grid. These values are supplied to an interpolation function computed from PSF_rz to produce a rotationally-symmetric 2D PSF at each z-position.
End of explanation
"""
cdf = np.cumsum(PSF[:,:,0], axis=1) * resPSF
cdf = np.cumsum(cdf, axis=0) * resPSF
print('Min: {:.4f}'.format(np.min(cdf)))
print('Max: {:.4f}'.format(np.max(cdf)))
plt.imshow(cdf)
plt.show()
"""
Explanation: Compute the cumulative distribution
End of explanation
"""
x = (resPSF * (xy[1] - x0))[0]
y = (resPSF * (xy[0] - y0))[:,0]
# Compute the interpolated CDF
f = RectBivariateSpline(x, y, cdf)
def generatePixelSignature(pX, pY, eX, eY, eZ):
value = f((pX - eX + 0.5) * resLateral, (pY - eY + 0.5) * resLateral) + \
f((pX - eX - 0.5) * resLateral, (pY - eY - 0.5) * resLateral) - \
f((pX - eX + 0.5) * resLateral, (pY - eY - 0.5) * resLateral) - \
f((pX - eX - 0.5) * resLateral, (pY - eY + 0.5) * resLateral)
return value
generatePixelSignature(0, 0, 0, -1, 0)
generatePixelSignature(1, 1, 1, 1, 0)
generatePixelSignature(2, 1, 1, 1, 0)
generatePixelSignature(0, 1, 1, 1, 0)
generatePixelSignature(1, 2, 1, 1, 0)
generatePixelSignature(1, 0, 1, 1, 0)
generatePixelSignature(-1, 1, 1, 1, 0)
generatePixelSignature(3, 1, 1, 1, 0)
"""
Explanation: Interpolate the cumulative distribution
Here, we also create the Python equivalent to the getPixelSignature function.
Note that the ground truth is not symmetric about the center pixel because of the finite sampling of the CDF; it becomes more symmetric the smaller resPSF is and the larger sizeX/Y are.
End of explanation
"""
|
jamesfolberth/NGC_STEM_camp_AWS | notebooks/data8_notebooks/lab06/lab06.ipynb | bsd-3-clause | # Run this cell to set up the notebook, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('lab06.ok')
"""
Explanation: Resampling and the Bootstrap
Welcome to lab 6!
In textbook section 9.3, we saw an example of estimation. The British Royal Air Force wanted to know how many warplanes the Germans had (some number N, a population parameter), and they needed to estimate that quantity knowing only a random sample of the planes' serial numbers (from 1 to N). For example, one estimate was twice the mean of the sample serial numbers.
We investigated the random variation in these estimates by simulating sampling from the population many times and computing estimates from each sample. In real life, if the RAF had known what the population looked like, they would have known N and would not have had any reason to think about random sampling. They didn't know what the population looked like, so they couldn't have run the simulations we did. So that was useful as an exercise in understanding random variation in an estimate, but not as a tool for practical data analysis.
Now we'll flip that idea on its head to make it practical. Given just a random sample of serial numbers, we'll estimate N, and then we'll use simulation to find out how accurate our estimate probably is, without ever looking at the whole population. This is an example of statistical inference.
As usual, run the cell below to prepare the lab and the automatic tests.
End of explanation
"""
observations = Table.read_table("serial_numbers.csv")
num_observations = observations.num_rows
observations
"""
Explanation: 1. Preliminaries
Remember the setup: We (the RAF in World War II) want to know the number of warplanes fielded by the Germans. That number is N. The warplanes have serial numbers from 1 to N, so N is also equal to the largest serial number on any of the warplanes.
We only see a small number of serial numbers (assumed to be a random sample with replacement from among all the serial numbers), so we have to use estimation.
Question 1.1
Is N a population parameter or a statistic? If we compute a number using our random sample that's an estimate of N, is that a population parameter or a statistic?
Write your answer here, replacing this text.
Check your answer with a neighbor or a TA.
To make the situation realistic, we're going to hide the true number of warplanes from you. You'll have access only to this random sample:
End of explanation
"""
def plot_serial_numbers(numbers):
...
# Assuming the lines above produce a histogram, this next
# line may make your histograms look nicer. Feel free to
# delete it if you want.
plt.ylim(0, .25)
plot_serial_numbers(observations)
"""
Explanation: Question 1.2
Define a function named plot_serial_numbers to make a histogram of any table of serial numbers. It should take one argument, a table like observations with one column called "serial number". It should make a histogram using bars of width 1 ranging from 1 to 200. It should return nothing. Then, call that function to make a histogram of observations.
End of explanation
"""
def mean_based_estimator(nums):
...
mean_based_estimate = ...
mean_based_estimate
_ = tests.grade('q1_4')
"""
Explanation: Question 1.3
What does each little bar in the histogram represent?
Write your answer here, replacing this text.
We saw that one way to estimate N was to take twice the mean of the serial numbers we see.
Question 1.4
Write a function that computes that statistic. It should take as its argument an array of serial numbers and return twice their mean. Call it mean_based_estimator. Use it to compute an estimate of N called mean_based_estimate.
End of explanation
"""
max_estimate = ...
max_estimate
_ = tests.grade('q1_5')
"""
Explanation: Question 1.5
We also estimated N using the biggest serial number in the sample. Compute it, giving it the name max_estimate.
End of explanation
"""
# ???
N = ...
# Attempts to simulate one sample from the population of all serial
# numbers, returning an array of the sampled serial numbers.
def simulate_observations():
# You'll get an error message if you try to call this
# function, because we didn't define N properly!
serial_numbers = Table().with_column("serial number", np.arange(1, N+1))
return serial_numbers.sample(num_observations)
estimates = make_array()
for i in np.arange(5000):
estimate = mean_based_estimator(simulate_observations())
estimates = np.append(estimates, estimate)
Table().with_column("mean-based estimate", estimates).hist()
"""
Explanation: Question 1.6
Look at the values of max_estimate and mean_based_estimate that we happened to get for our dataset. The value of max_estimate tells you something about mean_based_estimate. Can it be equal to N (at least if we round it to the nearest integer)? If not, is it definitely higher, definitely lower, or can we not tell? Can you make a statement like "mean_based_estimate is at least [fill in a number] away from N"?
Write your answer here, replacing this text.
Check your answer with a neighbor or a TA.
We can't just confidently proclaim that max_estimate or mean_based_estimate is equal to N. What if we're really far off? So we want to get a sense of the accuracy of our estimates.
In section 9.3, we ran a simulation like this:
End of explanation
"""
def simulate_resample():
...
"""
Explanation: Since we don't know what the population looks like, we don't know N, and we can't run that simulation.
Question 1.7
Using the terminology you've learned, describe the kind of histogram that cell would have made if we had filled in N. If that histogram is an approximation to something, say what it approximates.
Write your answer here, replacing this text.
Check your answer with a neighbor or a TA.
2. Resampling
Instead, we'll use resampling. That is, we won't exactly simulate the observations the RAF would have really seen. Rather we sample from our sample, or "resample."
Why does that make any sense?
When we tried to estimate N, we would have liked to use the whole population. Since we had only a sample, we used that to estimate N instead.
This time, we would like to use the population of serial numbers to run a simulation about estimates of N. But we still only have our sample. We use our sample in place of the population to run the simulation.
So there is a simple analogy between estimating N and simulating the variability of estimates.
$$\text{computing }N\text{ from the population}$$
$$:$$
$$\text{computing an estimate of }N\text{ from a sample}$$
$$\text{as}$$
$$\text{simulating the distribution of estimates of }N\text{ using samples from the population}$$
$$:$$
$$\text{simulating an (approximate) distribution of estimates of }N\text{ using resamples from a sample}$$
Question 2.1
Write a function called simulate_resample. It should generate a resample from the observed serial numbers in observations and return that resample. (The resample should be a table like observations.) It should take no arguments.
End of explanation
"""
# This is a little magic to make sure that you see the same results
# we did.
np.random.seed(123)
one_resample = simulate_resample()
one_resample
"""
Explanation: Let's make one resample.
End of explanation
"""
...
...
"""
Explanation: Later, we'll use many resamples at once to see what estimates typically look like. We don't often pay attention to single resamples, so it's easy to misunderstand them. Let's examine some individual resamples before we start using them.
Question 2.2
Make a histogram of your resample using the plotting function you defined earlier in this lab, and a separate histogram of the original observations.
End of explanation
"""
resample_0 = ...
...
mean_based_estimate_0 = ...
max_based_estimate_0 = ...
print("Mean-based estimate for resample 0:", mean_based_estimate_0)
print("Max-based estimate for resample 0:", max_based_estimate_0)
resample_1 = ...
...
mean_based_estimate_1 = ...
max_based_estimate_1 = ...
print("Mean-based estimate for resample 1:", mean_based_estimate_1)
print("Max-based estimate for resample 1:", max_based_estimate_1)
"""
Explanation: Question 2.3
Which of the following are true:
1. In the plot of the resample, there are no bars at locations that weren't there in the plot of the original observations.
2. In the plot of the original observations, there are no bars at locations that weren't there in the plot of the resample.
3. The resample has exactly one copy of each serial number.
4. The sample has exactly one copy of each serial number.
Write your answer here, replacing this text.
Discuss your answers with a neighbor or TA.
Question 2.4
Create 2 more resamples. For each one, plot it, compute the max- and mean-based estimates using that resample, and find those estimates on the horizontal axis of the plot.
End of explanation
"""
def simulate_estimates(original_table, sample_size, statistic, num_replications):
# Our implementation of this function took 5 short lines of code.
...
# This should generate an empirical histogram of twice-mean estimates
# of N from samples of size 50 if N is 1000. This should be a bell-shaped
# curve centered at 1000 with most of its mass in [800, 1200]. To verify your
# answer, make sure that's what you see!
example_estimates = simulate_estimates(
Table().with_column("serial number", np.arange(1, 1000+1)),
50,
mean_based_estimator,
10000)
Table().with_column("mean-based estimate", example_estimates).hist(bins=np.arange(0, 1500, 25))
"""
Explanation: You may find that the max-based estimates from the resamples are both exactly 135. You will probably find that the two mean-based estimates do differ from the sample mean-based estimate (and from each other).
Question 2.5
Using the probability theory you've learned, compute the exact chance that a max-based estimate from one resample is 135. Using your intuition, explain why a mean-based estimate from a resample is less often exactly equal to the mean-based estimate from the original sample.
Write your answer here, replacing this text.
Discuss your answers with a neighbor or TA. If you have difficulty with the probability calculation, work with someone or ask for help; don't stay stuck on it for too long.
3. Simulating with resampling
Since resampling from a sample looks just like sampling from a population, the code should look almost the same. That means we can write a function that simulates either sampling from a population or resampling from a sample. If we pass it a population as its argument, it will do the former; if we pass it a sample, it will do the latter.
Question 3.1
Write a function called simulate_estimates. It should take 4 arguments:
1. A table from which the data should be sampled. The table will have 1 column named "serial number".
2. The size of each from that table, an integer. (For example, to do resampling, we would pass for this argument the number of rows in the table.)
3. A function that computes a statistic of a sample. This argument is a function that takes an array of serial numbers as its argument and returns a number.
4. The number of replications to perform.
It should simulate many samples with replacement from the given table. (The number of samples is the 4th argument.) For each of those samples, it should compute the statistic on that sample, and it should return an array containing each of those statistics. The code below provides an example use of your function and describes how you can verify that you've written it correctly.
End of explanation
"""
bootstrap_estimates = ...
...
"""
Explanation: Question 3.2
Was the example in the previous cell performing a bootstrap simulation (i.e. resampling from a sample) or an ordinary simulation (sampling from some population) of the kind we saw in chapter 9? What was the sample or population?
Write your answer here, replacing this text.
Now we can go back to the sample we actually observed (the table observations) and estimate how much our mean-based estimate of N would have varied from sample to sample.
Question 3.3
Using the bootstrap and the sample observations, simulate the approximate distribution of mean-based estimates of N. Use 5,000 replications. To visualize the simulated estimates, make a histogram of them. We suggest using bins of width around 4.
End of explanation
"""
left_end = ...
right_end = ...
print("Middle 95% of bootstrap estimates: [{:f}, {:f}]".format(left_end, right_end))
"""
Explanation: Question 3.4
Compute an interval that covers the middle 95% of the bootstrap estimates. Verify that your interval looks like it covers 95% of the area in the histogram above.
End of explanation
"""
population = Table().with_column("serial number", np.arange(1, 150+1))
new_observations = ...
new_mean_based_estimate = ...
new_bootstrap_estimates = ...
...
new_left_end = ...
new_right_end = ...
print("Middle 95% of bootstrap estimates: [{:f}, {:f}]".format(new_left_end, new_right_end))
"""
Explanation: Question 3.5
Your mean-based estimate of N should have been around 122. Given the above calculations, is it likely that N is exactly 122? Quantify the amount of error in the estimate by making a statement like this:
"Assuming the population looks similar to the sample, the difference between N and mean-based estimates of N from samples of size 17 is typically in the range [A NUMBER, ANOTHER NUMBER]."
Write your answer here, replacing this text.
Question 3.6
N was actually 150! Write code that simulates the sampling and bootstrapping process again, as follows:
Generate a new set of random observations the RAF might have seen, following the procedure laid out at the start of this lab.
Compute an estimate of N from these new observations, using mean_based_estimator.
Using only the new observations, compute 5,000 bootstrap estimates of N.
Plot these bootstrap estimates and compute an interval covering the middle 95%.
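A rough outline of one possible solution is sketched below. It is a hedged example: the exact way new_observations is drawn should follow the sampling procedure defined earlier in this lab, and the sample size of 17 and sampling without replacement are assumptions made here for illustration.
new_observations = population.sample(17, with_replacement=False)
new_mean_based_estimate = mean_based_estimator(new_observations.column("serial number"))
new_bootstrap_estimates = simulate_estimates(new_observations, new_observations.num_rows, mean_based_estimator, 5000)
Table().with_column("estimate", new_bootstrap_estimates).hist("estimate")
new_left_end = percentile(2.5, new_bootstrap_estimates)
new_right_end = percentile(97.5, new_bootstrap_estimates)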
End of explanation
"""
# For your convenience, you can run this cell to run all the tests at once!
import os
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
# Run this cell to submit your work *after* you have passed all of the test cells.
# It's ok to run this cell multiple times. Only your final submission will be scored.
!TZ=America/Los_Angeles jupyter nbconvert --output=".lab06_$(date +%m%d_%H%M)_submission.html" lab06.ipynb && echo "Submitted successfully!"
"""
Explanation: Question 3.7
Does the interval covering the middle 95% of the new bootstrap estimates include N? If you ran that cell many times, would it always include N?
Write your answer here, replacing this text.
End of explanation
"""
|
o108minmin/blogcodes | 2016-01-31/fracintervaledit.ipynb | mit | import pint as pn
from pint import roundfloat as rf
from pint import roundmode as rdm
import fractions
def frac_interval_proto(a):
aH, aL = rf.split(a)
if aL ==0:
answer = pn.interval(a)
else:
aS = rf.succ(a)
aP = rf.pred(a)
aS_c, aS_p = aS.as_integer_ratio()
aP_c, aP_p = aP.as_integer_ratio()
a_c, a_p = a.as_integer_ratio()
bS_c = a_c * aS_p + aS_c * a_p
bS_p = a_p * aS_p * 2
bP_c = a_c * aP_p + aP_c * a_p
bP_p = a_p * aP_p * 2
answer = pn.interval(0.)
answer.inf = fractions.Fraction(bP_c, bP_p)
answer.sup = fractions.Fraction(bS_c, bS_p)
return answer
"""
Explanation: On enclosing floating-point numbers in intervals using fraction intervals
End of explanation
"""
a = 0.9735930826234084
# 0.9735930826234084 is an exceptional value
itv_a = pn.interval(a)
format(itv_a.inf, '.17g')
format(itv_a.sup, '.17g')
"""
Explanation: This time we construct a fraction interval that prevents an interval which does not contain the initial value from arising when an interval type is created with the assignment operator.
Namely,
End of explanation
"""
itv_a.inf = rf.pred(a)
itv_a.sup = rf.succ(a)
format(itv_a.inf, '.17g')
format(itv_a.sup,'.17g')
"""
Explanation: we want to prevent intervals like the one above from occurring.
As an aside, another way to solve this problem is
End of explanation
"""
aH, aL = rf.split(a)
"""
Explanation: to use succ and pred as above.
However, the algorithm presented here can shrink the interval width further.
First, split the value into its upper and lower bits.
End of explanation
"""
if aL ==0:
print(True)
answer = pn.interval(a)
else:
print(False)
"""
Explanation: If lower bits exist, proceed as in the else branch.
(If a point interval is created from a floating-point number that has no lower bits, the resulting interval is very likely to contain the original initial value.
Things might still go wrong if all of the lower bits of the original value were 1 and were cleared by a carry during rounding.)
End of explanation
"""
aS = rf.succ(a)
aP = rf.pred(a)
aS_c, aS_p = aS.as_integer_ratio()
aP_c, aP_p = aP.as_integer_ratio()
a_c, a_p = a.as_integer_ratio()
"""
Explanation: The following continues the else branch.
Next, apply succ and pred to a, and convert the results into approximating fractions with as_integer_ratio().
End of explanation
"""
bS_c = a_c * aS_p + aS_c * a_p
bS_p = a_p * aS_p * 2
bP_c = a_c * aP_p + aP_c * a_p
bP_p = a_p * aP_p * 2
"""
Explanation: Next, compute the midpoint between a and succ(a), and the midpoint between a and pred(a).
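For example, writing a = a_c/a_p and succ(a) = aS_c/aS_p, the upper midpoint is (a_c/a_p + aS_c/aS_p) / 2 = (a_c*aS_p + aS_c*a_p) / (2*a_p*aS_p), which is exactly the fraction bS_c / bS_p computed in the code; bP_c / bP_p is obtained analogously from pred(a).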
End of explanation
"""
bP_c / bP_p == a == bS_c / bS_p
"""
Explanation: The fractions bS and bP obtained this way, together with a, satisfy the following properties.
When evaluated as floating-point numbers, all three are equal.
(0.9735930826234084 is the exceptional value used here.)
End of explanation
"""
bP_c / bP_p <= a <= bS_c / bS_p
bS_c / bS_p
a
bP_c / bP_p
"""
Explanation: This always holds.
End of explanation
"""
a_c * bS_p < bS_c * a_p
"""
Explanation: Evaluated as fractions, bP <= a <= bS holds.
(In what follows, bS_p, a_p and so on are multiplied in to put both sides over a common denominator. The equality cases are not considered, to make it clear that in this example bP < a < bS.)
a <= bS
End of explanation
"""
bP_c * a_p < a_c * bP_p
"""
Explanation: bP <= a
End of explanation
"""
rf.pred(a) < bP_c / bP_p
bS_c / bS_p < rf.succ(a)
"""
Explanation: Thus we obtained a fraction interval whose endpoints are strictly ordered when evaluated as fractions, yet coincide with a when evaluated as floating-point numbers.
Comparing this with the itv_a obtained earlier via succ and pred,
End of explanation
"""
answer = pn.interval(0.)
answer.inf = fractions.Fraction(bP_c, bP_p)
answer.sup = fractions.Fraction(bS_c, bS_p)
print(answer)
"""
Explanation: we see that the interval width obtained here is narrower than with succ/pred.
End of explanation
"""
|
BioNinja/gseapy | docs/gseapy_example.ipynb | mit | # %matplotlib inline
# %config InlineBackend.figure_format='retina' # mac
# %load_ext autoreload
# %autoreload 2
import pandas as pd
import gseapy as gp
import matplotlib.pyplot as plt
"""
Explanation: GSEAPY Example
Examples of how to use GSEApy inside a python console
End of explanation
"""
gp.__version__
"""
Explanation: Check gseapy version
End of explanation
"""
# read in an example gene list
gene_list = pd.read_csv("./tests/data/gene_list.txt",header=None, sep="\t")
gene_list.head()
# convert dataframe or series to list
glist = gene_list.squeeze().str.strip().tolist()
print(glist[:10])
"""
Explanation: 1. (Optional) Convert IDs Using Biomart API
Don't use this if you don't know Biomart
python
>>> from gseapy.parser import Biomart
>>> bm = Biomart()
>>> ## view validated marts
>>> marts = bm.get_marts()
>>> ## view validated dataset
>>> datasets = bm.get_datasets(mart='ENSEMBL_MART_ENSEMBL')
>>> ## view validated attributes
>>> attrs = bm.get_attributes(dataset='hsapiens_gene_ensembl')
>>> ## view validated filters
>>> filters = bm.get_filters(dataset='hsapiens_gene_ensembl')
>>> ## query results
>>> queries = ['ENSG00000125285','ENSG00000182968'] # need to be a python list
>>> results = bm.query(dataset='hsapiens_gene_ensembl',
attributes=['ensembl_gene_id', 'external_gene_name', 'entrezgene_id', 'go_id'],
filters={'ensemble_gene_id': queries})
2. Enrichr Example
End of explanation
"""
names = gp.get_library_name() # default: Human
names[:10]
yeast = gp.get_library_name(organism='Yeast')
yeast[:10]
"""
Explanation: See all supported enrichr library names
Select database from { 'Human', 'Mouse', 'Yeast', 'Fly', 'Fish', 'Worm' }
Enrichr library could be used for gsea, ssgsea, and prerank, too
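The same call works for any organism in the set above; for instance (an illustrative snippet that is not run elsewhere in this notebook):
python
fly = gp.get_library_name(organism='Fly')
fly[:10]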
End of explanation
"""
# run enrichr
# if you are only interested in the dataframe that enrichr returns, please set no_plot=True
# list, dataframe, series inputs are supported
enr = gp.enrichr(gene_list="./tests/data/gene_list.txt",
gene_sets=['KEGG_2016','KEGG_2013'],
organism='Human', # don't forget to set organism to the one you desired! e.g. Yeast
description='test_name',
outdir='test/enrichr_kegg',
# no_plot=True,
cutoff=0.5 # test dataset, use lower value from range(0,1)
)
# obj.results stores all results
enr.results.head(5)
"""
Explanation: 2.1 Assign enrichr with pd.Series, pd.DataFrame, or list object
2.1.1 gene_sets supports list and str.
Multiple library names are supported; separate names with a comma or pass a list.
For example:
python
# gene_list
gene_list="./data/gene_list.txt",
gene_list=glist
# gene_sets
gene_sets='KEGG_2016'
gene_sets='KEGG_2016,KEGG_2013'
gene_sets=['KEGG_2016','KEGG_2013']
End of explanation
"""
enr2 = gp.enrichr(gene_list="./tests/data/gene_list.txt",
# or gene_list=glist
description='test_name',
gene_sets="./tests/data/genes.gmt",
background='hsapiens_gene_ensembl', # or the number of genes, e.g 20000
outdir='test/enrichr_kegg2',
cutoff=0.5, # only used for testing.
verbose=True)
enr2.results.head(5)
"""
Explanation: 2.1.2 Local mode of GO analysis
If the input is a .gmt file or a gene_set dict object, enrichr runs locally.
You have to specify the background genes if local mode is used.
For example:
python
gene_sets="./data/genes.gmt",
gene_sets={'A':['gene1', 'gene2',...],
'B':['gene2', 'gene4',...],
...}
End of explanation
"""
# simple plotting function
from gseapy.plot import barplot, dotplot
# to save your figure, make sure that ``ofname`` is not None
barplot(enr.res2d,title='KEGG_2013',)
# to save your figure, make sure that ``ofname`` is not None
dotplot(enr.res2d, title='KEGG_2013',cmap='viridis_r')
"""
Explanation: 2.1.3 Plotting
End of explanation
"""
# !gseapy enrichr -i ./data/gene_list.txt \
# --ds BP2017 \
# -g GO_Biological_Process_2017 \
# -v -o test/enrichr_BP
"""
Explanation: 2.2 Command line usage
You may also want to use enrichr in command line
the option -v will print out the progress of your job
End of explanation
"""
rnk = pd.read_csv("./tests/data/edb/gsea_data.gsea_data.rnk", header=None, sep="\t")
rnk.head()
# run prerank
# enrichr libraries are supported by prerank module. Just provide the name
# use 4 processes to accelerate the permutation step
# note: multiprocessing may not work on windows
pre_res = gp.prerank(rnk=rnk, gene_sets='KEGG_2016',
processes=4,
permutation_num=100, # reduce number to speed up testing
outdir='test/prerank_report_kegg', format='png', seed=6)
"""
Explanation: 3. Prerank example
3.1 Assign prerank() with a pd.DataFrame, pd.Series , or a txt file
Do not include header in your gene list !
GSEApy will skip any data after "#".
The input should contain only two columns, or a single column with gene_name as the index, when assigning a DataFrame to prerank
End of explanation
"""
#access results through obj.res2d attribute or obj.results
pre_res.res2d.sort_index().head()
"""
Explanation: Leading edge genes are saved to the final output results
End of explanation
"""
# extract geneset terms in res2d
terms = pre_res.res2d.index
terms
## easy way
from gseapy.plot import gseaplot
# to save your figure, make sure that ofname is not None
gseaplot(rank_metric=pre_res.ranking, term=terms[0], **pre_res.results[terms[0]])
# save figure
# gseaplot(rank_metric=pre_res.ranking, term=terms[0], ofname='your.plot.pdf', **pre_res.results[terms[0]])
"""
Explanation: 3.2 How to generate your GSEA plot inside python console
Visualize it using gseaplot
Make sure that ofname is not None, if you want to save your figure to the disk
End of explanation
"""
# ! gseapy prerank -r temp.rnk -g temp.gmt -o prerank_report_temp
"""
Explanation: 3.3 Command line usage
You may also want to use prerank in command line
End of explanation
"""
phenoA, phenoB, class_vector = gp.parser.gsea_cls_parser("./tests/data/P53.cls")
#class_vector used to indicate group attributes for each sample
print(class_vector)
gene_exp = pd.read_csv("./tests/data/P53.txt", sep="\t")
gene_exp.head()
print("positively correlated: ", phenoA)
print("negtively correlated: ", phenoB)
# run gsea
# enrichr libraries are supported by gsea module. Just provide the name
gs_res = gp.gsea(data=gene_exp, # or data='./P53_resampling_data.txt'
gene_sets='KEGG_2016', # enrichr library names
cls= './tests/data/P53.cls', # cls=class_vector
# set permutation_type to phenotype if samples >=15
permutation_type='phenotype',
permutation_num=100, # reduce number to speed up test
outdir=None, # do not write output to disk
no_plot=True, # Skip plotting
method='signal_to_noise',
processes=4, seed= 7,
format='png')
#access the dataframe results throught res2d attribute
gs_res.res2d.sort_index().head()
"""
Explanation: 4. GSEA Example
4.1 Assign gsea() with a pandas DataFrame, .gct format file, or a text file
and cls with a list object or just .cls format file
End of explanation
"""
from gseapy.plot import gseaplot, heatmap
terms = gs_res.res2d.index
# Make sure that ``ofname`` is not None, if you want to save your figure to disk
gseaplot(gs_res.ranking, term=terms[0], **gs_res.results[terms[0]])
# plotting heatmap
genes = gs_res.res2d.genes[0].split(";")
# Make sure that ``ofname`` is not None, if you want to save your figure to disk
heatmap(df = gs_res.heatmat.loc[genes], z_score=0, title=terms[0], figsize=(18,6))
"""
Explanation: 4.2 Show the gsea plots
The gsea module will generate a heatmap for the genes in each gene set in the background.
But if you need to do it yourself, use the code below
End of explanation
"""
# !gseapy gsea -d ./data/P53_resampling_data.txt \
# -g KEGG_2016 -c ./data/P53.cls \
# -o test/gsea_reprot_2 \
# -v --no-plot \
# -t phenotype
"""
Explanation: 4.3 Command line usage
You may also want to use gsea in command line
End of explanation
"""
# txt, gct file input
ss = gp.ssgsea(data="./tests/data/testSet_rand1200.gct",
gene_sets="./tests/data/randomSets.gmt",
outdir='test/ssgsea_report',
sample_norm_method='rank', # choose 'custom' for your own rank list
permutation_num=0, # skip permutation procedure, because you don't need it
no_plot=True, # skip plotting, because you don't need these figures
processes=4, format='png', seed=9)
ss.res2d.sort_index().head()
# or assign a dataframe, or Series to ssgsea()
ssdf = pd.read_csv("./tests/data/temp.txt", header=None, sep="\t")
ssdf.head()
# dataframe with one column is also supported by ssGSEA or Prerank
# But you have to set gene_names as index
ssdf2 = ssdf.set_index(0)
ssdf2.head()
type(ssdf2)
ssSeries = ssdf2.squeeze()
type(ssSeries)
# reuse data
df = pd.read_csv("./tests/data/P53_resampling_data.txt", sep="\t")
df.head()
# Series, DataFrame Example
# supports dataframe and series
ssgs = []
for i, dat in enumerate([ssdf, ssdf2, ssSeries, df]):
sstemp = gp.ssgsea(data=dat,
gene_sets="./tests/data/genes.gmt",
outdir='test/ssgsea_report_'+str(i),
scale=False, # set scale to False to get real original ES
permutation_num=0, # skip permutation procedure, because you don't need it
no_plot=True, # skip plotting, because you don't need these figures
processes=4, seed=10,
format='png')
ssgs.append(sstemp)
"""
Explanation: 5. Single Sample GSEA example
Note: When you run ssGSEA, all gene names in your gene_sets file should be found in your expression table
What's ssGSEA? Which one should I use? Prerank or ssGSEA
see FAQ here
5.1 Input format
Assign ssgsea() with a txt file, gct file, pd.DataFrame, or pd.Series (gene name as index)
End of explanation
"""
# normalized es save to res2d attri
# one sample input
# NES
ssgs[0].res2d.sort_index().head()
"""
Explanation: 5.2 Access Enrichment Score (ES) and NES
Results are saved to two attributes:
obj.resultsOnSamples: ES
obj.res2d: NES
End of explanation
"""
# ES
# convert dict to DataFrame
es = pd.DataFrame(ssgs[-1].resultsOnSamples)
es.sort_index().head()
# if set scale to True, then
# Scaled ES equal to es/gene_numbers
ses = es/df.shape[0]
ses
# NES
# scaling or not has no effect on the final NES value
nes = ssgs[-1].res2d
nes.sort_index().head()
"""
Explanation: Note:
If you want to obtain the real original enrichment score,
you have to set scale=False
End of explanation
"""
# set --no-scale to obtain the real original enrichment score
# !gseapy ssgsea -d ./data/testSet_rand1200.gct \
# -g data/temp.gmt \
# -o test/ssgsea_report2 \
# -p 4 --no-plot --no-scale
"""
Explanation: 5.3 Command line usage of single sample gsea
End of explanation
"""
# run command inside python console
rep = gp.replot(indir="./tests/data", outdir="test/replot_test")
"""
Explanation: 6. Replot Example
6.1 locate your directory
note: the replot module needs to find the edb folder to work properly.
keep the file tree like this:
```
data
|--- edb
| |--- C1OE.cls
| |--- gene_sets.gmt
| |--- gsea_data.gsea_data.rnk
| |--- results.edb
```
End of explanation
"""
# !gseapy replot -i data -o test/replot_test
"""
Explanation: 6.2 command line usage of replot
End of explanation
"""
|
FRBs/FRB | docs/nb/FRB_Host_Associations.ipynb | bsd-3-clause | # imports
import numpy as np
from astropy.coordinates import SkyCoord
from astropy import units
from frb.frb import FRB
from frb.galaxies import hosts as frb_hosts
"""
Explanation: FRB Host Associations
- v1 (fussing around)
- v2 Bayesian ala Budavari
End of explanation
"""
frb190611 = FRB.by_name('FRB190611')
frb190611
frb190611.eellipse
"""
Explanation: Test case
Load FRB
End of explanation
"""
gal1_coord = SkyCoord("21h22m58.277s -79d23m50.09s", frame='icrs')
gal2_coord = SkyCoord("21h22m58.973s -79d23m51.69s", frame='icrs')
# Magnitudes
m1 = 22.0
m2 = 25.6
frb190611.coord.separation(gal2_coord).to('arcsec')
frb190611.coord.separation(gal1_coord).to('arcsec')
"""
Explanation: Fake galaxies
End of explanation
"""
mlim = 26.
"""
Explanation: Observations
End of explanation
"""
rflat = 3 * units.arcsec
"""
Explanation: Radial profile
Assume flat to 3" separation
End of explanation
"""
Pchance_1 = frb_hosts.chance_coincidence(m1, rflat)
Pchance_1
Pchance_2 = frb_hosts.chance_coincidence(m2, rflat)
Pchance_2
Pchance_miss = frb_hosts.chance_coincidence(mlim, rflat)
Pchance_miss
"""
Explanation: $P_{\rm chance}$ for everyone
End of explanation
"""
P_1 = Pchance_2 * Pchance_miss
P_2 = Pchance_1 * Pchance_miss
P_miss = Pchance_1 * Pchance_2
P_1, P_2, P_miss
"""
Explanation: P
End of explanation
"""
PS = 1 - Pchance_miss
PM1 = 1 - Pchance_1
PM2 = 1 - Pchance_2
#
A = 1. / (PS + PM1 + PM2)
A
PS*A, PM1*A, PM2*A
"""
Explanation: Priors
Normalize
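As a quick sanity check (an illustrative one-liner reusing the quantities computed above), the three normalized priors should sum to one:
PS*A + PM1*A + PM2*A  # expect 1.0, up to floating point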
End of explanation
"""
|
jinntrance/MOOC | coursera/ml-regression/assignments/week-1-simple-regression-assignment-blank.ipynb | cc0-1.0 | import graphlab
"""
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems, but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
train_data,test_data = sales.random_split(.8,seed=0)
"""
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
"""
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, use the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
"""
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions, we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
"""
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
"""
Explanation: As we see we get the same answer both ways
End of explanation
"""
def simple_linear_regression(input_feature, output):
# compute the mean of input_feature and output
x = input_feature
y = output
avg_x = x.mean()
avg_y = y.mean()
n = x.size()
# compute the product of the output and the input_feature and its mean
# compute the squared value of the input_feature and its mean
# use the formula for the slope
x_err = x-avg_x
slope = (y*x_err).sum()/(x*x_err).sum()
# use the formula for the intercept
intercept = y.mean() - x.mean()*slope
return (intercept, slope)
"""
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
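As a reminder, the closed form estimates are slope = sum((x_i - mean(x)) * (y_i - mean(y))) / sum((x_i - mean(x))^2) and intercept = mean(y) - slope * mean(x). (The helper code uses the algebraically equivalent numerator sum(y_i * (x_i - mean(x))), which gives the same slope because sum(x_i - mean(x)) = 0.)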
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
End of explanation
"""
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
"""
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
"""
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
"""
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
"""
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = intercept + slope * input_feature
return predicted_values
"""
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
"""
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
"""
Explanation: Now that we can calculate a prediction given the slope and intercept, let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
"""
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
resd = predictions-output
# square the residuals and add them up
RSS = (resd*resd).sum()
return(RSS)
"""
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
End of explanation
"""
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
"""
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
"""
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
"""
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
"""
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept)/slope
return estimated_feature
"""
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
"""
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
"""
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model, let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
"""
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
"""
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
"""
# Compute RSS when using bedrooms on TEST data:
get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
# Compute RSS when using squarfeet on TEST data:
get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
"""
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation
"""
|
junhwanjang/DataSchool | Lecture/15. 과최적화와 정규화/3) 정규화 선형 회귀.ipynb | mit | np.random.seed(0)
n_samples = 30
X = np.sort(np.random.rand(n_samples))
y = np.cos(1.5 * np.pi * X) + np.random.randn(n_samples) * 0.1
dfX = pd.DataFrame(X, columns=["x"])
dfX = sm.add_constant(dfX)
dfy = pd.DataFrame(y, columns=["y"])
df = pd.concat([dfX, dfy], axis=1)
model = sm.OLS.from_formula("y ~ x + I(x**2) + I(x**3) + I(x**4) + I(x**5)", data=df)
result1 = model.fit()
result1.params
def plot_statsmodels(result):
plt.scatter(X, y)
xx = np.linspace(0, 1, 1000)
dfxx = pd.DataFrame(xx, columns=["x"])
dfxx = sm.add_constant(dfxx)
plt.plot(xx, result.predict(dfxx).values)
plt.show()
plot_statsmodels(result1)
result2 = model.fit_regularized(alpha=0.01, L1_wt=0)
print(result2.params)
plot_statsmodels(result2)
result2 = model.fit_regularized(alpha=0.01, L1_wt=0.5)
print(result2.params)
plot_statsmodels(result2)
result2 = model.fit_regularized(alpha=0.01, L1_wt=1)
print(result2.params)
plot_statsmodels(result2)
"""
Explanation: Regularized Linear Regression
Regularized linear regression methods reduce the variance of the coefficients by adding constraints on the linear regression coefficients (weights). They are also called regularized methods, penalized methods, or constrained least squares.
In general, three regularized linear regression models are used.
Ridge regression model
LASSO regression model
Elastic Net regression model
Ridge regression model
In the Ridge regression model, minimizing the squared sum of the weights is imposed as an additional constraint.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda \sum w_i^2
\end{eqnarray}
$$
$\lambda$ is a hyperparameter that controls the relative weight of the residual sum of squares and of the additional constraint. When $\lambda$ is large, the degree of regularization increases and the weight values become smaller. When $\lambda$ becomes smaller, the degree of regularization decreases, and when $\lambda$ is 0 the model reduces to ordinary linear regression.
LASSO regression model
The LASSO (Least Absolute Shrinkage and Selection Operator) regression model adds minimizing the sum of the absolute values of the weights as an additional constraint.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda \sum | w_i |
\end{eqnarray}
$$
Elastic Net regression model
The Elastic Net regression model imposes both the sum of the absolute values and the squared sum of the weights as constraints at the same time.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda_1 \sum | w_i | + \lambda_2 \sum w_i^2
\end{eqnarray}
$$
It has two hyperparameters, $\lambda_1$ and $\lambda_2$.
Regularized regression models in statsmodels
The statsmodels package can compute Elastic Net model coefficients using the fit_regularized method of the OLS linear regression model class.
http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.fit_regularized.html
The hyperparameters are defined in terms of the parameters $\text{alpha}$ and $\text{L1_wt}$ as follows.
$$
0.5 \times \text{RSS}/N + \text{alpha} \times \big( 0.5 \times (1-\text{L1_wt})\sum w_i^2 + \text{L1_wt} \sum |w_i| \big)
$$
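For example, substituting L1_wt = 0 into the expression above leaves only the $0.5 \times \text{alpha} \sum w_i^2$ penalty (pure Ridge), while L1_wt = 1 leaves only $\text{alpha} \sum |w_i|$ (pure LASSO); these two extremes and the mixed case L1_wt = 0.5 correspond to the three fit_regularized calls shown above.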
End of explanation
"""
def plot_sklearn(model):
plt.scatter(X, y)
xx = np.linspace(0, 1, 1000)
plt.plot(xx, model.predict(xx[:, np.newaxis]))
plt.show()
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
poly = PolynomialFeatures(3)
model = make_pipeline(poly, Ridge(alpha=0.01)).fit(X[:, np.newaxis], y)
plot_sklearn(model)
model = make_pipeline(poly, Lasso(alpha=0.01)).fit(X[:, np.newaxis], y)
plot_sklearn(model)
model = make_pipeline(poly, ElasticNet(alpha=0.01, l1_ratio=0.5)).fit(X[:, np.newaxis], y)
plot_sklearn(model)
"""
Explanation: Regularized regression models in Scikit-Learn
The Scikit-Learn package provides separate Ridge, Lasso, and ElasticNet classes for regularized regression models. The optimization objective function for each model is as follows.
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
$$
\text{RSS} + \text{alpha} \sum w_i^2
$$
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
$$
0.5 \times \text{RSS}/N + \text{alpha} \sum |w_i|
$$
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html
$$
0.5 \times \text{RSS}/N + 0.5 \times \text{alpha} \times \big(0.5 \times (1-\text{l1_ratio})\sum w_i^2 + \text{l1_ratio} \sum |w_i| \big)
$$
End of explanation
"""
X_train = np.c_[.5, 1].T
y_train = [.5, 1]
X_test = np.c_[-1, 3].T
np.random.seed(0)
models = {"LinearRegression": LinearRegression(),
"Ridge": Ridge(alpha=0.1)}
for i, (name, model) in enumerate(models.iteritems()):
ax = plt.subplot(1, 2, i+1)
for _ in range(10):
this_X = .1 * np.random.normal(size=(2, 1)) + X_train
model.fit(this_X, y_train)
ax.plot(X_test, model.predict(X_test), color='.5')
ax.scatter(this_X, y_train, s=100, c='.5', marker='o', zorder=10)
model.fit(X_train, y_train)
ax.plot(X_test, model.predict(X_test), linewidth=3, color='blue', alpha=0.5)
ax.scatter(X_train, y_train, s=100, c='r', marker='D', zorder=10)
plt.title(name)
ax.set_xlim(-0.5, 2)
ax.set_ylim(0, 1.6)
"""
Explanation: Advantages of regularized models
By the bias-variance trade-off principle, regularized models have the effect of reducing the variance of the estimates.
End of explanation
"""
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X = diabetes.data
y = diabetes.target
ridge0 = Ridge(alpha=0).fit(X, y)
p0 = pd.Series(np.hstack([ridge0.intercept_, ridge0.coef_]))
ridge1 = Ridge(alpha=1).fit(X, y)
p1 = pd.Series(np.hstack([ridge1.intercept_, ridge1.coef_]))
ridge2 = Ridge(alpha=2).fit(X, y)
p2 = pd.Series(np.hstack([ridge2.intercept_, ridge2.coef_]))
pd.DataFrame([p0, p1, p2]).T
lasso0 = Lasso(alpha=0.0001).fit(X, y)
p0 = pd.Series(np.hstack([lasso0.intercept_, lasso0.coef_]))
lasso1 = Lasso(alpha=0.1).fit(X, y)
p1 = pd.Series(np.hstack([lasso1.intercept_, lasso1.coef_]))
lasso2 = Lasso(alpha=10).fit(X, y)
p2 = pd.Series(np.hstack([lasso2.intercept_, lasso2.coef_]))
pd.DataFrame([p0, p1, p2]).T
"""
Explanation: Difference between the Ridge and Lasso models
Whereas the Ridge model shrinks all of the weight coefficients together, the Lasso model has the property that some weight coefficients converge to exactly 0 before the others.
<img src="https://datascienceschool.net/upfiles/10a19727037b4898984a4330c1285486.png">
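A quick numerical check of this difference (an illustrative sketch reusing the strongly regularized ridge2 and lasso2 models fitted above) is to count the coefficients that are exactly zero:
(ridge2.coef_ == 0).sum(), (lasso2.coef_ == 0).sum()
With a sufficiently large alpha the Lasso count is typically nonzero, while the Ridge count stays at 0.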
End of explanation
"""
lasso = Lasso()
alphas, coefs, _ = lasso.path(X, y, alphas=np.logspace(-6, 1, 8))
df = pd.DataFrame(coefs, columns=alphas)
df
df.T.plot(logx=True)
plt.show()
"""
Explanation: The path method
The Lasso and ElasticNet classes provide a path method that automatically computes how the coefficients change as the value of the hyperparameter alpha varies.
The lasso_path() and enet_path() functions perform the same role as the path method.
End of explanation
"""
|
BuzzFeedNews/2015-07-h2-visas-and-enforcement | notebooks/h2-employers-investigated.ipynb | mit | import pandas as pd
import sys
import re
sys.path.append("../utils")
import loaders
"""
Explanation: H-2 Employers Investigated Per Fiscal Year
The Python code below calculates the number of WHD cases concluded each fiscal year that examined some aspect of H-2 regulations, and the number of distinct employer IDs associated with those cases. In addition, it provides a rough estimate of the number of employers certified for H-2 per fiscal year, to support the statement that "vast majority" of H-2 certified employers are not inspected.
Note: The visa certification data published by the OFLC do not include unique identifiers for employers. Additionally, the data contain multiple alternate spellings and mis-spellings of employer names, making it difficult to determine the number of distinct employers. For these reasons, this analysis intentionally does not provide an overall inspection rate. It should be clear from the numbers below, however, that number of H-2 employers that WHD inspects each year amounts to a small fraction of the number of employers that DOL certifies for H-2 visas.
Investigations Methodology
Load the CASE_ACT_SUMMARY rows for each case, and select those for the "H2A" and "H2B" ACT_IDs. This includes any case with H-2 findings, regardless of whether WHD identified any violations.
Select the cases that were "concluded" in fiscal years 2010-2014, deduplicating by CASE_ID (since some employers have summaries for both H-2A and H-2B findings).
For each fiscal year, FY 2010–2014, count the number of cases and unique employer IDs.
Certifications Methodology
Load list of the Office of Foreign Labor Certification's H-2 certification decisions.
Select only decisions to certify visas (rather than deny them), and exclude expired certifications as well as certifications for umbrella organizations (rather than specific employers).
Standardize the provided employer name by uppercasing the names and removing punctuation.
For each fiscal year of decisions, FY 2010–2014, count the number of unique (standardized) employer names.
Data Loading — Investigations
End of explanation
"""
employers = loaders.load_employers().set_index("CASE_ID")
cases = loaders.load_cases().set_index("CASE_ID")
cases_basics = cases[[ "DATE_CONCLUDED_FY", "INVEST_TOOL_DESC" ]]\
.join(employers[ "employer_id" ])\
.reset_index()
act_summaries = loaders.load_act_summaries()
h2_summaries = act_summaries[
act_summaries["ACT_ID"].isin([ "H2A", "H2B" ])
]
matching_cases = cases_basics[
cases_basics["CASE_ID"].isin(h2_summaries["CASE_ID"]) &
(cases_basics["DATE_CONCLUDED_FY"] >= 2010) &
(cases_basics["DATE_CONCLUDED_FY"] <= 2014)
]
invest_tool_counts = matching_cases["INVEST_TOOL_DESC"].value_counts()
case_counts = matching_cases.groupby([
"DATE_CONCLUDED_FY",
"INVEST_TOOL_DESC"
])["CASE_ID"].nunique()\
.unstack()\
.fillna(0)\
[invest_tool_counts.index.tolist()]
case_counts["[total]"] = case_counts.sum(axis=1)
employer_counts = pd.DataFrame({
"n_employer_ids": matching_cases.groupby("DATE_CONCLUDED_FY")["employer_id"].nunique()
})
"""
Explanation: Note: loaders is a custom module to handle most common data-loading operations in these analyses. It is available here.
End of explanation
"""
case_counts
"""
Explanation: Number of H-2–related cases by overall investigation type and fiscal year concluded:
End of explanation
"""
employer_counts
"""
Explanation: Note: The counts above and below include all cases with at least some H-2 aspect indicated, regardless of whether H-2 was the primary focus or whether investigators found any H-2 violations.
Distinct employer IDs associated with the cases above:
End of explanation
"""
date_parser = lambda x: pd.to_datetime(x, format="%Y-%m-%d", coerce=True)
oflc_decisions = pd.read_csv("../data/oflc-decisions/processed/oflc-decisions.csv",
parse_dates=["last_event_date"],
date_parser=date_parser)
oflc_decisions["last_event_date_fy"] = oflc_decisions["last_event_date"].apply(loaders.date_to_fy)
certifications = oflc_decisions[
(oflc_decisions["is_certified"] == True) &
(oflc_decisions["is_expired"] == False) &
(oflc_decisions["is_duplicate"] == False) &
(oflc_decisions["last_event_date_fy"] >= 2010) &
(oflc_decisions["last_event_date_fy"] <= 2014)
].copy()
non_alphanum_pat = re.compile(r"[^A-Z0-9 ]+")
"""
Explanation: Data Loading — Certifications
End of explanation
"""
def standardize_name(x):
return re.sub(non_alphanum_pat, "", x.upper().strip())
certifications["employer_name_standard"] = certifications["employer_name"]\
.fillna("")\
.apply(standardize_name)
"""
Explanation: Basic Standardization of Employer Names
End of explanation
"""
certifications[["employer_name", "employer_name_standard"]].tail()
certs_by_fy = certifications.groupby("last_event_date_fy")
employer_cert_counts = pd.DataFrame({
"n_employer_names": certs_by_fy["employer_name_standard"].nunique()
})
"""
Explanation: Example of employer names before and after standardization:
End of explanation
"""
employer_cert_counts
"""
Explanation: Rough count of the number of employers certified for H-2 visas, per fiscal year:
End of explanation
"""
|
pegasus-isi/pegasus | tutorial/docker/notebooks/02-Debugging/02-Debugging.ipynb | apache-2.0 | !rm -f f.a
"""
Explanation: Workflow Debugging
When running complex computations (such as workflows) on complex computing infrastructure (for example HPC clusters), things will go wrong. It is therefore important to understand how to detect and debug issues as they appear. The good news is that Pegasus is doing a good job with the detection part, using for example exit codes, and provides tooling to help you debug. In this notebook, we will be using the same workflow as in the previous one, but introduce an error and see if we can detect it.
First, let's clean up some files so that we can run this notebook multiple times:
End of explanation
"""
import logging
from pathlib import Path
from Pegasus.api import *
logging.basicConfig(level=logging.DEBUG)
# --- Properties ---------------------------------------------------------------
props = Properties()
props["pegasus.monitord.encoding"] = "json"
props["pegasus.catalog.workflow.amqp.url"] = "amqp://friend:donatedata@msgs.pegasus.isi.edu:5672/prod/workflows"
props["pegasus.mode"] = "tutorial" # speeds up tutorial workflows - remove for production ones
props.write() # written to ./pegasus.properties
# --- Replicas -----------------------------------------------------------------
with open("f-problem.a", "w") as f:
f.write("This is sample input to KEG")
fa = File("f.a").add_metadata(creator="ryan")
rc = ReplicaCatalog().add_replica("local", fa, Path(".").resolve() / "f.a")
# --- Transformations ----------------------------------------------------------
preprocess = Transformation(
"preprocess",
site="condorpool",
pfn="/usr/bin/pegasus-keg",
is_stageable=False,
arch=Arch.X86_64,
os_type=OS.LINUX
)
findrange = Transformation(
"findrange",
site="condorpool",
pfn="/usr/bin/pegasus-keg",
is_stageable=False,
arch=Arch.X86_64,
os_type=OS.LINUX
)
analyze = Transformation(
"analyze",
site="condorpool",
pfn="/usr/bin/pegasus-keg",
is_stageable=False,
arch=Arch.X86_64,
os_type=OS.LINUX
)
tc = TransformationCatalog().add_transformations(preprocess, findrange, analyze)
# --- Workflow -----------------------------------------------------------------
'''
[f.b1] - (findrange) - [f.c1]
/ \
[f.a] - (preprocess) (analyze) - [f.d]
\ /
[f.b2] - (findrange) - [f.c2]
'''
wf = Workflow("blackdiamond")
fb1 = File("f.b1")
fb2 = File("f.b2")
job_preprocess = Job(preprocess)\
.add_args("-a", "preprocess", "-T", "3", "-i", fa, "-o", fb1, fb2)\
.add_inputs(fa)\
.add_outputs(fb1, fb2)
fc1 = File("f.c1")
job_findrange_1 = Job(findrange)\
.add_args("-a", "findrange", "-T", "3", "-i", fb1, "-o", fc1)\
.add_inputs(fb1)\
.add_outputs(fc1)
fc2 = File("f.c2")
job_findrange_2 = Job(findrange)\
.add_args("-a", "findrange", "-T", "3", "-i", fb2, "-o", fc2)\
.add_inputs(fb2)\
.add_outputs(fc2)
fd = File("f.d")
job_analyze = Job(analyze)\
.add_args("-a", "analyze", "-T", "3", "-i", fc1, fc2, "-o", fd)\
.add_inputs(fc1, fc2)\
.add_outputs(fd)
wf.add_jobs(job_preprocess, job_findrange_1, job_findrange_2, job_analyze)
wf.add_replica_catalog(rc)
wf.add_transformation_catalog(tc)
"""
Explanation:
End of explanation
"""
try:
wf.plan(submit=True)\
.wait()
except PegasusClientError as e:
print(e)
"""
Explanation: 2. Run the Workflow
End of explanation
"""
try:
wf.analyze()
except PegasusClientError as e:
print(e)
"""
Explanation: 3. Analyze
If the workflow failed, you can use wf.analyze() to get help finding out what went wrong.
End of explanation
"""
!mv f-problem.a f.a
"""
Explanation: In the output we can see Expected local file does not exist: /home/scitech/notebooks/02-Debugging/f.a which tells us that an input did not exist. This is because we created it with the wrong name (f-problem.a) instead of the intended name (f.a).
4. Resolving the issue
Let's resolve the issue by renaming the wrongly named input file:
End of explanation
"""
try:
wf.run() \
.wait()
except PegasusClientError as e:
print(e)
"""
Explanation: 5. Restart the workflow
We can now restart the workflow from where it stopped. As an alternative to run(), you could plan() a new instance, but in that case the workflow would start all the way from the beginning again.
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization | Regression/Assignment_four/.ipynb_checkpoints/week-4-ridge-regression-assignment-1-blank-checkpoint.ipynb | mit | import graphlab
graphlab.product_key.set_product_key("C0C2-04B4-D94B-70F6-8771-86F9-C6E1-E122")
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
"""
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature**power
return poly_sframe
"""
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
"""
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
"""
sales = sales.sort(['sqft_living','price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
l2_small_penalty = 1e-5
"""
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
"""
poly1_data = polynomial_sframe(sales['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = sales['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
"""
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
End of explanation
"""
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
"""
Explanation: QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the data into split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use .random_split function and make sure you set seed=0.
End of explanation
"""
poly1_data = polynomial_sframe(set_1['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_1['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_2['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_2['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_3['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_3['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_4['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_4['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
"""
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
"""
poly1_data = polynomial_sframe(set_1['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_1['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_2['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_2['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_3['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_3['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_4['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_4['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
"""
Explanation: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
"""
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
"""
Explanation: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
"""
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
"""
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
"""
train_valid_shuffled[0:10] # rows 0 to 9
"""
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use a colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
"""
validation4 = train_valid_shuffled[5818:7758] # segment 3: rows 5818 to 7757
"""
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
End of explanation
"""
print int(round(validation4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
"""
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
"""
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
"""
train4=train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:19396])
"""
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
End of explanation
"""
print int(round(train4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
"""
def get_RSS(prediction, output):
residual = output - prediction
# square the residuals and add them up
RS = residual*residual
RSS = RS.sum()
return(RSS)
def k_fold_cross_validation(k, l2_penalty, data, features_list):
n = len(data)
RSS = 0
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
validation=data[start:end+1]
train=data[0:start].append(data[end+1:n])
model = graphlab.linear_regression.create(train, target='price', features = features_list, l2_penalty=l2_penalty,validation_set=None,verbose = False)
predictions=model.predict(validation)
A =get_RSS(predictions,validation['price'])
RSS = RSS + A
Val_err = RSS/k
return Val_err
"""
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
"""
import numpy as np
poly_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = poly_data.column_names()
poly_data['price'] = train_valid_shuffled['price']
for l2_penalty in np.logspace(1, 7, num=13):
Val_err = k_fold_cross_validation(10, l2_penalty, poly_data, my_features)
print l2_penalty
print Val_err
"""
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
End of explanation
"""
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
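# A minimal plotting sketch for the quiz question described below; the names
# l2_penalties and cv_errors are hypothetical (the loop above only printed the values).
import matplotlib.pyplot as plt
l2_penalties = list(np.logspace(1, 7, num=13))
cv_errors = [k_fold_cross_validation(10, p, poly_data, my_features) for p in l2_penalties]
plt.plot(l2_penalties, cv_errors, 'k.-')
plt.xscale('log')
plt.xlabel('L2 penalty')
plt.ylabel('10-fold cross-validation error')
plt.show()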
"""
Explanation: QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
"""
poly1_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = train_valid_shuffled['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1000)
Val_err = k_fold_cross_validation(10, 1000, poly1_data, my_features)
"""
Explanation: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
End of explanation
"""
|
Chipe1/aima-python | knowledge_current_best.ipynb | mit | from knowledge import *
from notebook import pseudocode, psource
"""
Explanation: KNOWLEDGE
The knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russell's and Peter Norvig's book Artificial Intelligence: A Modern Approach.
Execute the cell below to get started.
End of explanation
"""
pseudocode('Current-Best-Learning')
"""
Explanation: CONTENTS
Overview
Current-Best Learning
OVERVIEW
Like the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.
First-Order Logic
Usually knowledge in this field is represented in first-order logic, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called the goal predicate, for new examples given a hypothesis. We learn this hypothesis by inferring knowledge from some given examples.
Representation
In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.
For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:
{'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}
A hypothesis can be the following:
[{'Species': 'Cat'}]
which means an animal will take an umbrella if and only if it is a cat.
Consistency
We say that an example e is consistent with a hypothesis h if the assignment from the hypothesis for e is the same as e['GOAL']. If the above example and hypothesis are e and h respectively, then e is consistent with h since e['Species'] == 'Cat'. For e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}, the example is no longer consistent with h, since the value assigned to e is False while e['GOAL'] is True.
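In code, this check is simply a comparison of the hypothesis' prediction with the example's goal value, e.g. guess_value(e, h) == e['GOAL'], using the guess_value helper from the knowledge module that also appears in the cells below.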
CURRENT-BEST LEARNING
Overview
In Current-Best Learning, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes: the example is consistent with the hypothesis, the example is a false positive (real value is false but got predicted as true) and the example is a false negative (real value is true but got predicted as false). Depending on the outcome we refine the hypothesis accordingly:
Consistent: We do not change the hypothesis and move on to the next example.
False Positive: We specialize the hypothesis, which means we add a conjunction.
False Negative: We generalize the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction.
When specializing or generalizing, we should make sure to not create inconsistencies with previous examples. To avoid that caveat, backtracking is needed. Thankfully, there is not just one specialization or generalization, so we have a lot to choose from. We will go through all the specializations/generalizations and we will refine our hypothesis as the first specialization/generalization consistent with all the examples seen up to that point.
Pseudocode
End of explanation
"""
psource(current_best_learning, specializations, generalizations)
"""
Explanation: Implementation
As mentioned earlier, examples are dictionaries (with keys being the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the NOT operation with an exclamation mark (!).
We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.
You can read the source by running the cell below:
End of explanation
"""
animals_umbrellas = [
{'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True},
{'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True},
{'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True},
{'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': False},
{'Species': 'Dog', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},
{'Species': 'Cat', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},
{'Species': 'Cat', 'Rain': 'No', 'Coat': 'Yes', 'GOAL': True}
]
"""
Explanation: You can view the auxiliary functions in the knowledge module. A few notes on the functionality of some of the important methods:
specializations: For each disjunction in the hypothesis, it adds a conjunction for values in the examples encountered so far (if the conjunction is consistent with all the examples). It returns a list of hypotheses.
generalizations: It adds to the list of hypotheses in three phases. First it deletes disjunctions, then it deletes conjunctions and finally it adds a disjunction.
add_or: Used by generalizations to add an or operation (a disjunction) to the hypothesis. Since the last example is the problematic one which wasn't consistent with the hypothesis, it will model the new disjunction to that example. It creates a disjunction for each combination of attributes in the example and returns the new hypotheses consistent with the negative examples encountered so far. We do not need to check the consistency of positive examples, since they are already consistent with at least one other disjunction in the hypotheses' set, so this new disjunction doesn't affect them. In other words, if the value of a positive example is negative under the disjunction, it doesn't matter since we know there exists a disjunction consistent with the example.
Since the algorithm stops searching the specializations/generalizations after the first consistent hypothesis is found, usually you will get different results each time you run the code.
Examples
We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book).
Earlier, we had the "animals taking umbrellas" example. Now we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are Species, Rain and Coat. The possible values are [Cat, Dog], [Yes, No] and [Yes, No] respectively. Below we give seven examples (with GOAL we denote whether an animal will take an umbrella or not):
End of explanation
"""
initial_h = [{'Species': 'Cat'}]
for e in animals_umbrellas:
print(guess_value(e, initial_h))
"""
Explanation: Let our initial hypothesis be [{'Species': 'Cat'}]. That means every cat will be taking an umbrella. We can see that this is not true, but it doesn't matter since we will refine the hypothesis using the Current-Best algorithm. First, let's see how that initial hypothesis fares to have a point of reference.
End of explanation
"""
h = current_best_learning(animals_umbrellas, initial_h)
for e in animals_umbrellas:
print(guess_value(e, h))
"""
Explanation: We got 5/7 correct. Not terribly bad, but we can do better. Let's now run the algorithm and see how it performs in comparison to our current result.
End of explanation
"""
print(h)
"""
Explanation: We got everything right! Let's print our hypothesis:
End of explanation
"""
def r_example(Alt, Bar, Fri, Hun, Pat, Price, Rain, Res, Type, Est, GOAL):
return {'Alt': Alt, 'Bar': Bar, 'Fri': Fri, 'Hun': Hun, 'Pat': Pat,
'Price': Price, 'Rain': Rain, 'Res': Res, 'Type': Type, 'Est': Est,
'GOAL': GOAL}
"""
Explanation: If an example meets any of the disjunctions in the list, it will be True, otherwise it will be False.
Let's move on to a bigger example, the "Restaurant" example from the book. The attributes for each example are the following:
Alternative option (Alt)
Bar to hang out/wait (Bar)
Day is Friday (Fri)
Is hungry (Hun)
How much does it cost (Price, takes values in [$, $$, $$$])
How many patrons are there (Pat, takes values in [None, Some, Full])
Is raining (Rain)
Has made reservation (Res)
Type of restaurant (Type, takes values in [French, Thai, Burger, Italian])
Estimated waiting time (Est, takes values in [0-10, 10-30, 30-60, >60])
We want to predict if someone will wait or not (Goal = WillWait). Below we show twelve examples found in the book.
With the function r_example we will build the dictionary examples:
End of explanation
"""
restaurant = [
r_example('Yes', 'No', 'No', 'Yes', 'Some', '$$$', 'No', 'Yes', 'French', '0-10', True),
r_example('Yes', 'No', 'No', 'Yes', 'Full', '$', 'No', 'No', 'Thai', '30-60', False),
r_example('No', 'Yes', 'No', 'No', 'Some', '$', 'No', 'No', 'Burger', '0-10', True),
r_example('Yes', 'No', 'Yes', 'Yes', 'Full', '$', 'Yes', 'No', 'Thai', '10-30', True),
r_example('Yes', 'No', 'Yes', 'No', 'Full', '$$$', 'No', 'Yes', 'French', '>60', False),
r_example('No', 'Yes', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Italian', '0-10', True),
r_example('No', 'Yes', 'No', 'No', 'None', '$', 'Yes', 'No', 'Burger', '0-10', False),
r_example('No', 'No', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Thai', '0-10', True),
r_example('No', 'Yes', 'Yes', 'No', 'Full', '$', 'Yes', 'No', 'Burger', '>60', False),
r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$$$', 'No', 'Yes', 'Italian', '10-30', False),
r_example('No', 'No', 'No', 'No', 'None', '$', 'No', 'No', 'Thai', '0-10', False),
r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$', 'No', 'No', 'Burger', '30-60', True)
]
"""
Explanation: In code:
End of explanation
"""
initial_h = [{'Alt': 'Yes'}]
h = current_best_learning(restaurant, initial_h)
for e in restaurant:
print(guess_value(e, h))
"""
Explanation: Say our initial hypothesis is that there should be an alternative option and lets run the algorithm.
End of explanation
"""
print(h)
"""
Explanation: The predictions are correct. Let's see the hypothesis that accomplished that:
End of explanation
"""
|
CSchoel/learn-wavelets | wavelet-denoising.ipynb | mit | %matplotlib inline
# we will use numpy and matplotlib for all the following examples
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pywt
def doppler(freqs, dt, amp_inc=10, t0=0, f0=np.pi*2):
t = np.arange(len(freqs)) * dt + t0
amp = np.linspace(1, np.sqrt(amp_inc), len(freqs))**2
sig = amp * np.sin(freqs * f0 * t)
return t,sig
def noisify(sig, noise_amp=1):
return sig + (np.random.random(len(sig))-0.5)*2*noise_amp
t_dop, sig_dop = doppler(np.arange(10,20,0.01)[::-1], 0.002)
sig_dop_n2 = noisify(sig_dop, noise_amp=2)
plt.figure(figsize=(16,4))
plt.subplot(121)
plt.plot(t_dop, sig_dop)
plt.title("original signal")
plt.subplot(122)
plt.plot(t_dop, sig_dop_n2)
plt.title("noisy signal")
plt.show()
"""
Explanation: Wavelet denoising with PyWavelets
by Christopher Schölzel
Author's Note:
This notebook is a documentation of my own learning process regarding wavelet denoising.
I have covered the basics of the wavelet transform in another notebook.
Here, I will therefore assume that the reader is familiar with the basics and dive right into denoising.
For a fast implementation of the DWT we will use PyWavelets.
The problem
So, first of all, what is the problem that we want to solve?
Well, we have a signal that is distorted by some kind of noise (we assume white noise here).
End of explanation
"""
def fourier_denoising(sig, min_freq, max_freq, dt=1.0):
trans = np.fft.fft(sig)
freqs = np.fft.fftfreq(len(sig), d=dt)
trans[np.where(np.logical_or(np.abs(freqs) < min_freq, np.abs(freqs) > max_freq))] = 0
res = np.fft.ifft(trans)
return res.real
fsig_dop = np.abs(np.fft.fft(sig_dop))
fsig_dop_n2 = np.abs(np.fft.fft(sig_dop_n2))
freqs_dop = np.fft.fftfreq(len(sig_dop),d=0.002)
idx = np.where(np.abs(freqs_dop) < 50)
plt.plot(freqs_dop[idx], fsig_dop[idx], label="fft(signal)")
plt.plot(freqs_dop[idx], fsig_dop_n2[idx], label="fft(signal + noise)")
plt.legend(loc="best")
plt.show()
fsig_dop_fden = fourier_denoising(sig_dop_n2, 0, 20, dt=0.002)
plt.plot(t_dop, sig_dop, lw=6, alpha=0.3, label="signal w/o. noise")
plt.plot(t_dop, fsig_dop_fden, "r-", label="denoising result")
plt.legend(loc="best")
plt.show()
"""
Explanation: Our objective is to transform the signal on the right side back into the signal on the left side.
If we know the frequency range of the original signal, we could try to do this with an ideal low- or high-pass filter using the fourier transform.
End of explanation
"""
# print the wavelet families available
print(pywt.families())
# print a list of available wavelets from one family
print(pywt.wavelist("sym"))
"""
Explanation: This approach works reasonably well assuming that the frequency range of the original sequence is known.
This may be the case e.g. for sound data, but still the fourier filters do not address the part of the noise that falls within this frequency range.
Introduction to PyWavelets
As we learned before, the discrete wavelet transform is similar to a (windowed) fourier transform and thus there exist approaches for wavelet denoising that are similar to this cropping of frequency ranges.
Before we can have a look into wavelet denoising, we first have to make ourselves familiar with the DWT implementation provided by PyWavelets.
End of explanation
"""
# the haar wavelet
haar = pywt.Wavelet("haar")
h, g, hr, gr = haar.filter_bank
print("Haar wavelet lowpass filter: ", h)
print("Haar wavelet highpass filter:", g)
print(haar) # prints a summary of the wavelet properties
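# Quick sanity check of the statement below: for an orthogonal wavelet such as Haar,
# the reconstruction filters should simply be time-reversed decomposition filters.
print("h reversed:", h[::-1], "-> hr:", hr)
print("g reversed:", g[::-1], "-> gr:", gr)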
"""
Explanation: With these functions we get a list of supported wavelet families and individual wavelets.
Now, let's have a look at a single wavelet that we are already familiar with: The Haar wavelet.
End of explanation
"""
# Daubechies "least asymmetric" wavelets with 12 vanishing points
sym12 = pywt.Wavelet("sym12")
phi_s12, psi_s12, x_s12 = sym12.wavefun(8)
plt.figure(figsize=(16,4))
plt.subplot(121)
plt.title("$\phi$")
plt.plot(x_s12,phi_s12)
plt.subplot(122)
plt.title("$\psi$")
plt.plot(x_s12,psi_s12)
plt.show()
"""
Explanation: The variable filter_bank contains the filters for decomposition and reconstruction.
We printed only the decomposition filters h and g, because for wavelets that constitute an orthogonal basis the reconstruction filters hr and gr are only reversed versions of h and g.
For the Haar wavelet, it is not really interesting to plot the wavelet functions $\phi$ and $\psi$.
Therefore we will now look at another wavelet from the "least asymmetric" wavelet family introduced by Daubechies (also called "Symlets").
End of explanation
"""
cA, cD = pywt.dwt(sig_dop_n2, "sym12", mode="zero")
plt.plot(cA, label="approximation coefficients")
plt.plot(cD, label="detail coefficients")
plt.legend(loc="best")
plt.show()
"""
Explanation: As an interesting side-note, the Symlet wavelets actually do not have a closed form solution.
The scaling function $\phi$ (also called the "father wavelet") and the wavelet function $\psi$ (also called the "mother wavelet") both have to be estimated for any given detail level.
Now that we have seen a wavelet function it is time to learn how to perform a discrete wavelet transform with PyWavelets.
End of explanation
"""
coeffs = pywt.wavedec(sig_dop, "sym12")
approx = coeffs[0]
details = coeffs[1:]
coeffs_n = pywt.wavedec(sig_dop_n2, "sym12")
approx_n = coeffs_n[0]
details_n = coeffs_n[1:]
def plot_dwt(details, approx, xlim=(-300,300), **line_kwargs):
for i in range(len(details)):
plt.subplot(len(details)+1,1,i+1)
d = details[len(details)-1-i]
half = len(d)//2
xvals = np.arange(-half,-half+len(d))* 2**i
plt.plot(xvals, d, **line_kwargs)
plt.xlim(xlim)
plt.title("detail[{}]".format(i))
plt.subplot(len(details)+1,1,len(details)+1)
plt.title("approx")
plt.plot(xvals, approx, **line_kwargs)
plt.xlim(xlim)
plt.figure(figsize=(15,24))
plot_dwt(details, approx)
plot_dwt(details_n, approx_n, color="red", alpha=0.5)
plt.show()
"""
Explanation: We can see the approximation coefficients that actually follow the original signal and the detail coefficients that look more like the noise that we added to the signal.
Actually, the detail coefficients at this level of the DWT are a result of noise.
The original signal does not contain such high frequencies.
We can confirm this expectation by looking at the whole range of coefficients for a full decomposition with the DWT.
End of explanation
"""
def neigh_block(details, n, sigma):
res = []
L0 = int(np.log2(n) // 2)
L1 = max(1, L0 // 2)
L = L0 + 2 * L1
def nb_beta(sigma, L, detail):
S2 = np.sum(detail ** 2)
lmbd = 4.50524 # solution of lmbd - log(lmbd) = 3
beta = (1 - lmbd * L * sigma**2 / S2)
return max(0, beta)
for d in details:
d2 = d.copy()
for start_b in range(0, len(d2), L0):
end_b = min(len(d2), start_b + L0)
start_B = start_b - L1
end_B = start_B + L
if start_B < 0:
end_B -= start_B
start_B = 0
elif end_B > len(d2):
start_B -= end_B - len(d2)
end_B = len(d2)
assert end_B - start_B == L
d2[start_b:end_b] *= nb_beta(sigma, L, d2[start_B:end_B])
res.append(d2)
return res
details_nb = neigh_block(details_n, len(sig_dop), 0.8)
plt.figure(figsize=(15,24))
plot_dwt(details, approx)
plot_dwt(details_n, approx_n, color="red", alpha=0.5)
plot_dwt(details_nb, approx_n, color="green", alpha=0.5, lw=2)
plt.show()
"""
Explanation: Indeed, we can see that detail[0], detail[1] and detail[2] are mostly zero for the original signal but have higher values for the noisy signal.
Another interesting observation is that the coefficients wander from left to right with increasing coarseness of the approximation, which corresponds to the frequency range of the signal: on the left side we have a high frequency, which decreases towards the right.
We can of course use this property for our denoising by setting the detail coefficients of the first levels of the decomposition to zero, but this would only result in a less precise fourier denoising.
Instead we can follow another lead pointed out by Tony Cai and Bernard Silverman[1], who developed the "NeighBlock" method.
<a name="ref1">[1]</a> Cai, T. T. & Silverman, B. W. Incorporating information on neighbouring coefficients into wavelet estimation. Sankhyā: The Indian Journal of Statistics, Series B 127–148 (2001)
Wavelet denoising with NeighBlock
In general it is more likely that a detail coefficient is part of the signal if the neighboring coefficients also contain some signal.
NeighBlock therefore shrinks coefficients in blocks rather than individually: each detail level is split into blocks of length $L_0 \approx \log_2(n)/2$, every block is extended by $L_1$ coefficients on both sides to form a neighborhood of length $L = L_0 + 2L_1$, and the whole block is multiplied by the shrinkage factor $\beta = \max\left(0,\, 1 - \lambda L \sigma^2 / S^2\right)$, where $S^2$ is the energy of the neighborhood, $\sigma$ the noise level and $\lambda \approx 4.505$ the solution of $\lambda - \log\lambda = 3$. This is what the neigh_block function above implements.
End of explanation
"""
sig_dop_dn = pywt.waverec([approx_n] + details_nb, "sym12")
plt.figure(figsize=(15,4))
plt.title("denoised signal vs original signal")
plt.plot(sig_dop)
plt.plot(sig_dop_dn)
#plt.plot(fsig_dop_fden)
plt.show()
"""
Explanation: We now rebuild the denoised signal from the (unchanged) approximation coefficients and the shrunk detail coefficients with the inverse DWT (pywt.waverec) and compare it to the noise-free original signal.
End of explanation
"""
|
judithyueli/pyFKF | .ipynb_checkpoints/FristExample-checkpoint.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
from CO2simulation import CO2simulation
import matplotlib.pyplot as plt
import numpy as np
import visualizeCO2 as vco2
"""
Explanation: Fast Kalman Filter for Temporal-spatial Data Analysis
End of explanation
"""
CO2 = CO2simulation('low')
data = []
x = []
for i in range(10):
data.append(CO2.move_and_sense())
x.append(CO2.x)
param = vco2.getImgParam('low')
vco2.plotCO2map(x,param)
plt.show()
"""
Explanation: Tracking a CO$_2$ Plume
CO$_2$ from an industrial site can be compressed and injected into a deep saline aquifer for storage. This technology is called CO$_2$ capture and storage or CCS, proposed in (TODO) to combat global warming. As CO$_2$ is lighter than the saline water, it may leak through a natural fracture and contaminate the drinking water. Therefore, monitoring and predicting the long-term fate of CO$_2$ at the deep aquifer level is crucial as it will provide an early warning of CO$_2$ leakage. The goal is to interpret the time-series data recorded in the seismic sensors into spatial maps of a moving CO$_2$ plume, a problem very similar to the CT scanning widely used in medical imaging.
The goal is
* Predict and monitor the location of CO$_2$ plume
Simulating the Movement of CO$_2$ Plume
Here is a simulated CO$_2$ plume for $5$ days resulted from injecting $300$ tons of CO$_2$ at a depth of $1657m$.
$$ x_{k+1} = f(x_k) + w $$
run code that displays the simulated moving CO$_2$ plume, stored the plume data in SQL?? (TODO)
End of explanation
"""
reload(vco2)
vco2.plotCO2data(data,0,47)
"""
Explanation: Simulating the Sensor Measurement
The sensor measures the travel time of a seismic signal from a source to a receiver.
$$ y = Hx + v $$
$x$ is the grid block value of CO$_2$ slowness, an indicator of how much CO$_2$ is in a block. The product $Hx$ simulates the travel time measurements by integrating $x$ along a ray path. $v$ is the measurement noise.
The presence of CO$_2$ slows down the seismic signal and increases its travel time along a ray path. If the ray path does not intercepts the CO$_2$ plume, the travel time remains constant over time (Ray path 1), otherwise it tends to increase once the CO$_2$ plume intercepts the ray path (Ray path 2).
End of explanation
"""
np.dot(1,5)
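# Hedged toy illustration of the measurement model y = Hx + v with made-up sizes
# (these are NOT the simulator's real H matrix or state dimensions):
H_toy = np.random.rand(5, 20) # 5 ray paths crossing 20 grid blocks
x_toy = np.zeros(20)
x_toy[8:12] = 1.0 # CO2 slowness present in a few blocks
y_toy = np.dot(H_toy, x_toy) + np.random.normal(0, 0.1, 5) # noisy travel times
print y_toy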
run runCO2simulation
"""
Explanation: TODO:
Fig: Run animation/image of the ray path (shooting from one source and receiver) on top of a CO$_2$ plume and display the travel time changes over time.
Fig: Show the time-series data (Path 1 and Path 2) at a receiver with and without noise.
optional: run getraypath will give me all the index of the cells and the length of the ray path within each cell, this can help me compute the travel time along this particular ray path
Kalman filtering
Initialization step
Define $x$, $P$. Before injection took place, there was no CO$_2$ in the aquifer.
End of explanation
"""
from filterpy.common import Q_discrete_white_noise
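# Hedged sketch of the initialization step described above; the real kf object
# appears to be created elsewhere (e.g. inside runCO2simulation), and the
# dimensions used here are illustrative only.
from filterpy.kalman import KalmanFilter
kf_demo = KalmanFilter(dim_x=4, dim_z=2)
kf_demo.x = np.zeros(4) # state x: no CO2 anywhere before injection
kf_demo.P = np.eye(4) * 1e-2 # small initial covariance P
print kf_demo.x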
kf.F = np.diag(np.ones(dim_x))
# kf.Q = Q_discrete_white_noise(dim = dim_x, dt = 0.1, var = 2.35)
kf.Q = 2.5
kf.predict()
print kf.x[:10]
"""
Explanation: Implementing the Prediction Step
$$ x_{k+1} = x_{k} + w_k $$
Note here that a simplified random walk forecast model is used as a substitute for $f(x)$. The advantage of using a random walk forecast model is that we are now dealing with a linear instead of a nonlinear filtering problem, and the computational cost is much lower as we don't need to evaluate $f(x)$. However, when $dt$ is very large, this random walk forecast model will give poor predictions, and the prediction error cannot be well approximated by $w_k \sim N(0,Q)$, a zero-mean Gaussian process noise term. Therefore, the random walk forecast model is only useful when the measurements are sampled at a high frequency, and $Q$ has to be selected to reflect the true model error.
End of explanation
"""
kf.H = CO2.H_mtx
kf.R *= 0.5
z = data[0]
kf.update(z)
"""
Explanation: Implementing the Update Step
End of explanation
"""
from HiKF import HiKF
hikf = HiKF(dim_x, dim_z)
hikf.x
"""
Explanation: TODO
- Fig: Estimate at k, Forecast at k+1, Estimate at k+1, True at k+1
- A table showing:
x: the time CO2 reaches the monitoring well
y: the time CO2 reaches the ray path
PREDICT: x var y UPDATE: x var y
- Fig: MSE vs time
- Fig: Data fitting, slope 45 degree indicates a perfect fit
Use HiKF instead of KF
End of explanation
"""
|
rahulbakshee/TensorFlow-Basics | 01_tf_basics.ipynb | mit | # import
import tensorflow as tf
"""
Explanation: 01 TensorFlow Basics
End of explanation
"""
#add a constant to the graph
hello = tf.constant("TensorFlow Playground")
#create tf session
sess = tf.Session()
#run the session
print(sess.run(hello))
#tf.constant
a = tf.constant(3.0, tf.float32) #to specify a constant right away
b = tf.constant(5)
sess = tf.Session()
print(sess.run(a))
print(sess.run(b), b.dtype)
#tf.constant for matrix multiplications
mat1 = tf.constant([[6., 0.]])
print("mat1 shape:", mat1.shape)
mat2 = tf.constant([[-0.5], [9]])
print("mat2 shape:", mat2.shape)
with tf.Session() as sess:
prod = tf.matmul(mat1, mat2)
print(sess.run(prod))
print("finally shape", prod.shape)
#tf.placeholder
#to specify a placeholder and value will be provided later
c = tf.placeholder(tf.float32)
d = tf.placeholder(tf.float32)
#operation
addition = tf.add(c,d)
product = tf.multiply(c,d)
sess = tf.Session()
print(sess.run(addition, feed_dict={c: 10, d: -2}))
print(sess.run(product, {c: 25, d: 1.2}))
#tf.Variables allow us to add trainable parameters to a graph.
#They are constructed with a type and initial value:
w = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
model = w*x + b
#To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows:
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
#Since x is a placeholder, we can evaluate model for several values of x simultaneously as follows:
print(sess.run(model, {x: [1,2,3,4]}))
#We've created a model, but we don't know how good it is yet.
#To evaluate the model on training data, we need a y placeholder to provide the desired values,
#and we need to write a loss function.
y = tf.placeholder(tf.float32)
#squaring the error
squared_deltas = tf.square(model - y)
#sum all the squared errors
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
sess.close()
#after getting the optimal parameters we can assign the final optimal values to our tf.Variable using tf.assign
fixw = tf.assign(w, [-1])
fixb = tf.assign(b, [1])
sess = tf.Session()
sess.run([fixw, fixb])
print(sess.run(loss,{x:[1,2,3,4], y:[0,-1,-2,-3]}))
sess.close()
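# Quick examples of the tensor ranks listed below (values taken from the text):
rank0 = tf.constant(3) # rank 0: a scalar, shape []
rank1 = tf.constant([1, 2, 3]) # rank 1: a vector, shape [3]
rank2 = tf.constant([[1, 2, 3], [4, 5, 6]]) # rank 2: a matrix, shape [2, 3]
print(rank0.shape, rank1.shape, rank2.shape)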
"""
Explanation: Tensors
A tensor is an n-dimensional array
0-d tensor: scalar (number)
1-d tensor: vector
2-d tensor: matrix
...
Tensor's Rank
The number of dimensions in a tensor.
[]: a rank 0 tensor
[1,2,3]: a rank 1 tensor - a vector with shape [3]
[[1,2,3], [4,5,6]]: a rank 2 tensor- matrix with shape [2,3]
[[[1,2,3]], [[7,8,9]]]: a rank 3 tensor shape [2,1,3]
Computational Graph
Series of tensorflow operations arranged into graph of nodes. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime.
End of explanation
"""
#imports
import numpy as np
import tensorflow as tf
#model parameters
w = tf.Variable([.3], tf.float32)
b = tf.Variable([.3], tf.float32)
#model input and output
x = tf.placeholder(tf.float32)
model = w * x + b
y = tf.placeholder(tf.float32)
#loss
loss = tf.reduce_sum(tf.square(model-y))
#optimiser
optimiser = tf.train.GradientDescentOptimizer(0.01)
train = optimiser.minimize(loss)
#training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
#initialise the variables
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
#training loop
for i in range(1000):
sess.run(train, {x:x_train, y:y_train})
#accuracy
final_w, final_b, final_loss = sess.run([w,b,loss], {x:x_train, y:y_train})
print("w: %s b: %s loss: %s" %(final_w, final_b, final_loss))
sess.close()
"""
Explanation: complete program
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/bigbigan_with_tf_hub.ipynb | apache-2.0 | # Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
module_path = 'https://tfhub.dev/deepmind/bigbigan-resnet50/1' # ResNet-50
# module_path = 'https://tfhub.dev/deepmind/bigbigan-revnet50x4/1' # RevNet-50 x4
"""
Explanation: Generating Images with BigBiGAN
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/bigbigan_with_tf_hub"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bigbigan_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bigbigan_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/bigbigan_with_tf_hub.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/s?q=experts%2Fbert"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See TF Hub models</a></td>
</table>
This notebook demonstrates the BigBiGAN models available on TF Hub.
BigBiGAN extends standard (Big)GANs by adding an encoder module that can be used for unsupervised representation learning. Roughly speaking, the encoder inverts the generator by predicting the latents z given real data x. See the BigBiGAN paper on arXiv [1] for more information about these models.
After connecting to a runtime, get started by following these instructions:
(Optional) Update the selected module_path in the first code cell below to load a BigBiGAN generator for a different encoder architecture.
Click Runtime > Run all to run each cell in order. Afterwards, the outputs, including visualizations of BigBiGAN samples and reconstructions, will automatically appear below.
Note: if you run into any issues, it can help to click Runtime > Restart and run all... to restart your runtime and rerun all cells from scratch.
[1] Jeff Donahue and Karen Simonyan. Large Scale Adversarial Representation Learning. arxiv:1907.02544, 2019.
First, set the module path. By default, we load the smaller BigBiGAN model with the ResNet-50-based encoder from https://tfhub.dev/deepmind/bigbigan-resnet50/1. To load the larger RevNet-50-x4-based model used to achieve the best representation learning results, comment out the active module_path setting and uncomment the other one.
End of explanation
"""
import io
import IPython.display
import PIL.Image
from pprint import pformat
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub
"""
Explanation: Setup
End of explanation
"""
def imgrid(imarray, cols=4, pad=1, padval=255, row_major=True):
"""Lays out a [N, H, W, C] image array as a single image grid."""
pad = int(pad)
if pad < 0:
raise ValueError('pad must be non-negative')
cols = int(cols)
assert cols >= 1
N, H, W, C = imarray.shape
rows = N // cols + int(N % cols != 0)
batch_pad = rows * cols - N
assert batch_pad >= 0
post_pad = [batch_pad, pad, pad, 0]
pad_arg = [[0, p] for p in post_pad]
imarray = np.pad(imarray, pad_arg, 'constant', constant_values=padval)
H += pad
W += pad
grid = (imarray
.reshape(rows, cols, H, W, C)
.transpose(0, 2, 1, 3, 4)
.reshape(rows*H, cols*W, C))
if pad:
grid = grid[:-pad, :-pad]
return grid
def interleave(*args):
"""Interleaves input arrays of the same shape along the batch axis."""
if not args:
raise ValueError('At least one argument is required.')
a0 = args[0]
if any(a.shape != a0.shape for a in args):
raise ValueError('All inputs must have the same shape.')
if not a0.shape:
raise ValueError('Inputs must have at least one axis.')
out = np.transpose(args, [1, 0] + list(range(2, len(a0.shape) + 1)))
out = out.reshape(-1, *a0.shape[1:])
return out
def imshow(a, format='png', jpeg_fallback=True):
"""Displays an image in the given format."""
a = a.astype(np.uint8)
data = io.BytesIO()
PIL.Image.fromarray(a).save(data, format)
im_data = data.getvalue()
try:
disp = IPython.display.display(IPython.display.Image(im_data))
except IOError:
if jpeg_fallback and format != 'jpeg':
print ('Warning: image was too large to display in format "{}"; '
'trying jpeg instead.').format(format)
return imshow(a, format='jpeg')
else:
raise
return disp
def image_to_uint8(x):
"""Converts [-1, 1] float array to [0, 255] uint8."""
x = np.asarray(x)
x = (256. / 2.) * (x + 1.)
x = np.clip(x, 0, 255)
x = x.astype(np.uint8)
return x
"""
Explanation: Define some functions for displaying images
End of explanation
"""
# module = hub.Module(module_path, trainable=True, tags={'train'}) # training
module = hub.Module(module_path) # inference
for signature in module.get_signature_names():
print('Signature:', signature)
print('Inputs:', pformat(module.get_input_info_dict(signature)))
print('Outputs:', pformat(module.get_output_info_dict(signature)))
print()
"""
Explanation: Load the BigBiGAN TF Hub module and display its available functionality
End of explanation
"""
class BigBiGAN(object):
def __init__(self, module):
"""Initialize a BigBiGAN from the given TF Hub module."""
self._module = module
def generate(self, z, upsample=False):
"""Run a batch of latents z through the generator to generate images.
Args:
z: A batch of 120D Gaussian latents, shape [N, 120].
Returns: a batch of generated RGB images, shape [N, 128, 128, 3], range
[-1, 1].
"""
outputs = self._module(z, signature='generate', as_dict=True)
return outputs['upsampled' if upsample else 'default']
def make_generator_ph(self):
"""Creates a tf.placeholder with the dtype & shape of generator inputs."""
info = self._module.get_input_info_dict('generate')['z']
return tf.placeholder(dtype=info.dtype, shape=info.get_shape())
def gen_pairs_for_disc(self, z):
"""Compute generator input pairs (G(z), z) for discriminator, given z.
Args:
z: A batch of latents (120D standard Gaussians), shape [N, 120].
Returns: a tuple (G(z), z) of discriminator inputs.
"""
# Downsample 256x256 image x for 128x128 discriminator input.
x = self.generate(z)
return x, z
def encode(self, x, return_all_features=False):
"""Run a batch of images x through the encoder.
Args:
x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range
[-1, 1].
return_all_features: If True, return all features computed by the encoder.
Otherwise (default) just return a sample z_hat.
Returns: the sample z_hat of shape [N, 120] (or a dict of all features if
return_all_features).
"""
outputs = self._module(x, signature='encode', as_dict=True)
return outputs if return_all_features else outputs['z_sample']
def make_encoder_ph(self):
"""Creates a tf.placeholder with the dtype & shape of encoder inputs."""
info = self._module.get_input_info_dict('encode')['x']
return tf.placeholder(dtype=info.dtype, shape=info.get_shape())
def enc_pairs_for_disc(self, x):
"""Compute encoder input pairs (x, E(x)) for discriminator, given x.
Args:
x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range
[-1, 1].
Returns: a tuple (downsample(x), E(x)) of discriminator inputs.
"""
# Downsample 256x256 image x for 128x128 discriminator input.
x_down = tf.nn.avg_pool(x, ksize=2, strides=2, padding='SAME')
z = self.encode(x)
return x_down, z
def discriminate(self, x, z):
"""Compute the discriminator scores for pairs of data (x, z).
(x, z) must be batches with the same leading batch dimension, and joint
scores are computed on corresponding pairs x[i] and z[i].
Args:
x: A batch of data (128x128 RGB images), shape [N, 128, 128, 3], range
[-1, 1].
z: A batch of latents (120D standard Gaussians), shape [N, 120].
Returns:
A dict of scores:
score_xz: the joint scores for the (x, z) pairs.
score_x: the unary scores for x only.
score_z: the unary scores for z only.
"""
inputs = dict(x=x, z=z)
return self._module(inputs, signature='discriminate', as_dict=True)
def reconstruct_x(self, x, use_sample=True, upsample=False):
"""Compute BigBiGAN reconstructions of images x via G(E(x)).
Args:
x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range
[-1, 1].
use_sample: takes a sample z_hat ~ E(x). Otherwise, deterministically
use the mean. (Though a sample z_hat may be far from the mean z,
typically the resulting recons G(z_hat) and G(z) are very
similar.
upsample: if set, upsample the reconstruction to the input resolution
(256x256). Otherwise return the raw lower resolution generator output
(128x128).
Returns: a batch of recons G(E(x)), shape [N, 256, 256, 3] if
`upsample`, otherwise [N, 128, 128, 3].
"""
if use_sample:
z = self.encode(x)
else:
z = self.encode(x, return_all_features=True)['z_mean']
recons = self.generate(z, upsample=upsample)
return recons
def losses(self, x, z):
"""Compute per-module BigBiGAN losses given data & latent sample batches.
Args:
x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range
[-1, 1].
z: A batch of latents (120D standard Gaussians), shape [M, 120].
For the original BigBiGAN losses, pass batches of size N=M=2048, with z's
sampled from a 120D standard Gaussian (e.g., np.random.randn(2048, 120)),
and x's sampled from the ImageNet (ILSVRC2012) training set with the
"ResNet-style" preprocessing from:
https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_preprocessing.py
Returns:
A dict of per-module losses:
disc: loss for the discriminator.
enc: loss for the encoder.
gen: loss for the generator.
"""
# Compute discriminator scores on (x, E(x)) pairs.
# Downsample 256x256 image x for 128x128 discriminator input.
scores_enc_x_dict = self.discriminate(*self.enc_pairs_for_disc(x))
scores_enc_x = tf.concat([scores_enc_x_dict['score_xz'],
scores_enc_x_dict['score_x'],
scores_enc_x_dict['score_z']], axis=0)
# Compute discriminator scores on (G(z), z) pairs.
scores_gen_z_dict = self.discriminate(*self.gen_pairs_for_disc(z))
scores_gen_z = tf.concat([scores_gen_z_dict['score_xz'],
scores_gen_z_dict['score_x'],
scores_gen_z_dict['score_z']], axis=0)
disc_loss_enc_x = tf.reduce_mean(tf.nn.relu(1. - scores_enc_x))
disc_loss_gen_z = tf.reduce_mean(tf.nn.relu(1. + scores_gen_z))
disc_loss = disc_loss_enc_x + disc_loss_gen_z
enc_loss = tf.reduce_mean(scores_enc_x)
gen_loss = tf.reduce_mean(-scores_gen_z)
return dict(disc=disc_loss, enc=enc_loss, gen=gen_loss)
"""
Explanation: Define a wrapper class for convenient access to the various functions
End of explanation
"""
bigbigan = BigBiGAN(module)
# Make input placeholders for x (`enc_ph`) and z (`gen_ph`).
enc_ph = bigbigan.make_encoder_ph()
gen_ph = bigbigan.make_generator_ph()
# Compute samples G(z) from encoder input z (`gen_ph`).
gen_samples = bigbigan.generate(gen_ph)
# Compute reconstructions G(E(x)) of encoder input x (`enc_ph`).
recon_x = bigbigan.reconstruct_x(enc_ph, upsample=True)
# Compute encoder features used for representation learning evaluations given
# encoder input x (`enc_ph`).
enc_features = bigbigan.encode(enc_ph, return_all_features=True)
# Compute discriminator scores for encoder pairs (x, E(x)) given x (`enc_ph`)
# and generator pairs (G(z), z) given z (`gen_ph`).
disc_scores_enc = bigbigan.discriminate(*bigbigan.enc_pairs_for_disc(enc_ph))
disc_scores_gen = bigbigan.discriminate(*bigbigan.gen_pairs_for_disc(gen_ph))
# Compute losses.
losses = bigbigan.losses(enc_ph, gen_ph)
"""
Explanation: Create tensors to be used later for computing samples, reconstructions, discriminator scores, and losses
End of explanation
"""
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
"""
Explanation: Create a TensorFlow session and initialize variables
End of explanation
"""
feed_dict = {gen_ph: np.random.randn(32, 120)}
_out_samples = sess.run(gen_samples, feed_dict=feed_dict)
print('samples shape:', _out_samples.shape)
imshow(imgrid(image_to_uint8(_out_samples), cols=4))
"""
Explanation: Generator samples
First, we visualize samples from the pre-trained BigBiGAN generator by sampling generator inputs z from a standard Gaussian (via np.random.randn) and displaying the images it produces. So far we are not going beyond the capabilities of a standard GAN - we are just using the generator (and ignoring the encoder) for now.
End of explanation
"""
def get_flowers_data():
"""Returns a [32, 256, 256, 3] np.array of preprocessed TF-Flowers samples."""
import tensorflow_datasets as tfds
ds, info = tfds.load('tf_flowers', split='train', with_info=True)
# Just get the images themselves as we don't need labels for this demo.
ds = ds.map(lambda x: x['image'])
# Filter out small images (with minor edge length <256).
ds = ds.filter(lambda x: tf.reduce_min(tf.shape(x)[:2]) >= 256)
# Take the center square crop of the image and resize to 256x256.
def crop_and_resize(image):
imsize = tf.shape(image)[:2]
minor_edge = tf.reduce_min(imsize)
start = (imsize - minor_edge) // 2
stop = start + minor_edge
cropped_image = image[start[0] : stop[0], start[1] : stop[1]]
resized_image = tf.image.resize_bicubic([cropped_image], [256, 256])[0]
return resized_image
ds = ds.map(crop_and_resize)
# Convert images from [0, 255] uint8 to [-1, 1] float32.
ds = ds.map(lambda image: tf.cast(image, tf.float32) / (255. / 2.) - 1)
# Take the first 32 samples.
ds = ds.take(32)
return np.array(list(tfds.as_numpy(ds)))
test_images = get_flowers_data()
"""
Explanation: Load test_images from the TF-Flowers dataset
BigBiGAN was trained on ImageNet, but since that dataset is too large to work with in this demo, we use the smaller TF-Flowers [1] dataset as inputs for visualizing reconstructions and computing encoder features.
In the cell below, we load TF-Flowers (downloading the dataset if needed) and store a fixed batch of 256x256 RGB image samples in a NumPy array called test_images.
[1] https://tensorflow.google.cn/datasets/catalog/tf_flowers
End of explanation
"""
test_images_batch = test_images[:16]
_out_recons = sess.run(recon_x, feed_dict={enc_ph: test_images_batch})
print('reconstructions shape:', _out_recons.shape)
inputs_and_recons = interleave(test_images_batch, _out_recons)
print('inputs_and_recons shape:', inputs_and_recons.shape)
imshow(imgrid(image_to_uint8(inputs_and_recons), cols=2))
"""
Explanation: Reconstructions
Now we visualize BigBiGAN reconstructions by passing real images through the encoder and back through the generator, computing G(E(x)) for given images x. The input images x are shown in the left column below, and the corresponding reconstructions in the right column.
Note that the reconstructions are not pixel-perfect matches of the input images; rather, they tend to capture the higher-level semantic content of the input while "forgetting" most of the low-level detail. This suggests the BigBiGAN encoder may learn to capture the kind of high-level semantic information about images that we would like to see in a representation learning approach.
Also note that the raw reconstructions of the 256x256 input images are at the lower resolution produced by the generator (128x128). We upsample them for visualization purposes.
End of explanation
"""
_out_features = sess.run(enc_features, feed_dict={enc_ph: test_images_batch})
print('AvePool features shape:', _out_features['avepool_feat'].shape)
print('BN+CReLU features shape:', _out_features['bn_crelu_feat'].shape)
"""
Explanation: Encoder features
Now we demonstrate how to compute features from the encoder as used for standard representation learning evaluations.
These features can be used in a linear- or nearest-neighbors-based classifier. We include the standard feature taken after global average pooling (key avepool_feat) as well as the larger "BN+CReLU" feature (key bn_crelu_feat) used to achieve the best results.
End of explanation
"""
feed_dict = {enc_ph: test_images, gen_ph: np.random.randn(32, 120)}
_out_scores_enc, _out_scores_gen, _out_losses = sess.run(
[disc_scores_enc, disc_scores_gen, losses], feed_dict=feed_dict)
print('Encoder scores:', {k: v.mean() for k, v in _out_scores_enc.items()})
print('Generator scores:', {k: v.mean() for k, v in _out_scores_gen.items()})
print('Losses:', _out_losses)
"""
Explanation: Discriminator scores and losses
Finally, we compute the discriminator scores and losses on batches of encoder and generator pairs. These losses could be passed to an optimizer to train BigBiGAN.
We use the batch of images above as the encoder inputs x, computing the encoder scores as D(x, E(x)). For the generator inputs we sample z from a 120D standard Gaussian via np.random.randn, computing the generator scores as D(G(z), z).
The discriminator predicts a joint score score_xz for the (x, z) pairs, as well as unary scores score_x and score_z for x and z alone. It is trained to give high (positive) scores to encoder pairs and low (negative) scores to generator pairs. This mostly holds below, although the unary score_z is negative in both cases, indicating that the encoder outputs E(x) resemble actual samples from a Gaussian.
End of explanation
"""
|
kiote/review-spam-prediction | prediction-model-builder.ipynb | mit | from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
max_features = 5000)
train_data_features = vectorizer.fit_transform(data['lower'])
# Numpy arrays are easy to work with, so convert the result to an
# array
train_data_features = train_data_features.toarray()
# let's see what we have there
vectorizer.get_feature_names()[-5:]
len(data)
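# Illustrative peek at what CountVectorizer produces on a tiny toy corpus
# (separate from the review data used above):
toy_counts = CountVectorizer().fit_transform(["good product", "bad bad product"])
print(toy_counts.toarray())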
"""
Explanation: Create the bag of words
End of explanation
"""
from scipy.spatial.distance import cosine
cosines = {}
# print("First sentence: %s\nSpam: %s\n\n" % (data['lower'][0], data['Status'][0]))
first_vector = train_data_features[0]
for i in range(1, len(data)):
cosines[i] = cosine(first_vector, train_data_features[i])
# print(cosines)
false_status = 0
true_status = 0
FN = 0
for i in range(1, len(data)):
if cosines[i] < 1.0:
if data['Status'][i] == True:
true_status += 1
else:
false_status += 1
else:
if data['Status'][i] == False:
FN += 1
TP = false_status
FP = true_status
F1 = 2*TP/(2*TP+FP+FN)
print("F1 = %0.4f" % F1)
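# Tiny sanity check of scipy's cosine distance used above (illustrative values):
# identical vectors give distance 0, orthogonal vectors give distance 1.
print(cosine([1, 1, 0], [1, 1, 0]), cosine([1, 0, 0], [0, 1, 0]))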
"""
Explanation: Calculate cosine distances
I've tried here to compare the first sentence's vector to all other vectors.
The first vector's status is not spam (=False). I also calculate how many true positives there are (vectors with cosine distance < 1 that are likewise not spam) and how many false positives (cosine distance < 1, but marked as spam).
I'll evaluate each method with the F1 metric: $$F1=\dfrac{2TP}{(2TP + FP + FN)}$$
End of explanation
"""
|
CitrineInformatics/lolo | python/examples/profile/scaling-test.ipynb | apache-2.0 | %matplotlib inline
from matplotlib import pyplot as plt
from tqdm import tqdm_notebook as tqdm
from matminer.datasets.dataset_retrieval import load_dataset
from matminer.featurizers.composition import ElementProperty
from lolopy.learners import RandomForestRegressor
from lolopy.loloserver import find_lolo_jar
from sklearn.ensemble import RandomForestRegressor as SKRFRegressor
from scipy.interpolate import interp1d
from subprocess import PIPE, Popen
from pymatgen import Composition
from time import perf_counter
import pandas as pd
import numpy as np
"""
Explanation: Assessing the Performance of Lolopy
Transferring data to and from the JVM is remarkably costly. In this notebook, we quantify how costly the transfer is compared to the training as a function of training set size. We will use a standard ML problem: predicting the formation energies of materials from the Materials Project.
End of explanation
"""
def time_function(fun, n, *args, **kwargs):
"""Run a certain function and return timing
Args:
fun (function): Function to be evaluated
n (int): Number of times to run function
args: Input to function
Returns:
([float]) Function run times
"""
times = []
for i in range(n):
st = perf_counter()
fun(*args, **kwargs)
times.append(perf_counter() - st)
return times
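# Quick illustrative use of the timing helper on a throwaway numpy workload,
# just to show the call signature; this is not part of the benchmark itself.
print('Example timings:', time_function(np.sort, 3, np.random.rand(10000)))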
"""
Explanation: Make a timing function.
End of explanation
"""
data = load_dataset('mp_nostruct')
"""
Explanation: Create the Dataset
We'll use the Materials Project dataset
Pull down the dataset
End of explanation
"""
data = data[~ data['formula'].isnull()]
"""
Explanation: Eliminate entries without formulas
End of explanation
"""
data = data.sample(1000)
"""
Explanation: Downselect to $10^3$ entries
End of explanation
"""
X = np.array(ElementProperty.from_preset('magpie').featurize_many(data['formula'].apply(Composition), pbar=False))
y = data['e_form'].values
"""
Explanation: Generate some features
End of explanation
"""
lolojar = find_lolo_jar()
def get_scala_timings(X, y, X_run):
"""Train a RF with standard settings using Lolo, generate uncertainties for whole dataset, report timings
Args:
X, y (ndarray): Training dataset
X_run (ndarray): Dataset to evaluate
Returns:
train, expected, uncertainty (float): Training time, expected and uncertainty evaluation times
"""
np.savetxt('train.csv', np.hstack((X, y[:, None])), delimiter=',')
np.savetxt('run.csv', np.hstack((X_run, np.zeros((len(X_run), 1)))), delimiter=',')
p = Popen('scala -J-Xmx8g -cp {} scala-benchmark.scala train.csv run.csv'.format(lolojar), stdout=PIPE,
stderr=PIPE, shell=True)
result = p.stdout.read().decode()
return map(float, result.split(','))
scala_train, scala_expect, scala_uncert = get_scala_timings(X, y, X)
print('Lolo train time:', scala_train)
print('Lolo apply time', scala_expect + scala_uncert)
"""
Explanation: Make a function to run the scala benchmark
End of explanation
"""
model = RandomForestRegressor(num_trees=len(X))
"""
Explanation: Profile Fitting the Model
We want to compare the total time for fitting a model to the time required to send the data over
End of explanation
"""
rf_fit = time_function(model.fit, 16, X, y)
print('Average fit time:', np.mean(rf_fit))
"""
Explanation: Fit the model 16 times, measure the times
End of explanation
"""
x_java, _ = model._convert_train_data(X, y, None)
rf_transfer = time_function(model._convert_train_data, 16, X, y)
print('Average transfer time:', np.mean(rf_transfer))
"""
Explanation: Run only the step of transferring the data to Java, and record the time
End of explanation
"""
rf_apply = time_function(model.predict, 16, X, return_std=True)
print('Average predict time:', np.mean(rf_apply))
rf_apply_transfer = time_function(model._convert_run_data, 16, X)
print('Average transfer time for prediction:', np.mean(rf_apply_transfer))
"""
Explanation: Compute uncertainties
End of explanation
"""
sk_model = SKRFRegressor(n_estimators=len(X), n_jobs=-1)
sk_train = time_function(sk_model.fit, 16, X, y)
print('Sklearn fitting time:', np.mean(sk_train))
"""
Explanation: Time Scikit-Learn
Compare against a scikit-learn model with the same number of trees as Lolo.
End of explanation
"""
results = []
for n in tqdm(np.logspace(1, np.log10(len(X)), 8, dtype=int)):
# Initialize output
r = {'n': n}
# Get the training and test set sizes
X_n = X[:n, :]
y_n = y[:n]
# Time using lolo via Scala
scala_train, scala_expect, scala_uncert = get_scala_timings(X_n, y_n, X)
r['scala_train'] = scala_train
r['scala_apply'] = scala_expect
r['scala_apply_wuncert'] = scala_expect + scala_uncert
# Time using lolo via lolopy
model.set_params(num_trees=len(X_n))
r['lolopy_train'] = np.mean(time_function(model.fit, 16, X_n, y_n))
r['lolopy_train_transfer'] = np.mean(time_function(model._convert_train_data, 16, X_n, y_n))
r['lolopy_apply'] = np.mean(time_function(model.predict, 16, X, return_std=False))
r['lolopy_apply_wuncert'] = np.mean(time_function(model.predict, 16, X, return_std=True))
r['lolopy_apply_transfer'] = np.mean(time_function(model._convert_run_data, 16, X))
model.clear_model() # To save memory
# Time using RF
sk_model = SKRFRegressor(n_estimators=n)
r['sklearn_fit'] = np.mean(time_function(sk_model.fit, 16, X_n, y_n))
r['sklearn_apply'] = np.mean(time_function(sk_model.predict, 16, X))
# Append results and continue
results.append(r)
results = pd.DataFrame(results)
results
"""
Explanation: Compare as a Function of Scale
Measure the performance of each model as a function of training/test set size
End of explanation
"""
fig, ax = plt.subplots()
ax.fill_between(results['n'], results['lolopy_train_transfer'], 0.001, alpha=0.1)
ax.loglog(results['n'], results['lolopy_train'], 'r', label='lolopy')
ax.loglog(results['n'], results['scala_train'], 'b--', label='lolo')
ax.loglog(results['n'], results['sklearn_fit'], 'g:', label='sklearn')
ax.set_ylim(0.005, max(ax.get_ylim()))
ax.set_xlabel('Training Set Size')
ax.set_ylabel('Train Time (s)')
ax.legend()
fig.set_size_inches(3.5, 2.5)
fig.tight_layout()
fig.savefig('training-performance.png')
"""
Explanation: Plot the training results. The blue shading is the data transfer time
End of explanation
"""
fig, axs = plt.subplots(1, 2)
# Plot results without uncertainties
axs[0].loglog(results['n'], len(X) / results['lolopy_apply'], 'r', label='lolopy')
axs[0].loglog(results['n'], len(X) / results['scala_apply'], 'b--', label='lolo')
axs[0].loglog(results['n'], len(X) / results['sklearn_apply'], 'g:', label='sklearn')
axs[0].set_title('Without Uncertainties')
# Plot results with uncertainties
axs[1].loglog(results['n'], len(X) / results['lolopy_apply_wuncert'], 'r', label='lolopy')
axs[1].loglog(results['n'], len(X) / results['scala_apply_wuncert'], 'b--', label='lolo')
axs[1].set_title('With Uncertainties')
for ax in axs:
ax.set_xlabel('Training Set Size')
ax.set_ylabel('Evaluation Speed (entry/s)')
ax.legend()
fig.set_size_inches(6.5, 2.5)
fig.tight_layout()
fig.savefig('evaluation-performance.png')
"""
Explanation: Plot the evaluation speed. Note that the number of trees scales with the training set size (hence the decrease in speed with training set size)
End of explanation
"""
lolopy_timing = interp1d(results['n'], results['lolopy_train'])
lolo_timing = interp1d(results['n'], results['scala_train'])
slowdown = lolopy_timing(100) / lolo_timing(100)
print('Training slowdown: {:.2f}'.format(slowdown))
assert slowdown < 2
lolopy_timing = interp1d(results['n'], results['lolopy_apply'])
lolo_timing = interp1d(results['n'], results['scala_apply'])
slowdown = lolopy_timing(100) / lolo_timing(100)
print('Evaluation without uncertainties slowdown: {:.2f}'.format(slowdown))
assert slowdown < 2
lolopy_timing = interp1d(results['n'], results['lolopy_apply_wuncert'])
lolo_timing = interp1d(results['n'], results['scala_apply_wuncert'])
slowdown = lolopy_timing(100) / lolo_timing(100)
print('Evaluation with uncertainties slowdown: {:.2f}'.format(slowdown))
"""
Explanation: Verify that performance is within acceptable bounds: Less than a 2x slowdown for model training or evaluation at a training set size of 100 entries.
End of explanation
"""
|
boya-zhou/kaggle_bimbo_reformat | notebooks/1_predata_whole.ipynb | mit | agencia_for_cliente_producto = train_dataset[['Cliente_ID','Producto_ID'
,'Agencia_ID']].groupby(['Cliente_ID',
'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()
canal_for_cliente_producto = train_dataset[['Cliente_ID',
'Producto_ID','Canal_ID']].groupby(['Cliente_ID',
'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()
ruta_for_cliente_producto = train_dataset[['Cliente_ID',
'Producto_ID','Ruta_SAK']].groupby(['Cliente_ID',
'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()
gc.collect()
agencia_for_cliente_producto.to_pickle('agencia_for_cliente_producto.csv')
canal_for_cliente_producto.to_pickle('canal_for_cliente_producto.csv')
ruta_for_cliente_producto.to_pickle('ruta_for_cliente_producto.csv')
agencia_for_cliente_producto = pd.read_pickle('agencia_for_cliente_producto.csv')
canal_for_cliente_producto = pd.read_pickle('canal_for_cliente_producto.csv')
ruta_for_cliente_producto = pd.read_pickle('ruta_for_cliente_producto.csv')
# train_dataset['log_demand'] = train_dataset['Demanda_uni_equil'].apply(np.log1p)
pivot_train = pd.pivot_table(data= train_dataset[['Cliente_ID','Producto_ID','log_demand','Semana']],
values='log_demand', index=['Cliente_ID','Producto_ID'],
columns=['Semana'], aggfunc=np.mean,fill_value = 0).reset_index()
pivot_train.head()
pivot_train = pd.merge(left = pivot_train, right = agencia_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])
pivot_train = pd.merge(left = pivot_train, right = canal_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])
pivot_train = pd.merge(left = pivot_train, right = ruta_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])
pivot_train.to_pickle('pivot_train_with_zero.pickle')
pivot_train = pd.read_pickle('pivot_train_with_zero.pickle')
pivot_train.to_pickle('pivot_train_with_nan.pickle')
pivot_train = pd.read_pickle('pivot_train_with_nan.pickle')
pivot_train = pivot_train.rename(columns={3: 'Sem3', 4: 'Sem4',5: 'Sem5', 6: 'Sem6',7: 'Sem7', 8: 'Sem8',9: 'Sem9'})
pivot_train.head()
pivot_train.columns.values
"""
Explanation: Make the train pivot table; duplicates exist when the index is ['Cliente_ID', 'Producto_ID'].
For each (Cliente_ID, Producto_ID) pair, first find its most common Agencia_ID, Canal_ID and Ruta_SAK.
End of explanation
"""
test_dataset = pd.read_csv('origin/test.csv')
test_dataset.head()
test_dataset[test_dataset['Semana'] == 10].shape
test_dataset[test_dataset['Semana'] == 11].shape
pivot_test = pd.merge(left=pivot_train, right = test_dataset[['id','Cliente_ID','Producto_ID','Semana']],
on =['Cliente_ID','Producto_ID'],how = 'inner' )
pivot_test.head()
pivot_test_new = pd.merge(pivot_train[['Cliente_ID', 'Producto_ID', 'Sem3', 'Sem4', 'Sem5', 'Sem6', 'Sem7',
'Sem8', 'Sem9']],right = test_dataset, on = ['Cliente_ID','Producto_ID'],how = 'right')
pivot_test_new.head()
pivot_test_new.to_pickle('pivot_test.pickle')
pivot_test.to_pickle('pivot_test.pickle')
pivot_test = pd.read_pickle('pivot_test.pickle')
pivot_test.head()
"""
Explanation: Make the pivot table for the test set.
End of explanation
"""
train_dataset.head()
import itertools
col_list = ['Agencia_ID', 'Ruta_SAK', 'Cliente_ID', 'Producto_ID']
all_combine = itertools.combinations(col_list,2)
list_2element_combine = [list(tuple) for tuple in all_combine]
col_1elm_2elm = col_list + list_2element_combine
col_1elm_2elm
train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy()
"""
Explanation: Group by Agencia_ID, Ruta_SAK, Cliente_ID and Producto_ID, both as single columns and as pairs.
End of explanation
"""
def categorical_useful(train_dataset,pivot_train):
# if is_train:
# train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy()
# elif is_train == False:
train_dataset_test = train_dataset.copy()
log_demand_by_agen = train_dataset_test[['Agencia_ID','log_demand']].groupby('Agencia_ID').mean().reset_index()
log_demand_by_ruta = train_dataset_test[['Ruta_SAK','log_demand']].groupby('Ruta_SAK').mean().reset_index()
log_demand_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').mean().reset_index()
log_demand_by_producto = train_dataset_test[['Producto_ID','log_demand']].groupby('Producto_ID').mean().reset_index()
log_demand_by_agen_ruta = train_dataset_test[['Agencia_ID', 'Ruta_SAK',
'log_demand']].groupby(['Agencia_ID', 'Ruta_SAK']).mean().reset_index()
log_demand_by_agen_cliente = train_dataset_test[['Agencia_ID', 'Cliente_ID',
'log_demand']].groupby(['Agencia_ID', 'Cliente_ID']).mean().reset_index()
log_demand_by_agen_producto = train_dataset_test[['Agencia_ID', 'Producto_ID',
'log_demand']].groupby(['Agencia_ID', 'Producto_ID']).mean().reset_index()
log_demand_by_ruta_cliente = train_dataset_test[['Ruta_SAK', 'Cliente_ID',
'log_demand']].groupby(['Ruta_SAK', 'Cliente_ID']).mean().reset_index()
log_demand_by_ruta_producto = train_dataset_test[['Ruta_SAK', 'Producto_ID',
'log_demand']].groupby(['Ruta_SAK', 'Producto_ID']).mean().reset_index()
log_demand_by_cliente_producto = train_dataset_test[['Cliente_ID', 'Producto_ID',
'log_demand']].groupby(['Cliente_ID', 'Producto_ID']).mean().reset_index()
log_demand_by_cliente_producto_agen = train_dataset_test[[
'Cliente_ID','Producto_ID','Agencia_ID','log_demand']].groupby(['Cliente_ID',
'Agencia_ID','Producto_ID']).mean().reset_index()
log_sum_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').sum().reset_index()
ruta_freq_semana = train_dataset[['Semana','Ruta_SAK']].groupby(['Ruta_SAK']).count().reset_index()
clien_freq_semana = train_dataset[['Semana','Cliente_ID']].groupby(['Cliente_ID']).count().reset_index()
agen_freq_semana = train_dataset[['Semana','Agencia_ID']].groupby(['Agencia_ID']).count().reset_index()
prod_freq_semana = train_dataset[['Semana','Producto_ID']].groupby(['Producto_ID']).count().reset_index()
pivot_train = pd.merge(left = pivot_train,right = ruta_freq_semana,
how = 'left', on = ['Ruta_SAK']).rename(columns={'Semana': 'ruta_freq'})
pivot_train = pd.merge(left = pivot_train,right = clien_freq_semana,
how = 'left', on = ['Cliente_ID']).rename(columns={'Semana': 'clien_freq'})
pivot_train = pd.merge(left = pivot_train,right = agen_freq_semana,
how = 'left', on = ['Agencia_ID']).rename(columns={'Semana': 'agen_freq'})
pivot_train = pd.merge(left = pivot_train,right = prod_freq_semana,
how = 'left', on = ['Producto_ID']).rename(columns={'Semana': 'prod_freq'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen,
how = 'left', on = ['Agencia_ID']).rename(columns={'log_demand': 'agen_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_ruta,
how = 'left', on = ['Ruta_SAK']).rename(columns={'log_demand': 'ruta_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_cliente,
how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_producto,
how = 'left', on = ['Producto_ID']).rename(columns={'log_demand': 'producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen_ruta,
how = 'left', on = ['Agencia_ID', 'Ruta_SAK']).rename(columns={'log_demand': 'agen_ruta_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen_cliente,
how = 'left', on = ['Agencia_ID', 'Cliente_ID']).rename(columns={'log_demand': 'agen_cliente_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_agen_producto,
how = 'left', on = ['Agencia_ID', 'Producto_ID']).rename(columns={'log_demand': 'agen_producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_ruta_cliente,
how = 'left', on = ['Ruta_SAK', 'Cliente_ID']).rename(columns={'log_demand': 'ruta_cliente_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_ruta_producto,
how = 'left', on = ['Ruta_SAK', 'Producto_ID']).rename(columns={'log_demand': 'ruta_producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_cliente_producto,
how = 'left', on = ['Cliente_ID', 'Producto_ID']).rename(columns={'log_demand': 'cliente_producto_for_log_de'})
pivot_train = pd.merge(left = pivot_train,
right = log_sum_by_cliente,
how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_sum'})
pivot_train = pd.merge(left = pivot_train,
right = log_demand_by_cliente_producto_agen,
how = 'left', on = ['Cliente_ID', 'Producto_ID',
'Agencia_ID']).rename(columns={'log_demand': 'cliente_producto_agen_for_log_sum'})
pivot_train['corr'] = pivot_train['producto_for_log_de'] * pivot_train['cliente_for_log_de'] / train_dataset_test['log_demand'].median()
return pivot_train
def define_time_features(df, to_predict = 't_plus_1' , t_0 = 8):
if(to_predict == 't_plus_1' ):
df['t_min_1'] = df['Sem'+str(t_0-1)]
if(to_predict == 't_plus_2' ):
df['t_min_6'] = df['Sem'+str(t_0-6)]
df['t_min_2'] = df['Sem'+str(t_0-2)]
df['t_min_3'] = df['Sem'+str(t_0-3)]
df['t_min_4'] = df['Sem'+str(t_0-4)]
df['t_min_5'] = df['Sem'+str(t_0-5)]
if(to_predict == 't_plus_1' ):
df['t1_min_t2'] = df['t_min_1'] - df['t_min_2']
df['t1_min_t3'] = df['t_min_1'] - df['t_min_3']
df['t1_min_t4'] = df['t_min_1'] - df['t_min_4']
df['t1_min_t5'] = df['t_min_1'] - df['t_min_5']
if(to_predict == 't_plus_2' ):
df['t2_min_t6'] = df['t_min_2'] - df['t_min_6']
df['t3_min_t6'] = df['t_min_3'] - df['t_min_6']
df['t4_min_t6'] = df['t_min_4'] - df['t_min_6']
df['t5_min_t6'] = df['t_min_5'] - df['t_min_6']
df['t2_min_t3'] = df['t_min_2'] - df['t_min_3']
df['t2_min_t4'] = df['t_min_2'] - df['t_min_4']
df['t2_min_t5'] = df['t_min_2'] - df['t_min_5']
df['t3_min_t4'] = df['t_min_3'] - df['t_min_4']
df['t3_min_t5'] = df['t_min_3'] - df['t_min_5']
df['t4_min_t5'] = df['t_min_4'] - df['t_min_5']
return df
def lin_regr(row, to_predict, t_0, semanas_numbers):
row = row.copy()
row.index = semanas_numbers
row = row.dropna()
if len(row) > 2:
X = np.ones(shape=(len(row), 2))
X[:,1] = row.index
y = row.values
regr = linear_model.LinearRegression()
regr.fit(X, y)
if(to_predict == 't_plus_1'):
return regr.predict([[1,t_0+1]])[0]
elif(to_predict == 't_plus_2'):
return regr.predict([[1,t_0+2]])[0]
else:
return None
def lin_regr_features(pivot_df,to_predict, semanas_numbers,t_0):
pivot_df = pivot_df.copy()
semanas_names = ['Sem%i' %i for i in semanas_numbers]
columns = ['Sem%i' %i for i in semanas_numbers]
columns.append('Producto_ID')
pivot_grouped = pivot_df[columns].groupby('Producto_ID').aggregate('mean')
pivot_grouped['LR_prod'] = np.zeros(len(pivot_grouped))
pivot_grouped['LR_prod'] = pivot_grouped[semanas_names].apply(lin_regr, axis = 1,
to_predict = to_predict,
t_0 = t_0, semanas_numbers = semanas_numbers )
pivot_df = pd.merge(pivot_df, pivot_grouped[['LR_prod']], how='left', left_on = 'Producto_ID', right_index=True)
pivot_df['LR_prod_corr'] = pivot_df['LR_prod'] * pivot_df['cliente_for_log_sum'] / 100
return pivot_df
cliente_tabla = pd.read_csv('origin/cliente_tabla.csv')
town_state = pd.read_csv('origin/town_state.csv')
town_state['town_id'] = town_state['Town'].str.split()
town_state['town_id'] = town_state['Town'].str.split(expand = True)
def add_pro_info(dataset):
train_basic_feature = dataset[['Cliente_ID','Producto_ID','Agencia_ID']].copy()
train_basic_feature.drop_duplicates(inplace = True)
cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )
# print cliente_per_town.shape
cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )
# print cliente_per_town.shape
cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()
# print cliente_per_town_count.head()
cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','town_id','Agencia_ID']],
cliente_per_town_count,on = 'town_id',how = 'inner')
# print cliente_per_town_count_final.head()
cliente_per_town_count_final.drop_duplicates(inplace = True)
dataset_final = pd.merge(dataset,cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente','Agencia_ID']],
on = ['Cliente_ID','Producto_ID','Agencia_ID'],how = 'left')
return dataset_final
pre_product = pd.read_csv('preprocessed_products.csv',index_col = 0)
pre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce')
pre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce')
pre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce')
def add_product(dataset):
dataset = pd.merge(dataset,pre_product[['ID','weight','weight_per_piece','pieces']],
left_on = 'Producto_ID',right_on = 'ID',how = 'left')
return dataset
"""
Explanation: To predict week 8 (one week ahead), use data from weeks 3, 4, 5, 6, 7.
To predict week 9 (two weeks ahead), also use data from weeks 3, 4, 5, 6, 7.
End of explanation
"""
train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy()
train_pivot_34567_to_9 = pivot_train_zero.loc[(pivot_train['Sem9'].notnull()),:].copy()
train_pivot_34567_to_9 = categorical_useful(train_34567,train_pivot_34567_to_9)
del train_34567
gc.collect()
train_pivot_34567_to_9 = define_time_features(train_pivot_34567_to_9, to_predict = 't_plus_2' , t_0 = 9)
train_pivot_34567_to_9 = lin_regr_features(train_pivot_34567_to_9,to_predict ='t_plus_2',
semanas_numbers = [3,4,5,6,7],t_0 = 7)  # t_0 + 2 = target week 9, matching the other t_plus_2 calls
train_pivot_34567_to_9['target'] = train_pivot_34567_to_9['Sem9']
train_pivot_34567_to_9.drop(['Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_34567_to_9[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1)
train_pivot_34567_to_9.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True)
train_pivot_34567_to_9 = pd.concat([train_pivot_34567_to_9,train_pivot_cum_sum],axis =1)
train_pivot_34567_to_9 = train_pivot_34567_to_9.rename(columns={'Sem3': 't_m_6_cum',
'Sem4': 't_m_5_cum','Sem5': 't_m_4_cum',
'Sem6': 't_m_3_cum','Sem7': 't_m_2_cum'})
# add geo_info
train_pivot_34567_to_9 = add_pro_info(train_pivot_34567_to_9)
#add product info
train_pivot_34567_to_9 = add_product(train_pivot_34567_to_9)
train_pivot_34567_to_9.drop(['ID'],axis = 1,inplace = True)
gc.collect()
train_pivot_34567_to_9.head()
train_pivot_34567_to_9.columns.values
len(train_pivot_34567_to_9.columns.values)
train_pivot_34567_to_9.to_csv('train_pivot_34567_to_9.csv')
train_pivot_34567_to_9 = pd.read_csv('train_pivot_34567_to_9.csv',index_col = 0)
"""
Explanation: Data for predicting week 9 from weeks 3-7 (two weeks ahead).
End of explanation
"""
pivot_test.head()
pivot_test_week11 = pivot_test.loc[pivot_test['sem10_sem11'] == 11]
pivot_test_week11.reset_index(drop=True,inplace = True)
pivot_test_week11 = pivot_test_week11.fillna(0)
pivot_test_week11.head()
pivot_test_week11.shape
train_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy()
train_pivot_56789_to_11 = pivot_test_week11.copy()
train_pivot_56789_to_11 = categorical_useful(train_56789,train_pivot_56789_to_11)
del train_56789
gc.collect()
train_pivot_56789_to_11 = define_time_features(train_pivot_56789_to_11, to_predict = 't_plus_2' , t_0 = 11)
train_pivot_56789_to_11 = lin_regr_features(train_pivot_56789_to_11,to_predict ='t_plus_2' ,
semanas_numbers = [5,6,7,8,9],t_0 = 9)
train_pivot_56789_to_11.drop(['Sem3','Sem4'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_56789_to_11[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)
train_pivot_56789_to_11.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)
train_pivot_56789_to_11 = pd.concat([train_pivot_56789_to_11,train_pivot_cum_sum],axis =1)
train_pivot_56789_to_11 = train_pivot_56789_to_11.rename(columns={'Sem5': 't_m_6_cum',
'Sem6': 't_m_5_cum','Sem7': 't_m_4_cum',
'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'})
# add product_info
train_pivot_56789_to_11 = add_pro_info(train_pivot_56789_to_11)
#
train_pivot_56789_to_11 = add_product(train_pivot_56789_to_11)
train_pivot_56789_to_11.drop(['ID'],axis =1,inplace = True)
for col in train_pivot_56789_to_11.columns.values:
train_pivot_56789_to_11[col] = train_pivot_56789_to_11[col].astype(np.float32)
train_pivot_56789_to_11.head()
train_pivot_56789_to_11.columns.values
train_pivot_56789_to_11.shape
new_feature = ['id', 'ruta_freq', 'clien_freq', 'agen_freq',
'prod_freq', 'agen_for_log_de', 'ruta_for_log_de',
'cliente_for_log_de', 'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_6', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't2_min_t6', 't3_min_t6',
't4_min_t6', 't5_min_t6', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
't_m_6_cum', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',
'NombreCliente', 'weight', 'weight_per_piece', 'pieces']
len(new_feature)
train_pivot_56789_to_11 = train_pivot_56789_to_11[new_feature]
train_pivot_56789_to_11.head()
train_pivot_56789_to_11['id'] = train_pivot_56789_to_11['id'].astype(int)
train_pivot_56789_to_11.head()
train_pivot_56789_to_11.to_csv('train_pivot_56789_to_11_private.csv',index = False)
"""
Explanation: Test data for the private leaderboard, week 11.
End of explanation
"""
pivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10]
pivot_test_week10.reset_index(drop=True,inplace = True)
pivot_test_week10 = pivot_test_week10.fillna(0)
pivot_test_week10.head()
train_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy()
train_pivot_45678_to_10 = pivot_test_week10.copy()
train_pivot_45678_to_10 = categorical_useful(train_45678,train_pivot_45678_to_10)
del train_45678
gc.collect()
train_pivot_45678_to_10 = define_time_features(train_pivot_45678_to_10, to_predict = 't_plus_2' , t_0 = 10)
train_pivot_45678_to_10 = lin_regr_features(train_pivot_45678_to_10,to_predict ='t_plus_2' ,
semanas_numbers = [4,5,6,7,8],t_0 = 8)
train_pivot_45678_to_10.drop(['Sem3','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_45678_to_10[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1)
train_pivot_45678_to_10.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True)
train_pivot_45678_to_10 = pd.concat([train_pivot_45678_to_10,train_pivot_cum_sum],axis =1)
train_pivot_45678_to_10 = train_pivot_45678_to_10.rename(columns={'Sem4': 't_m_6_cum',
'Sem5': 't_m_5_cum','Sem6': 't_m_4_cum',
'Sem7': 't_m_3_cum','Sem8': 't_m_2_cum'})
# add product_info
train_pivot_45678_to_10 = add_pro_info(train_pivot_45678_to_10)
#
train_pivot_45678_to_10 = add_product(train_pivot_45678_to_10)
train_pivot_45678_to_10.drop(['ID'],axis =1,inplace = True)
for col in train_pivot_45678_to_10.columns.values:
train_pivot_45678_to_10[col] = train_pivot_45678_to_10[col].astype(np.float32)
train_pivot_45678_to_10.head()
train_pivot_45678_to_10.columns.values
train_pivot_45678_to_10 = train_pivot_45678_to_10[new_feature]
train_pivot_45678_to_10['id'] = train_pivot_45678_to_10['id'].astype(int)
train_pivot_45678_to_10.head()
train_pivot_45678_to_10.to_pickle('validation_45678_10.pickle')
"""
Explanation: Two weeks ahead: weeks 4-8 to predict week 10.
End of explanation
"""
train_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy()
train_pivot_45678_to_9 = pivot_train_zero.loc[(pivot_train['Sem9'].notnull()),:].copy()
train_pivot_45678_to_9 = categorical_useful(train_45678,train_pivot_45678_to_9)
train_pivot_45678_to_9 = define_time_features(train_pivot_45678_to_9, to_predict = 't_plus_1' , t_0 = 9)
del train_45678
gc.collect()
train_pivot_45678_to_9 = lin_regr_features(train_pivot_45678_to_9,to_predict ='t_plus_1',
semanas_numbers = [4,5,6,7,8],t_0 = 8)
train_pivot_45678_to_9['target'] = train_pivot_45678_to_9['Sem9']
train_pivot_45678_to_9.drop(['Sem3','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_45678_to_9[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1)
train_pivot_45678_to_9.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True)
train_pivot_45678_to_9 = pd.concat([train_pivot_45678_to_9,train_pivot_cum_sum],axis =1,copy = False)
train_pivot_45678_to_9 = train_pivot_45678_to_9.rename(columns={'Sem4': 't_m_5_cum',
'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum','Sem8': 't_m_1_cum'})
# add geo_info
train_pivot_45678_to_9 = add_pro_info(train_pivot_45678_to_9)
#add product info
train_pivot_45678_to_9 = add_product(train_pivot_45678_to_9)
train_pivot_45678_to_9.drop(['ID'],axis = 1,inplace = True)
for col in train_pivot_45678_to_9.columns.values:
train_pivot_45678_to_9[col] = train_pivot_45678_to_9[col].astype(np.float32)
gc.collect()
train_pivot_45678_to_9.head()
train_pivot_45678_to_9.columns.values
train_pivot_45678_to_9 = train_pivot_45678_to_9[['ruta_freq', 'clien_freq', 'agen_freq', 'prod_freq',
'agen_for_log_de', 'ruta_for_log_de', 'cliente_for_log_de',
'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',
't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
'target', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',
't_m_1_cum', 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]
train_pivot_45678_to_9.shape
train_pivot_45678_to_9.to_csv('train_pivot_45678_to_9_whole_zero.csv')
# train_pivot_45678_to_9_old = pd.read_csv('train_pivot_45678_to_9.csv',index_col = 0)
sum(train_pivot_45678_to_9['target'].isnull())
"""
Explanation: Data for predicting weeks 8 and 9 one week ahead.
train_45678: 8 + 1 = 9
End of explanation
"""
train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy()
train_pivot_34567_to_8 = pivot_train_zero.loc[(pivot_train['Sem8'].notnull()),:].copy()
train_pivot_34567_to_8 = categorical_useful(train_34567,train_pivot_34567_to_8)
train_pivot_34567_to_8 = define_time_features(train_pivot_34567_to_8, to_predict = 't_plus_1' , t_0 = 8)
del train_34567
gc.collect()
train_pivot_34567_to_8 = lin_regr_features(train_pivot_34567_to_8,to_predict = 't_plus_1',
semanas_numbers = [3,4,5,6,7],t_0 = 7)
train_pivot_34567_to_8['target'] = train_pivot_34567_to_8['Sem8']
train_pivot_34567_to_8.drop(['Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_34567_to_8[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1)
train_pivot_34567_to_8.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True)
train_pivot_34567_to_8 = pd.concat([train_pivot_34567_to_8,train_pivot_cum_sum],axis =1)
train_pivot_34567_to_8 = train_pivot_34567_to_8.rename(columns={'Sem3': 't_m_5_cum','Sem4': 't_m_4_cum',
'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum',
'Sem7': 't_m_1_cum'})
# add product_info
train_pivot_34567_to_8 = add_pro_info(train_pivot_34567_to_8)
#add product
train_pivot_34567_to_8 = add_product(train_pivot_34567_to_8)
train_pivot_34567_to_8.drop(['ID'],axis = 1,inplace = True)
for col in train_pivot_34567_to_8.columns.values:
train_pivot_34567_to_8[col] = train_pivot_34567_to_8[col].astype(np.float32)
gc.collect()
train_pivot_34567_to_8.head()
train_pivot_34567_to_8.shape
train_pivot_34567_to_8.columns.values
train_pivot_34567_to_8.to_csv('train_pivot_34567_to_8.csv')
train_pivot_34567_to_8 = pd.read_csv('train_pivot_34567_to_8.csv',index_col = 0)
gc.collect()
"""
Explanation: train_34567 7+1 = 8
End of explanation
"""
train_pivot_xgb_time1 = pd.concat([train_pivot_45678_to_9, train_pivot_34567_to_8],axis = 0,copy = False)
train_pivot_xgb_time1 = train_pivot_xgb_time1[['ruta_freq', 'clien_freq', 'agen_freq', 'prod_freq',
'agen_for_log_de', 'ruta_for_log_de', 'cliente_for_log_de',
'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',
't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
'target', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',
't_m_1_cum', 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]
train_pivot_xgb_time1.columns.values
train_pivot_xgb_time1.shape
np.sum(train_pivot_xgb_time1.memory_usage())/(1024**3)
train_pivot_xgb_time1.to_csv('train_pivot_xgb_time1_44fea_zero.csv',index = False)
train_pivot_xgb_time1.to_csv('train_pivot_xgb_time1.csv')
del train_pivot_xgb_time1
del train_pivot_45678_to_9
del train_pivot_34567_to_8
gc.collect()
"""
Explanation: Concatenate train_pivot_45678_to_9 and train_pivot_34567_to_8 to form the t_plus_1 training set; the training data is now complete.
End of explanation
"""
pivot_test.head()
pivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10]
pivot_test_week10.reset_index(drop=True,inplace = True)
pivot_test_week10 = pivot_test_week10.fillna(0)
pivot_test_week10.head()
pivot_test_week10.shape
train_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy()
train_pivot_56789_to_10 = pivot_test_week10.copy()
train_pivot_56789_to_10 = categorical_useful(train_56789,train_pivot_56789_to_10)
del train_56789
gc.collect()
train_pivot_56789_to_10 = define_time_features(train_pivot_56789_to_10, to_predict = 't_plus_1' , t_0 = 10)
train_pivot_56789_to_10 = lin_regr_features(train_pivot_56789_to_10,to_predict ='t_plus_1' ,
semanas_numbers = [5,6,7,8,9],t_0 = 9)
train_pivot_56789_to_10.drop(['Sem3','Sem4'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_56789_to_10[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)
train_pivot_56789_to_10.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)
train_pivot_56789_to_10 = pd.concat([train_pivot_56789_to_10,train_pivot_cum_sum],axis =1)
train_pivot_56789_to_10 = train_pivot_56789_to_10.rename(columns={'Sem5': 't_m_5_cum',
'Sem6': 't_m_4_cum','Sem7': 't_m_3_cum',
'Sem8': 't_m_2_cum','Sem9': 't_m_1_cum'})
# add product_info
train_pivot_56789_to_10 = add_pro_info(train_pivot_56789_to_10)
#
train_pivot_56789_to_10 = add_product(train_pivot_56789_to_10)
train_pivot_56789_to_10.drop(['ID'],axis =1,inplace = True)
for col in train_pivot_56789_to_10.columns.values:
train_pivot_56789_to_10[col] = train_pivot_56789_to_10[col].astype(np.float32)
train_pivot_56789_to_10.head()
train_pivot_56789_to_10 = train_pivot_56789_to_10[['id','ruta_freq', 'clien_freq', 'agen_freq',
'prod_freq', 'agen_for_log_de', 'ruta_for_log_de',
'cliente_for_log_de', 'producto_for_log_de', 'agen_ruta_for_log_de',
'agen_cliente_for_log_de', 'agen_producto_for_log_de',
'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',
'cliente_producto_for_log_de', 'cliente_for_log_sum',
'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',
't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',
't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',
't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum', 't_m_1_cum',
'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]
train_pivot_56789_to_10.head()
train_pivot_56789_to_10.shape
len(train_pivot_56789_to_10.columns.values)
train_pivot_56789_to_10.to_pickle('train_pivot_56789_to_10_44fea_zero.pickle')
"""
Explanation: Prepare the test data: for week 10 we use weeks 5, 6, 7, 8, 9.
End of explanation
"""
train_3456 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6]), :].copy()
train_pivot_3456_to_8 = pivot_train.loc[(pivot_train['Sem8'].notnull()),:].copy()
train_pivot_3456_to_8 = categorical_useful(train_3456,train_pivot_3456_to_8)
del train_3456
gc.collect()
train_pivot_3456_to_8 = define_time_features(train_pivot_3456_to_8, to_predict = 't_plus_2' , t_0 = 8)
#notice that the t_0 means different
train_pivot_3456_to_8 = lin_regr_features(train_pivot_3456_to_8,to_predict = 't_plus_2', semanas_numbers = [3,4,5,6],t_0 = 6)
train_pivot_3456_to_8['target'] = train_pivot_3456_to_8['Sem8']
train_pivot_3456_to_8.drop(['Sem7','Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_3456_to_8[['Sem3','Sem4','Sem5','Sem6']].cumsum(axis = 1)
train_pivot_3456_to_8.drop(['Sem3','Sem4','Sem5','Sem6'],axis =1,inplace = True)
train_pivot_3456_to_8 = pd.concat([train_pivot_3456_to_8,train_pivot_cum_sum],axis =1)
train_pivot_3456_to_8 = train_pivot_3456_to_8.rename(columns={'Sem4': 't_m_4_cum',
'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum', 'Sem3': 't_m_5_cum'})
# add product_info
train_pivot_3456_to_8 = add_pro_info(train_pivot_3456_to_8)
train_pivot_3456_to_8 = add_product(train_pivot_3456_to_8)
train_pivot_3456_to_8.drop(['ID'],axis =1,inplace = True)
train_pivot_3456_to_8.head()
train_pivot_3456_to_8.columns.values
train_pivot_3456_to_8.to_csv('train_pivot_3456_to_8.csv')
"""
Explanation: Begin preparing the prediction for week 11.
train_3456: 6 + 2 = 8
End of explanation
"""
train_4567 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7]), :].copy()
train_pivot_4567_to_9 = pivot_train.loc[(pivot_train['Sem9'].notnull()),:].copy()
train_pivot_4567_to_9 = categorical_useful(train_4567,train_pivot_4567_to_9)
del train_4567
gc.collect()
train_pivot_4567_to_9 = define_time_features(train_pivot_4567_to_9, to_predict = 't_plus_2' , t_0 = 9)
#notice that the t_0 means different
train_pivot_4567_to_9 = lin_regr_features(train_pivot_4567_to_9,to_predict = 't_plus_2',
semanas_numbers = [4,5,6,7],t_0 = 7)
train_pivot_4567_to_9['target'] = train_pivot_4567_to_9['Sem9']
train_pivot_4567_to_9.drop(['Sem3','Sem8','Sem9'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_4567_to_9[['Sem7','Sem4','Sem5','Sem6']].cumsum(axis = 1)
train_pivot_4567_to_9.drop(['Sem7','Sem4','Sem5','Sem6'],axis =1,inplace = True)
train_pivot_4567_to_9 = pd.concat([train_pivot_4567_to_9,train_pivot_cum_sum],axis =1)
train_pivot_4567_to_9 = train_pivot_4567_to_9.rename(columns={'Sem4': 't_m_5_cum',
'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum'})
# add product_info
train_pivot_4567_to_9 = add_pro_info(train_pivot_4567_to_9)
train_pivot_4567_to_9 = add_product(train_pivot_4567_to_9)
train_pivot_4567_to_9.drop(['ID'],axis =1,inplace = True)
train_pivot_4567_to_9.head()
train_pivot_4567_to_9.columns.values
train_pivot_4567_to_9.to_csv('train_pivot_4567_to_9.csv')
"""
Explanation: train_4567 for 7 + 2 = 9
End of explanation
"""
train_pivot_xgb_time2 = pd.concat([train_pivot_3456_to_8, train_pivot_4567_to_9],axis = 0,copy = False)
train_pivot_xgb_time2.columns.values
train_pivot_xgb_time2.shape
train_pivot_xgb_time2.to_csv('train_pivot_xgb_time2_38fea.csv')
train_pivot_xgb_time2 = pd.read_csv('train_pivot_xgb_time2.csv',index_col = 0)
train_pivot_xgb_time2.head()
del train_pivot_3456_to_8
del train_pivot_4567_to_9
del train_pivot_xgb_time2
del train_pivot_34567_to_8
del train_pivot_45678_to_9
del train_pivot_xgb_time1
gc.collect()
"""
Explanation: Concatenate the two-weeks-ahead training sets.
End of explanation
"""
pivot_test_week11 = pivot_test_new.loc[pivot_test_new['Semana'] == 11]
pivot_test_week11.reset_index(drop=True,inplace = True)
pivot_test_week11.head()
pivot_test_week11.shape
train_6789 = train_dataset.loc[train_dataset['Semana'].isin([6,7,8,9]), :].copy()
train_pivot_6789_to_11 = pivot_test_week11.copy()
train_pivot_6789_to_11 = categorical_useful(train_6789,train_pivot_6789_to_11)
del train_6789
gc.collect()
train_pivot_6789_to_11 = define_time_features(train_pivot_6789_to_11, to_predict = 't_plus_2' , t_0 = 11)
train_pivot_6789_to_11 = lin_regr_features(train_pivot_6789_to_11,to_predict ='t_plus_2' ,
semanas_numbers = [6,7,8,9],t_0 = 9)
train_pivot_6789_to_11.drop(['Sem3','Sem4','Sem5'],axis =1,inplace = True)
#add cum_sum
train_pivot_cum_sum = train_pivot_6789_to_11[['Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)
train_pivot_6789_to_11.drop(['Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)
train_pivot_6789_to_11 = pd.concat([train_pivot_6789_to_11,train_pivot_cum_sum],axis =1)
train_pivot_6789_to_11 = train_pivot_6789_to_11.rename(columns={'Sem6': 't_m_5_cum',
'Sem7': 't_m_4_cum', 'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'})
# add product_info
train_pivot_6789_to_11 = add_pro_info(train_pivot_6789_to_11)
train_pivot_6789_to_11 = add_product(train_pivot_6789_to_11)
train_pivot_6789_to_11.drop(['ID'],axis = 1,inplace = True)
train_pivot_6789_to_11.head()
train_pivot_6789_to_11.shape
train_pivot_6789_to_11.to_pickle('train_pivot_6789_to_11_new.pickle')
"""
Explanation: For the week 11 test data, we use weeks 6, 7, 8, 9.
End of explanation
"""
% time pivot_train_categorical_useful = categorical_useful(train_dataset, pivot_train)
pivot_train_categorical_useful_train.to_csv('pivot_train_categorical_useful_with_nan.csv')
pivot_train_categorical_useful_train = pd.read_csv('pivot_train_categorical_useful_with_nan.csv',index_col = 0)
pivot_train_categorical_useful_train.head()
"""
Explanation: over
End of explanation
"""
pivot_train_categorical_useful.head()
pivot_train_categorical_useful_time = define_time_features(pivot_train_categorical_useful,
to_predict = 't_plus_1' , t_0 = 8)
pivot_train_categorical_useful_time.head()
pivot_train_categorical_useful_time.columns
"""
Explanation: Create the time (lag) features.
End of explanation
"""
# Linear regression features
pivot_train_categorical_useful_time_LR = lin_regr_features(pivot_train_categorical_useful_time, to_predict = 't_plus_1', semanas_numbers = [3,4,5,6,7], t_0 = 7)  # to_predict and t_0 assumed for the one-week-ahead case
pivot_train_categorical_useful_time_LR.head()
pivot_train_categorical_useful_time_LR.columns
pivot_train_categorical_useful_time_LR.to_csv('pivot_train_categorical_useful_time_LR.csv')
pivot_train_categorical_useful_time_LR = pd.read_csv('pivot_train_categorical_useful_time_LR.csv',index_col = 0)
pivot_train_categorical_useful_time_LR.head()
"""
Explanation: Fit per-product linear-regression trend features on the weekly mean log demand.
End of explanation
"""
# pivot_train_canal = pd.get_dummies(pivot_train_categorical_useful_train['Canal_ID'])
# pivot_train_categorical_useful_train = pivot_train_categorical_useful_train.join(pivot_train_canal)
# pivot_train_categorical_useful_train.head()
"""
Explanation: add dummy feature
End of explanation
"""
%ls
pre_product = pd.read_csv('preprocessed_products.csv',index_col = 0)
pre_product.head()
pre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce')
pre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce')
pre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce')
pivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR,
pre_product[['ID','weight','weight_per_piece']],
left_on = 'Producto_ID',right_on = 'ID',how = 'left')
pivot_train_categorical_useful_time_LR_weight.head()
pivot_train_categorical_useful_time_LR_weight.to_csv('pivot_train_categorical_useful_time_LR_weight.csv')
pivot_train_categorical_useful_time_LR_weight = pd.read_csv('pivot_train_categorical_useful_time_LR_weight.csv',index_col = 0)
pivot_train_categorical_useful_time_LR_weight.head()
"""
Explanation: add product feature
End of explanation
"""
%cd '/media/siyuan/0009E198000CD19B/bimbo/origin'
%ls
cliente_tabla = pd.read_csv('cliente_tabla.csv')
town_state = pd.read_csv('town_state.csv')
town_state['town_id'] = town_state['Town'].str.split()
town_state['town_id'] = town_state['Town'].str.split(expand = True)
train_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']]
cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )
cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )
cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()
cliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000)
cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']],
cliente_per_town_count,on = 'town_id',how = 'left')
pivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight,
cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']],
on = ['Cliente_ID','Producto_ID'],how = 'left')
cliente_tabla.head()
town_state.head()
town_state['town_id'] = town_state['Town'].str.split()
town_state['town_id'] = town_state['Town'].str.split(expand = True)
town_state.head()
pivot_train_categorical_useful_time_LR_weight.columns.values
train_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']]
cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )
cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )
cliente_per_town.head()
cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()
cliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000)
cliente_per_town_count.head()
cliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']],
cliente_per_town_count,on = 'town_id',how = 'left')
cliente_per_town_count_final.head()
pivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight,
cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']],
on = ['Cliente_ID','Producto_ID'],how = 'left')
pivot_train_categorical_useful_time_LR_weight_town.head()
pivot_train_categorical_useful_time_LR_weight_town.columns.values
"""
Explanation: add town feature
End of explanation
"""
train_pivot_xgb_time1.columns.values
train_pivot_xgb_time1 = train_pivot_xgb_time1.drop(['Cliente_ID','Producto_ID','Agencia_ID',
'Ruta_SAK','Canal_ID'],axis = 1)
pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem8'].notnull()]
# pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem9'].notnull()]
pivot_train_categorical_useful_train_time_no_nan_sample = pivot_train_categorical_useful_train_time_no_nan.sample(1000000)
train_feature = pivot_train_categorical_useful_train_time_no_nan_sample.drop(['Sem8','Sem9'],axis = 1)
train_label = pivot_train_categorical_useful_train_time_no_nan_sample[['Sem8','Sem9']]
#seperate train and test data
# datasource: sparse_week_Agencia_Canal_Ruta_normalized_csr label:train_label
%time train_set, valid_set, train_labels, valid_labels = train_test_split(train_feature,\
train_label, test_size=0.10)
# dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=NaN)
dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=np.nan)
param = {'booster':'gbtree',
'nthread': 7,
'max_depth':6,
'eta':0.2,
'silent':0,
'subsample':0.7,
'objective':'reg:linear',
'eval_metric':'rmse',
'colsample_bytree':0.7}
# param = {'eta':0.1, 'eval_metric':'rmse','nthread': 8}
# evallist = [(dvalid,'eval'), (dtrain,'train')]
num_round = 1000
# plst = param.items()
# bst = xgb.train( plst, dtrain, num_round, evallist )
cvresult = xgb.cv(param, dtrain, num_round, nfold=5,show_progress=True,show_stdv=False,
seed = 0, early_stopping_rounds=10)
print(cvresult.tail())
"""
Explanation: Begin XGBoost training: cross-validate with early stopping.
End of explanation
"""
# xgb.plot_importance(cvresult)
"""
Explanation: for 1 week later
cv rmse 0.451181 with dummy canal, time regr,
cv rmse 0.450972 without dummy canal, time regr,
cv rmse 0.4485676 without dummy canal, time regr, producto info
cv rmse 0.4487434 without dummy canal, time regr, producto info, cliente_per_town
for 2 week later
cv rmse 0.4513236 without dummy canal, time regr, producto info
End of explanation
"""
|
martibayoalemany/Algorithms | stats/Java sorting.ipynb | mit | # Using strip to filter the values in the txt
import pandas as pd
import numpy as np
def read_stats(data_file):
data = pd.read_csv(data_file, sep="|")
data.columns = [ x.strip() for x in data.columns]
# Filter integer indexes
str_idxs = [idx for idx,dtype in zip(range(0,len(data.dtypes)), data.dtypes) if dtype != 'int64' ]
# Strip fields
for i in str_idxs:
key = data.columns[i]
if data[key].dtype == object:
data.loc[:,key] = [ x.strip() for x in data.loc[:, key]]
return data
data = read_stats("java_sorting_127.0.1.1_Di_1._Aug_07:39:03_UTC_2017.csv")
# data.to_csv("java_sorting_127.0.1.1_Di_1._Aug_07:39:03_UTC_2017.csv")
[x for x in zip(range(0, len(data.columns)),data.columns)]
import plotly
import plotly.plotly as py
import plotly.figure_factory as ff
from plotly.graph_objs import *
#plotly.offline.init_notebook_mode()
def filter_by(data, name, value):
data_length = len(data)
return [idx for idx in range(0, data_length) if data.loc[idx,name] == value]
# using ~/.plotly/.credentials
# plotly.tools.set_credentials_file(username="", api_key="")
algorithms = set(data.loc[:, 'name'])
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
"""
Explanation: Initial code to strip the fields with Regex
```
import re
import pandas as pd
data = pd.read_csv("java_sorting_24_7_17.txt", sep="|")
def filter_data(data):
data.columns= [re.sub(r'\s+(\S+)\s+', r'\1', x) for x in data.columns]
for i in range(1, len(data.columns)):
try:
data.iloc[:,i] = data.iloc[:,i].apply(lambda x: re.sub(r'\s+(\S+)\s+', r'\1', x))
except Exception as e:
print(e)
data.loc[:, 'shuffle'] = data.loc[:, 'shuffle'].apply(lambda x: re.sub(r'\/(\d+)', r'\1',x))
return data
data = filter_data(data)
```
End of explanation
"""
data2 = read_stats("java_sorting_127.0.1.1_Fr_4._Aug_23:59:33_UTC_2017.txt")
algorithms = set(data2.loc[:, 'name'])
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
"""
Explanation: The same stats as before but with 9 million elements.
The merge sort algorithm we developed appears to grow slightly slower than linearly in this range; we could not observe the worst-case O(n log(n)) behaviour in that run.
The worst case of our (single-threaded) merge sort is better than the worst case of the Java platform's Arrays.sort; however, the stats are not independent because the runs were not isolated.
We loop through all sorting algorithms, so the garbage collection triggered by the previous algorithm
might affect the performance of the next one. In particular, the garbage collection from merge sort might change the performance of Arrays.sort.
End of explanation
"""
data2.loc[:,'name'] =[x.strip() for x in data2.loc[:,'name']]
algorithms = set(data2.loc[:, 'name'])
algorithms
import plotly.graph_objs as go
algorithms.remove('Linked Hashmap')
def get_bar(data, algorithm_name):
idxs = filter_by(data, 'name', algorithm_name)
X1 = data2.loc[idxs, 'elements']
Y1 = data2.loc[idxs, 'duration_ms']
return go.Bar(x=X1, y=Y1, name=algorithm_name)
plot_data = [get_bar(data2, name) for name in algorithms]
layout = go.Layout(title= 'Performance comparison',
xaxis=dict(title='Elements (32 bits / -2,147,483,648 to +2,147,483,647)'),
yaxis=dict(title='Time (ms)'),
barmode='stack')
fig = go.Figure(data=plot_data, layout=layout)
py.iplot(fig)
"""
Explanation: A better visualization: all algorithms in a single stacked bar chart.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ec-earth-consortium/cmip6/models/ec-earth3-cc/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-cc', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-CC
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe what the snow albedo is a function of*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
metpy/MetPy | v0.11/_downloads/8c91fa5ab51e12860cfa1e679eaa746d/xarray_tutorial.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.testing import get_test_data
from metpy.units import units
"""
Explanation: xarray with MetPy Tutorial
xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional
labeled arrays and datasets following the Common Data Model. While the process of integrating
xarray features into MetPy is ongoing, this tutorial demonstrates how xarray can be used
within the current version of MetPy. MetPy's integration primarily works through accessors
which allow simplified projection handling and coordinate identification. Unit and calculation
support is currently available in a limited fashion, but should be improved in future
versions.
End of explanation
"""
# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))
# View a summary of the Dataset
print(data)
"""
Explanation: Getting Data
While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with model output. Such model
data can be obtained from a THREDDS Data Server using the siphon package, but for this
tutorial, we will use an example subset of GFS data from Hurricane Irma (September 5th,
2017).
End of explanation
"""
# To parse the full dataset, we can call parse_cf without an argument, and assign the returned
# Dataset.
data = data.metpy.parse_cf()
# If we instead want just a single variable, we can pass that variable name to parse_cf and
# it will return just that data variable as a DataArray.
data_var = data.metpy.parse_cf('Temperature_isobaric')
# If we want only a subset of variables, we can pass a list of variable names as well.
data_subset = data.metpy.parse_cf(['u-component_of_wind_isobaric',
'v-component_of_wind_isobaric'])
# To rename variables, supply a dictionary between old and new names to the rename method
data = data.rename({
'Vertical_velocity_pressure_isobaric': 'omega',
'Relative_humidity_isobaric': 'relative_humidity',
'Temperature_isobaric': 'temperature',
'u-component_of_wind_isobaric': 'u',
'v-component_of_wind_isobaric': 'v',
'Geopotential_height_isobaric': 'height'
})
"""
Explanation: Preparing Data
To make use of the data within MetPy, we need to parse the dataset for projection
information following the CF conventions. For this, we use the
data.metpy.parse_cf() method, which will return a new, parsed DataArray or
Dataset.
Additionally, we rename our data variables for easier reference.
End of explanation
"""
data['temperature'].metpy.convert_units('degC')
"""
Explanation: Units
MetPy's DataArray accessor has a unit_array property to obtain a pint.Quantity array
of just the data from the DataArray (metadata is removed) and a convert_units method to
convert the data from one unit to another (keeping it as a DataArray). For now, we'll
just use convert_units to convert our temperature to degC.
End of explanation
"""
# Get multiple coordinates (for example, in just the x and y direction)
x, y = data['temperature'].metpy.coordinates('x', 'y')
# If we want to get just a single coordinate from the coordinates method, we have to use
# tuple unpacking because the coordinates method returns a generator
vertical, = data['temperature'].metpy.coordinates('vertical')
# Or, we can just get a coordinate from the property
time = data['temperature'].metpy.time
# To verify, we can inspect all their names
print([coord.name for coord in (x, y, vertical, time)])
"""
Explanation: Coordinates
You may have noticed how we directly accessed the vertical coordinates above using their
names. However, in general, if we are working with a particular DataArray, we don't have to
worry about that since MetPy is able to parse the coordinates and so obtain a particular
coordinate type directly. There are two ways to do this:
Use the data_var.metpy.coordinates method
Use the data_var.metpy.x, data_var.metpy.y, data_var.metpy.vertical,
data_var.metpy.time properties
The valid coordinate types are:
x
y
vertical
time
(Both approaches and all four types are shown below)
End of explanation
"""
print(data['height'].metpy.sel(vertical=850 * units.hPa))
"""
Explanation: Indexing and Selecting Data
MetPy provides wrappers for the usual xarray indexing and selection routines that can handle
quantities with units. For DataArrays, MetPy also allows using the coordinate axis types
mentioned above as aliases for the coordinates. And so, if we wanted 850 hPa heights,
we would take:
End of explanation
"""
data_crs = data['temperature'].metpy.cartopy_crs
print(data_crs)
"""
Explanation: For full details on xarray indexing/selection, see
xarray's documentation <http://xarray.pydata.org/en/stable/indexing.html>_.
Projections
Getting the cartopy coordinate reference system (CRS) of the projection of a DataArray is as
straightforward as using the data_var.metpy.cartopy_crs property:
End of explanation
"""
data_globe = data['temperature'].metpy.cartopy_globe
print(data_globe)
"""
Explanation: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe
property:
End of explanation
"""
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init)
heights = data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}]
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
"""
Explanation: Calculations
Most of the calculations in metpy.calc will accept DataArrays by converting them
into their corresponding unit arrays. While this may often work without any issues, we must
keep in mind that because the calculations are working with unit arrays and not DataArrays:
The calculations will return unit arrays rather than DataArrays
Broadcasting must be taken care of outside of the calculation, as it would only recognize
dimensions by order, not name
As an example, we calculate geostrophic wind at 500 hPa below:
End of explanation
"""
heights = data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}]
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.grid_deltas_from_dataarray(heights)
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
"""
Explanation: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they
can accept and return xarray objects). Right now, this includes
Derivative functions
first_derivative
second_derivative
gradient
laplacian
Cross-section functions
cross_section_components
normal_component
tangential_component
absolute_momentum
More details can be found by looking at the documentation for the specific function of
interest.
There is also the special case of the helper function, grid_deltas_from_dataarray, which
takes a DataArray input, but returns unit arrays for use in other calculations. We could
rewrite the above geostrophic wind example using this helper function as follows:
End of explanation
"""
# A very simple example of a plot of 500 hPa heights
data['height'].metpy.loc[{'time': time[0], 'vertical': 500. * units.hPa}].plot()
plt.show()
# Let's add a projection and coastlines to it
ax = plt.axes(projection=ccrs.LambertConformal())
data['height'].metpy.loc[{'time': time[0],
'vertical': 500. * units.hPa}].plot(ax=ax, transform=data_crs)
ax.coastlines()
plt.show()
# Or, let's make a full 500 hPa map with heights, temperature, winds, and humidity
# Select the data for this time and level
data_level = data.metpy.loc[{time.name: time[0], vertical.name: 500. * units.hPa}]
# Create the matplotlib figure and axis
fig, ax = plt.subplots(1, 1, figsize=(12, 8), subplot_kw={'projection': data_crs})
# Plot RH as filled contours
rh = ax.contourf(x, y, data_level['relative_humidity'], levels=[70, 80, 90, 100],
colors=['#99ff00', '#00ff00', '#00cc00'])
# Plot wind barbs, but not all of them
wind_slice = slice(5, -5, 5)
ax.barbs(x[wind_slice], y[wind_slice],
data_level['u'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
data_level['v'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
length=6)
# Plot heights and temperature as contours
h_contour = ax.contour(x, y, data_level['height'], colors='k', levels=range(5400, 6000, 60))
h_contour.clabel(fontsize=8, colors='k', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
t_contour = ax.contour(x, y, data_level['temperature'], colors='xkcd:deep blue',
levels=range(-26, 4, 2), alpha=0.8, linestyles='--')
t_contour.clabel(fontsize=8, colors='xkcd:deep blue', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Add geographic features
ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor=cfeature.COLORS['land'])
ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor=cfeature.COLORS['water'])
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='#c7c783', zorder=0)
ax.add_feature(cfeature.LAKES.with_scale('50m'), facecolor=cfeature.COLORS['water'],
edgecolor='#c7c783', zorder=0)
# Set a title and show the plot
ax.set_title('500 hPa Heights (m), Temperature (\u00B0C), Humidity (%) at '
+ time[0].dt.strftime('%Y-%m-%d %H:%MZ').item())
plt.show()
"""
Explanation: Plotting
Like most meteorological data, we want to be able to plot these data. DataArrays can be used
like normal numpy arrays in plotting code, which is the recommended process at the current
point in time, or we can use some of xarray's plotting functionality for quick inspection of
the data.
(More detail beyond the following can be found at xarray's plotting reference
<http://xarray.pydata.org/en/stable/plotting.html>_.)
End of explanation
"""
|
conversationai/conversationai-models | attention-tutorial/Attention_Model_Tutorial.ipynb | apache-2.0 | %load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
import tensorflow as tf
import numpy as np
import time
import os
from sklearn import metrics
from visualize_attention import attentionDisplay
from process_figshare import download_figshare, process_figshare
tf.set_random_seed(1234)
"""
Explanation: Attention Based Classification Tutorial
Recommended time: 30 minutes
Contributors: nthain, martin-gorner
This tutorial provides an introduction to building text classification models in tensorflow that use attention to provide insight into how classification decisions are being made. We will build our tensorflow graph following the Embed - Encode - Attend - Predict paradigm introduced by Matthew Honnibal. For more information about this approach, you can refer to:
Slides: https://goo.gl/BYT7au
Video: https://youtu.be/pzOzmxCR37I
Figure 1 below provides a representation of the full tensorflow graph we will build in this tutorial. The green squares represent RNN cells and the blue trapezoids represent neural networks for computing attention weights which will be discussed in more detail below. We will implement each piece of this model graph in a seperate function. The whole model will then simply be calling all of these functions in turn.
This tutorial was created in collaboration with the Tensorflow without a PhD series. To check out more episodes, tutorials, and codelabs from this series, please visit:
https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd
Imports
End of explanation
"""
download_figshare()
process_figshare()
"""
Explanation: Load & Explore Data
Let's begin by downloading the data from Figshare and cleaning and splitting it for use in training.
End of explanation
"""
SPLITS = ['train', 'dev', 'test']
wiki = {}
for split in SPLITS:
wiki[split] = pd.read_csv('data/wiki_%s.csv' % split)
"""
Explanation: We then load these splits as pandas dataframes.
End of explanation
"""
wiki['train'].head()
"""
Explanation: We display the top few rows of the dataframe to see what we're dealing with. The key columns are 'comment' which contains the text of a comment from a Wikipedia talk page and 'toxicity' which contains the fraction of annotators who found this comment to be toxic. More information about the other fields and how this data was collected can be found on this wiki and research paper.
End of explanation
"""
hparams = {'max_document_length': 60,
'embedding_size': 50,
'rnn_cell_size': 128,
'batch_size': 256,
'attention_size': 32,
'attention_depth': 2}
MAX_LABEL = 2
WORDS_FEATURE = 'words'
NUM_STEPS = 300
"""
Explanation: Hyperparameters
Hyperparameters are used to specify various aspects of our model's architecture. In practice, these are often critical to model performance and are carefully tuned using some type of hyperparameter search. For this tutorial, we will choose a reasonable set of hyperparameters and treat them as fixed.
End of explanation
"""
# Initialize the vocabulary processor
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(hparams['max_document_length'])
def process_inputs(vocab_processor, df, train_label = 'train', test_label = 'test'):
# For simplicity, we call our features x and our outputs y
x_train = df['train'].comment
y_train = df['train'].is_toxic
x_test = df['test'].comment
y_test = df['test'].is_toxic
# Train the vocab_processor from the training set
x_train = vocab_processor.fit_transform(x_train)
# Transform our test set with the vocabulary processor
x_test = vocab_processor.transform(x_test)
# We need these to be np.arrays instead of generators
x_train = np.array(list(x_train))
x_test = np.array(list(x_test))
y_train = np.array(y_train).astype(int)
y_test = np.array(y_test).astype(int)
n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
# Return the transformed data and the number of words
return x_train, y_train, x_test, y_test, n_words
x_train, y_train, x_test, y_test, n_words = process_inputs(vocab_processor, wiki)
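# Illustrative check (not part of the original tutorial): the fitted vocabulary
# processor maps each word of a short example sentence to its integer id, padded or
# truncated to max_document_length.
example_ids = list(vocab_processor.transform(['Thanks for your help editing this.']))[0]
print(example_ids[:10])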
"""
Explanation: Step 0: Text Preprocessing
Before we can build a neural network on comment strings, we first have to complete a number of preprocessing steps. In particular, it is important that we "tokenize" the string, splitting it into an array of tokens. In our case, each token will be a word in the sentence, separated from the others by spaces and punctuation. Many alternative tokenizers exist, some of which use characters as tokens, and others which include punctuation, emojis, or even cleverly handle misspellings.
Once we've tokenized the sentences, each word will be replaced with an integer identifier. This will make the embedding (Step 1) much easier.
Happily, the tensorflow function VocabularyProcessor takes care of both the tokenization and integer mapping. We only have to give it the max_document_length argument, which will determine the length of the output arrays. If sentences are shorter than this length, they will be padded, and if they are longer, they will be trimmed. The VocabularyProcessor is then trained on the training set to build the initial vocabulary and map the words to integers.
End of explanation
"""
def embed(features):
word_vectors = tf.contrib.layers.embed_sequence(
features[WORDS_FEATURE],
vocab_size=n_words,
embed_dim=hparams['embedding_size'])
return word_vectors
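# Hypothetical sketch (not used in this tutorial): initializing the embedding matrix
# from pre-trained vectors such as Word2Vec or GloVe instead of randomly. We assume
# `pretrained` is a numpy array of shape [n_words, embedding_size] aligned with the
# vocabulary ids produced in Step 0.
def embed_pretrained(features, pretrained):
    embedding_matrix = tf.get_variable(
        'embedding_matrix',
        initializer=tf.constant(pretrained, dtype=tf.float32),
        trainable=True)  # set trainable=False to keep the pre-trained vectors frozen
    return tf.nn.embedding_lookup(embedding_matrix, features[WORDS_FEATURE])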
"""
Explanation: Step 1: Embed
Neural networks at their core are a composition of operators from linear algebra and non-linear activation functions. In order to perform these computations on our input sentences, we must first embed them as a vector of numbers. There are two main approaches to perform this embedding:
Pre-trained: It is often beneficial to initialize our embedding matrix using pre-trained embeddings like Word2Vec or GloVe. These embeddings are trained on a huge corpus of text with a general purpose problem so that they incorporate syntactic and semantic properties of the words being embedded and are amenable to transfer learning on new problems. Once initialized, you can optionally train them further for your specific problem by allowing the embedding matrix in the graph to be a trainable variable in our tensorflow graph.
Random: Alternatively, embeddings can be "trained from scratch" by initializing the embedding matrix randomly and then training it like any other parameter in the tensorflow graph.
In this notebook, we will be using a random initialization. To perform this embedding we use the embed_sequence function from the layers package. This will take our input features, which are the arrays of integers we produced in Step 0, and will randomly initialize a matrix to embed them into. The parameters of this matrix will then be trained with the rest of the graph.
End of explanation
"""
def encode(word_vectors):
# Create a Gated Recurrent Unit cell with hidden size of RNN_SIZE.
    # Since the forward and backward RNNs will have different parameters, we instantiate two separate GRUs.
rnn_fw_cell = tf.contrib.rnn.GRUCell(hparams['rnn_cell_size'])
rnn_bw_cell = tf.contrib.rnn.GRUCell(hparams['rnn_cell_size'])
# Create an unrolled Bi-Directional Recurrent Neural Networks to length of
# max_document_length and passes word_list as inputs for each unit.
outputs, _ = tf.nn.bidirectional_dynamic_rnn(rnn_fw_cell,
rnn_bw_cell,
word_vectors,
dtype=tf.float32,
time_major=False)
return outputs
"""
Explanation: Step 2: Encode
A recurrent neural network is a deep learning architecture that is useful for encoding sequential information like sentences. They are built around a single cell which contains one of several standard neural network architectures (e.g. simple RNN, GRU, or LSTM). We will not focus on the details of the architectures, but at each point in time the cell takes in two inputs and produces two outputs. The inputs are the input token for that step in the sequence and some state from the previous steps in the sequence. The outputs produced are the encoded vectors for the current sequence step and a state to pass on to the next step of the sequence.
Figure 2 shows what this looks like for an unrolled RNN. Each cell (represented by a green square) has two input arrows and two output arrrows. Note that all of the green squares represent the same cell and share parameters. One major advantage of this cell replication is that, at inference time, it allows us to deal with arbitrary length input and not be restricted by the input sizes of our training set.
For our model, we will use a bi-directional RNN. This is simply the concatenation of two RNNs, one which processes the sequence from left to right (the "forward" RNN) and one which processes it from right to left (the "backward" RNN). By using both directions, we get a stronger encoding, as each word can be encoded using the context of its neighbors on both sides rather than just a single side. For our cells, we use gated recurrent units (GRUs). Figure 3 gives a visual representation of this.
End of explanation
"""
def attend(inputs, attention_size, attention_depth):
inputs = tf.concat(inputs, axis = 2)
inputs_shape = inputs.shape
sequence_length = inputs_shape[1].value
final_layer_size = inputs_shape[2].value
x = tf.reshape(inputs, [-1, final_layer_size])
for _ in range(attention_depth-1):
x = tf.layers.dense(x, attention_size, activation = tf.nn.relu)
x = tf.layers.dense(x, 1, activation = None)
logits = tf.reshape(x, [-1, sequence_length, 1])
alphas = tf.nn.softmax(logits, dim = 1)
output = tf.reduce_sum(inputs * alphas, 1)
return output, alphas
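# Toy numpy illustration (made-up numbers) of the attend step above: scalar scores for
# each time step are softmax-normalized into attention weights, which then form a
# weighted average of the encoded states.
toy_states = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 time steps x 2 dims
toy_scores = np.array([2.0, 0.5, 1.0])                        # one score per step
toy_alphas = np.exp(toy_scores) / np.sum(np.exp(toy_scores))  # softmax over steps
print(toy_alphas, toy_states.T @ toy_alphas)                  # weights and weighted sum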
"""
Explanation: Step 3: Attend
There are a number of ways to use the encoded states of a recurrent neural network for prediction. One traditional approach is to simply use the final encoded state of the network, as seen in Figure 2. However, this could lose some useful information encoded in the previous steps of the sequence. In order to keep that information, one could instead use an average of the encoded states outputted by the RNN. There is no reason to believe, though, that all of the encoded states of the RNN are equally valuable. Thus, we arrive at the idea of using a weighted sum of these encoded states to make our prediction.
We will call the weights of this weighted sum "attention weights", as we will see below that they correspond to how important our model thinks each token of the sequence is in making a prediction decision. We compute these attention weights simply by building a small fully connected neural network on top of each encoded state. This network will have a single-unit final layer which will correspond to the attention weight we will assign. As for RNNs, the parameters of this network will be the same for each step of the sequence, allowing us to accommodate variable-length inputs. Figure 4 shows us what the graph would look like if we applied attention to a uni-directional RNN.
Again, as our model uses a bi-directional RNN, we first concatenate the hidden states from each RNN before computing the attention weights and applying the weighted sum. Figure 5 below visualizes this step.
End of explanation
"""
def estimator_spec_for_softmax_classification(
logits, labels, mode, alphas):
"""Returns EstimatorSpec instance for softmax classification."""
predicted_classes = tf.argmax(logits, 1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'class': predicted_classes,
'prob': tf.nn.softmax(logits),
'attention': alphas
})
onehot_labels = tf.one_hot(labels, MAX_LABEL, 1, 0)
loss = tf.losses.softmax_cross_entropy(
onehot_labels=onehot_labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode,
loss=loss,
train_op=train_op)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(
labels=labels, predictions=predicted_classes),
'auc': tf.metrics.auc(
labels=labels, predictions=predicted_classes),
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
"""
Explanation: Step 4: Predict
To generate a class prediction about whether a comment is toxic or not, the final part of our tensorflow graph takes the weighted average of hidden states generated in the attention step and uses a fully connected layer with a softmax activation function to generate probability scores for each of our prediction classes. While training, the model will use the cross-entropy loss function to train its parameters.
As we will use the estimator framework to train our model, we write an estimator_spec function to specify how our model is trained and what values to return during the prediction stage. We also specify the evaluation metrics of accuracy and auc, which we will use to evaluate our model in Step 7.
End of explanation
"""
def predict(encoding, labels, mode, alphas):
logits = tf.layers.dense(encoding, MAX_LABEL, activation=None)
return estimator_spec_for_softmax_classification(
logits=logits, labels=labels, mode=mode, alphas=alphas)
"""
Explanation: The predict component of our graph then just takes the output of our attention step, i.e. the weighted average of the bi-RNN hidden layers, and adds one more fully connected layer to compute the logits. These logits are fed into our estimator_spec, which uses a softmax to get the final class probabilities and a softmax_cross_entropy to build a loss function.
End of explanation
"""
def bi_rnn_model(features, labels, mode):
"""RNN model to predict from sequence of words to a class."""
word_vectors = embed(features)
outputs = encode(word_vectors)
encoding, alphas = attend(outputs,
hparams['attention_size'],
hparams['attention_depth'])
return predict(encoding, labels, mode, alphas)
"""
Explanation: Step 5: Complete Model Architecture
We are now ready to put it all together. As you can see from the bi_rnn_model function below, once you have the components for embed, encode, attend, and predict, putting the whole graph together is extremely simple!
End of explanation
"""
current_time = str(int(time.time()))
model_dir = os.path.join('checkpoints', current_time)
classifier = tf.estimator.Estimator(model_fn=bi_rnn_model,
model_dir=model_dir)
"""
Explanation: Step 6: Train Model
We will use the estimator framework to train our model. To define our classifier, we just provide it with the complete model graph (i.e. the bi_rnn_model function) and a directory where the models will be saved.
End of explanation
"""
# Train.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={WORDS_FEATURE: x_train},
y=y_train,
batch_size=hparams['batch_size'],
num_epochs=None,
shuffle=True)
"""
Explanation: The estimator framework also requires us to define an input function. This will take the input data and provide it during model training in batches. We will use the provided numpy_input_fn, which takes numpy arrays as features and labels. We also specify the batch size and whether we want to shuffle the data between epochs.
End of explanation
"""
classifier.train(input_fn=train_input_fn,
steps=NUM_STEPS)
"""
Explanation: Now, it's finally time to train our model! With estimator, this is as easy as calling the train function and specifying how long we'd like to train for.
End of explanation
"""
# Predict.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={WORDS_FEATURE: x_test},
y=y_test,
num_epochs=1,
shuffle=False)
predictions = classifier.predict(input_fn=test_input_fn)
"""
Explanation: Step 7: Predict and Evaluate Model
To evaluate the model, we will use it to predict the values of examples from our test set. Again, we define a numpy_input_fn, for the test data in this case, and then have the classifier run predictions on this input function.
End of explanation
"""
y_predicted = []
alphas_predicted = []
for p in predictions:
y_predicted.append(p['class'])
alphas_predicted.append(p['attention'])
"""
Explanation: These predictions are returned to us as a generator. The code below gives an example of how we can extract the class and attention weights for each prediction.
End of explanation
"""
scores = classifier.evaluate(input_fn=test_input_fn)
print('Accuracy: {0:f}'.format(scores['accuracy']))
print('AUC: {0:f}'.format(scores['auc']))
"""
Explanation: To evaluate our model, we can use the evaluate function provided by estimator to get the accuracy and ROC-AUC scores as we defined them in our estimator_spec.
End of explanation
"""
display = attentionDisplay(vocab_processor, classifier)
display.display_prediction_attention("Fuck off, you idiot.")
display.display_prediction_attention("Thanks for your help editing this.")
display.display_prediction_attention("You're such an asshole. But thanks anyway.")
display.display_prediction_attention("I'm going to shoot you!")
display.display_prediction_attention("Oh shoot. Well alright.")
display.display_prediction_attention("First of all who the fuck died and made you the god.")
display.display_prediction_attention("Gosh darn it!")
display.display_prediction_attention("God damn it!")
display.display_prediction_attention("You're not that smart are you?")
"""
Explanation: Step 8: Display Attention
Now that we have a trained attention-based toxicity model, let's use it to visualize how our model makes its classification decisions. We use the helpful attentionDisplay class from the visualize_attention package. Given any sentence, this class uses our trained classifier to determine whether the sentence is toxic and also returns a representation of the attention weights. In the arrays below, the more red a word is, the more weight the classifier puts on that word's encoding. Try it out on some sentences of your own and see what patterns you can find!
Note: If you are viewing this on Github, the colors in the cells won't display properly. We recommend viewing it locally or with nbviewer to see the correct rendering of the attention weights.
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp | skew_detection/02_covertype_logs_parsing.ipynb | apache-2.0 | !pip install -U -q google-api-python-client
!pip install -U -q pandas
"""
Explanation: Parsing and querying AI Platform Prediction request-response logs in BigQuery
This tutorial shows you how to create a view to parse raw request instances and response predictions logged from AI Platform Prediction to BigQuery.
The tutorial covers the following tasks:
Define dataset metadata.
Generate the CREATE VIEW script that parses the raw data.
Execute the CREATE VIEW script.
Query the view to retrieve the parsed data.
Setup
Install packages and dependencies
End of explanation
"""
PROJECT_ID = "sa-data-validation"
MODEL_NAME = 'covertype_classifier'
VERSION_NAME = 'v1'
BQ_DATASET_NAME = 'prediction_logs'
BQ_TABLE_NAME = 'covertype_classifier_logs'
!gcloud config set project $PROJECT_ID
"""
Explanation: Configure Google Cloud environment settings
End of explanation
"""
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
"""
Explanation: Authenticate your Google Cloud account
This step is required if you run the notebook in Colab.
End of explanation
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import pandas as pd
from google.cloud import bigquery
"""
Explanation: Import libraries
End of explanation
"""
HEADER = ['Elevation', 'Aspect', 'Slope','Horizontal_Distance_To_Hydrology',
'Vertical_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways',
'Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm',
'Horizontal_Distance_To_Fire_Points', 'Wilderness_Area', 'Soil_Type',
'Cover_Type']
TARGET_FEATURE_NAME = 'Cover_Type'
FEATURE_LABELS = ['0', '1', '2', '3', '4', '5', '6']
NUMERIC_FEATURE_NAMES = ['Aspect', 'Elevation', 'Hillshade_3pm',
'Hillshade_9am', 'Hillshade_Noon',
'Horizontal_Distance_To_Fire_Points',
'Horizontal_Distance_To_Hydrology',
'Horizontal_Distance_To_Roadways','Slope',
'Vertical_Distance_To_Hydrology']
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
'Soil_Type': ['2702', '2703', '2704', '2705', '2706', '2717', '3501', '3502',
'4201', '4703', '4704', '4744', '4758', '5101', '6101', '6102',
'6731', '7101', '7102', '7103', '7201', '7202', '7700', '7701',
'7702', '7709', '7710', '7745', '7746', '7755', '7756', '7757',
'7790', '8703', '8707', '8708', '8771', '8772', '8776'],
'Wilderness_Area': ['Cache', 'Commanche', 'Neota', 'Rawah']
}
FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()) + NUMERIC_FEATURE_NAMES
HEADER_DEFAULTS = [[0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ['NA']
for feature_name in HEADER]
NUM_CLASSES = len(FEATURE_LABELS)
"""
Explanation: 1. Define dataset metadata
End of explanation
"""
LABEL_KEY = 'predicted_label'
SCORE_KEY = 'confidence'
SIGNATURE_NAME = 'serving_default'
def _extract_json(column, feature_name):
return "JSON_EXTRACT({}, '$.{}')".format(column, feature_name)
def _replace_brackets(field):
return "REPLACE(REPLACE({}, ']', ''), '[','')".format(field)
def _replace_quotes(field):
return 'REPLACE({}, "\\"","")'.format(field)
def _cast_to_numeric(field):
return "CAST({} AS NUMERIC)".format(field)
def _add_alias(field, feature_name):
return "{} AS {}".format(field, feature_name)
view_name = "vw_"+BQ_TABLE_NAME+"_"+VERSION_NAME
colum_names = FEATURE_NAMES
input_features = ', \r\n '.join(colum_names)
json_features_extraction = []
for feature_name in colum_names:
field = _extract_json('instance', feature_name)
field = _replace_brackets(field)
if feature_name in NUMERIC_FEATURE_NAMES:
field = _cast_to_numeric(field)
else:
field = _replace_quotes(field)
field = _add_alias(field, feature_name)
json_features_extraction.append(field)
json_features_extraction = ', \r\n '.join(json_features_extraction)
json_prediction_extraction = []
for feature_name in [LABEL_KEY, SCORE_KEY]:
field = _extract_json('prediction', feature_name)
field = _replace_brackets(field)
if feature_name == SCORE_KEY:
field = _cast_to_numeric(field)
else:
field = _replace_quotes(field)
field = _add_alias(field, feature_name)
json_prediction_extraction.append(field)
json_prediction_extraction = ', \r\n '.join(json_prediction_extraction)
sql_script = '''
CREATE OR REPLACE VIEW @dataset_name.@view_name
AS
WITH step1
AS
(
SELECT
model,
model_version,
time,
SPLIT(JSON_EXTRACT(raw_data, '$.instances'), '}],[{') instance_list,
SPLIT(JSON_EXTRACT(raw_prediction, '$.predictions'), '}],[{') as prediction_list
FROM
`@project.@dataset_name.@table_name`
WHERE
model = '@model_name' AND
model_version = '@version'
),
step2
AS
(
SELECT
model,
model_version,
time,
REPLACE(REPLACE(instance, '[{', '{'),'}]', '}') AS instance,
REPLACE(REPLACE(prediction, '[{', '{'),'}]', '}') AS prediction,
FROM step1
JOIN UNNEST(step1.instance_list) AS instance
WITH OFFSET AS f1
JOIN UNNEST(step1.prediction_list) AS prediction
WITH OFFSET AS f2
ON f1=f2
),
step3 AS
(
SELECT
model,
model_version,
time,
@json_features_extraction,
@json_prediction_extraction
FROM step2
)
SELECT *
FROM step3
'''
sql_script = sql_script.replace("@project", PROJECT_ID)
sql_script = sql_script.replace("@dataset_name", BQ_DATASET_NAME)
sql_script = sql_script.replace("@table_name", BQ_TABLE_NAME)
sql_script = sql_script.replace("@view_name", view_name)
sql_script = sql_script.replace("@model_name", MODEL_NAME)
sql_script = sql_script.replace("@version", VERSION_NAME)
sql_script = sql_script.replace("@input_features", input_features)
sql_script = sql_script.replace("@json_features_extraction", json_features_extraction)
sql_script = sql_script.replace("@json_prediction_extraction", json_prediction_extraction)
"""
Explanation: 2. Generate the CREATE VIEW script
End of explanation
"""
print(sql_script)
"""
Explanation: Optionally, print the generated script:
End of explanation
"""
client = bigquery.Client(PROJECT_ID)
client.query(query=sql_script).result()  # wait for the CREATE OR REPLACE VIEW job to finish
print("View was created or replaced.")
"""
Explanation: 3. Execute the CREATE VIEW script
End of explanation
"""
query = '''
SELECT * FROM
`{}.{}`
LIMIT {}
'''.format(BQ_DATASET_NAME, view_name, 3)
pd.io.gbq.read_gbq(
query, project_id=PROJECT_ID).T
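# A further illustrative query against the view (assumes the view was created above):
# count logged instances per predicted class label.
count_query = '''
SELECT predicted_label, COUNT(*) AS n
FROM `{}.{}`
GROUP BY predicted_label
'''.format(BQ_DATASET_NAME, view_name)
pd.io.gbq.read_gbq(count_query, project_id=PROJECT_ID)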
"""
Explanation: 4. Query the view
End of explanation
"""
|
google/telluride_decoding | Telluride_Decoding_Toolbox_TF2_Demo.ipynb | apache-2.0 | #@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/telluride_decoding/blob/master/Telluride_Decoding_Toolbox_TF2_Demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Telluride Decoding Toolbox, TF2 Version
This colab demonstrates the basic operation of the Telluride Decoding Toolbox. The toolbox is designed for large-scale experiments, where the raw data is stored on disk because it is too large to fit into memory, or where you want to run multiple decoding jobs in parallel for hyperparameter searches.
In general, there are three stages to our decoding pipeline: ingest, decode, decide. This colab demonstrates the first two.
Ingest the raw data, and store it on disk in TFRecord format. This is the native format for Tensorflow, and each file consists of the raw audio and brain data, one (vector) sample per time step for one continuous trial. Multiple forms of the data can be stored in each record. Thus you might have 64 channels of EEG data, the audio intensity, as well as perhaps the phone class and the corresponding spectrogram. All data is sampled as the same frame rate (i.e. 100Hz).
Build a decoding model. The decoding model can take several forms, such as linear regression, CCA or several forms of DNN. When building regression models you specify which input fields (from the TFRecord data) are used as input, along with any desired temporal context, and which field is the output and to be predicted. For a backward model you might use EEG with 30 frames of post_context to predict the current (audio) intensity. For CCA you specify two sets of input fields, which are rotated to provide the highest correlation.
Decide to which signal the subject is attending. This code implements a State Space Model (from Maryland) and a simple Markov model suggested by KUL.
It uses EEG data, first published as part of version 1 of the Telluride Decoding Toolbox. This data consists of 32 trials from four simultaneous subjects listening to 4 different audio tracks. There are 64 electrodes in each trial, split across the 4 subjects. The original data is provided in Matlab format and is downloaded by the colab.
This colab shows how to download and import the toolbox, download and import the original Matlab data, and then measure linear and CCA models predicting the intensity that generated the EEG data.
This is a work in progress. Comments and/or questions are welcome as we finalize the documentation and the code. The source code (a lousy way to document the toolkit) is available on GitHub.
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip uninstall -y telluride-decoding
# To get test versions and install them on this machine.
# !pip install mock pyedflib # Need to list these since they are not on test.pypi
# !pip install --index-url https://test.pypi.org/project telluride-decoding
# To install the latest released version:
!pip install telluride-decoding
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tf.compat.v1.enable_v2_behavior()
from telluride_decoding import brain_data
from telluride_decoding import brain_model
from telluride_decoding import decoding
from telluride_decoding import regression
from telluride_decoding import regression_data
# Reset the plot library backend to the default.
import matplotlib
matplotlib.use('module://ipykernel.pylab.backend_inline')
# Force flags to be parsed (none here) to set flag defaults. There are some parts
# of the toolbox that still use FLAGS, but shouldn't :-(
from absl import flags
flags.FLAGS(['colab']) # Cause flags library to set the default.
telluride_data = regression_data.RegressionDataTelluride4()
cache_dir = '/tmp'
telluride4_url = 'https://drive.google.com/uc?id=0ByZjGXodIlspWmpBcUhvenVQa1k'
if not telluride_data.is_data_local(cache_dir):
print('Downloading the Telluride4 data...')
telluride_data.download_data(telluride4_url, cache_dir)
tf_dir = '/tmp/telluride_tf'
!mkdir {tf_dir}
if not telluride_data.is_data_ingested(tf_dir):
print('Ingesting the Telluride4 data...')
telluride_data.ingest_data(cache_dir, tf_dir, 100)
!ls {tf_dir}
!cat {tf_dir}/README.txt
"""
Explanation: Install the right software
(telluride-decoding plus prereqs)
End of explanation
"""
# If you have problems or get unexplained errors, you might want to turn on
# log messages with these calls. Change the False to True to enable the logging.
if False:
from absl import logging
logging.set_verbosity(logging.INFO)
logging.set_stderrthreshold('info')
logging.info('Testing')
telluride4_options = decoding.DecodingOptions()
telluride4_options.input_field = 'eeg'
telluride4_options.output_field = 'intensity'
telluride4_options.input2_field = 'intensity'
telluride4_options.tfexample_dir = tf_dir
telluride4_options.dnn_regressor = 'cca'
telluride4_options.post_context = 21
telluride4_options.input2_pre_context = 15
telluride4_options.input2_post_context = 15
telluride4_options.test_metric = 'cca_pearson_correlation_first'
telluride4_options.shuffle_buffer_size = 0 # No need when training a CCA model
telluride4_options.cca_dimensions = 5
print(telluride4_options.experiment_parameters('\n'))
telluride4_data = regression.get_brain_data_object(telluride4_options)
# Get the actual TF dataset, as an example of the full TDT data object.
telluride4_dataset = telluride4_data.create_dataset('train')
# Create the brain model (CCA as specified above.)
brain_model = decoding.create_brain_model(telluride4_options, telluride4_dataset)
# Now train the regressor. Since this regressor is built with CCA, only one
# pass through the data is needed to collect the statistics and create the model.
train_results, test_results = decoding.train_and_test(telluride4_options, telluride4_data, brain_model)
test_results
"""
Explanation: Decode the Telluride4 EEG Data
End of explanation
"""
telluride4_regression = regression.Telluride4CCA(telluride4_options)
reg_values = np.power(10.0, np.arange(-3, 2, 1))
results = telluride4_regression.jackknife_over_regularizations(telluride4_options,
reg_values)
results
reg_values = list(results.keys())
result_means = [results[k][0] for k in reg_values]
result_stddev = [results[k][1] for k in reg_values]
plt.errorbar(reg_values, result_means, result_stddev)
matplotlib.pyplot.xscale('log')
plt.xlabel('Regularization Value')
plt.ylabel('Jackknifed Correlation');
"""
Explanation: Run complete jackknife test
Run a complete jackknife (leave-one-out) test using the Telluride4 data. This takes a while to run, but gives us error bounds (really standard deviations). Compute the test-set correlation when using CCA.
End of explanation
"""
|
probml/pyprobml | notebooks/book2/04/gibbs_demo_potts_jax.ipynb | mit | import jax
import jax.numpy as jnp
from jax import lax
from jax import vmap
from jax import random
from jax import jit
import numpy as np
import matplotlib.pyplot as plt
try:
    from tqdm import tqdm
except ModuleNotFoundError:
%pip install -qq tqdm
    from tqdm import tqdm
"""
Explanation: Gibbs sampling for a Potts model on a 2d lattice
Ming Liang Ang.
The math behind the model
The potts model
$$p(x) = \frac{1}{Z}\exp(-\mathcal{E}(x)) \\
\mathcal{E}(x) = - J\sum_{i\sim j}\mathbb{I}(x_i = x_j) \\
p(x_i = k | x_{-i}) = \frac{\exp(J\sum_{n\in \text{nbr}(i)}\mathbb{I}(x_n = k))}{\sum_{k'}\exp(J\sum_{n\in \text{nbr}(i)}\mathbb{I}(x_n = k'))}$$
In order to efficiently compute
$$
\sum_{n\in \text{nbr}(i)}\mathbb{I}(x_n = k)
$$
for all the different states in our Potts model, we use a convolution. The idea is to first represent each Potts model state as a one-hot array and then apply a convolution to compute the logits.
$$\begin{pmatrix}
S_{11} & S_{12} & \ldots & S_{1n} \\
S_{21} & S_{22} & \ldots & S_{2n} \\
\vdots & & \ddots & \vdots \\
S_{n1} & S_{n2} & \ldots & S_{nn}
\end{pmatrix} \underset{\longrightarrow}{\text{padding}} \begin{pmatrix}
0 & \ldots & 0 & \ldots & 0 & 0 \\
0 & S_{11} & S_{12} & \ldots & S_{1n} & 0 \\
0 & S_{21} & S_{22} & \ldots & S_{2n} & 0 \\
\vdots & & & \ddots & \vdots & \\
0 & S_{n1} & S_{n2} & \ldots & S_{nn} & 0 \\
0 & \ldots & 0 & \ldots & 0 & 0
\end{pmatrix} \underset{\longrightarrow}{\text{convolution}} \begin{pmatrix}
E_{11} & E_{12} & \ldots & E_{1n} \\
E_{21} & E_{22} & \ldots & E_{2n} \\
\vdots & & \ddots & \vdots \\
E_{n1} & E_{n2} & \ldots & E_{nn}
\end{pmatrix}$$
An example
$$\begin{pmatrix}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix} \underset{\longrightarrow}{\text{padding}} \begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix} \underset{\longrightarrow}{\text{convolution}} \begin{pmatrix}
2 & 3 & 2 \\
3 & 4 & 3 \\
2 & 3 & 2
\end{pmatrix}$$
Here the matrix
$$\begin{pmatrix}
2 & 3 & 2 \\
3 & 4 & 3 \\
2 & 3 & 2
\end{pmatrix}$$
gives, for each cell, the number of neighbours that share its value in the matrix
$$\begin{pmatrix}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix}$$
For more than 2 states, we represent the state matrix as a 3d tensor, which you can imagine as the state matrix with each element replaced by its one-hot vector.
Import libraries
End of explanation
"""
key = random.PRNGKey(12234)
"""
Explanation: RNG key
End of explanation
"""
K = 10
ix = 128
iy = 128
"""
Explanation: The number of states and size of the 2d grid
End of explanation
"""
kernel = jnp.zeros((3, 3, 1, 1), dtype=jnp.float32)
kernel += jnp.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])[:, :, jnp.newaxis, jnp.newaxis]
dn = lax.conv_dimension_numbers(
(K, ix, iy, 1), # only ndim matters, not shape
kernel.shape, # only ndim matters, not shape
("NHWC", "HWIO", "NHWC"),
) # the important bit
"""
Explanation: The convolutional kernel for computing the energy of the Markov blanket of each node
End of explanation
"""
mask = jnp.indices((K, iy, ix, 1)).sum(axis=0) % 2
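# Why a checkerboard? On a 4-connected grid, "even" sites are conditionally
# independent of each other given the "odd" sites (and vice versa), so half of the
# lattice can be Gibbs-updated in parallel while the other half stays fixed.
# The parity array above (and its complement) selects those two alternating halves.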
def checkerboard_pattern1(x):
return mask[0, :, :, 0]
def checkerboard_pattern2(x):
return mask[1, :, :, 0]
def make_checkerboard_pattern1():
arr = vmap(checkerboard_pattern1, in_axes=0)(jnp.array(K * [1]))
return jnp.expand_dims(arr, -1)
def make_checkerboard_pattern2():
arr = vmap(checkerboard_pattern2, in_axes=0)(jnp.array(K * [1]))
return jnp.expand_dims(arr, -1)
def test_state_mat_update(state_mat_update):
"""
Checking the checkerboard pattern is the same for each channel
"""
mask = make_checkerboard_pattern1()
inverse_mask = make_checkerboard_pattern2()
state_mat = jnp.zeros((K, 128, 128, 1))
sample = jnp.ones((K, 128, 128, 1))
new_state = state_mat_update(mask, inverse_mask, sample, state_mat)
assert jnp.array_equal(new_state[0, :, :, 0], new_state[1, :, :, 0])
def test_state_mat_update2(state_mat_update):
"""
Checking the checkerboard pattern is the same for each channel
"""
mask = make_checkerboard_pattern1()
inverse_mask = make_checkerboard_pattern2()
state_mat = jnp.ones((K, 128, 128, 1))
sample = jnp.zeros((K, 128, 128, 1))
new_state = state_mat_update(mask, inverse_mask, sample, state_mat)
assert jnp.array_equal(new_state[0, :, :, 0], new_state[1, :, :, 0])
def test_energy(energy):
"""
If you give the convolution all ones, it will produce the number of edges
it is connected to on a grid i.e the number of neighbours around it.
"""
X = jnp.ones((3, 3))
state_mat = jax.nn.one_hot(X, K, axis=0)[:, :, :, jnp.newaxis]
energy = energy(state_mat, 1)
assert np.array_equal(energy[1, :, :, 0], jnp.array([[2, 3, 2], [3, 4, 3], [2, 3, 2]]))
def sampler(K, key, logits):
# Sample from the energy using gumbel trick
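    # Gumbel-max trick: for u ~ Uniform(0,1), g = -log(-log(u)) is Gumbel(0,1) noise,
    # and argmax_k(logit_k + g_k) is a draw from softmax(logits). This samples one
    # state per pixel without explicitly normalizing the K-way distribution.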
u = random.uniform(key, shape=(K, ix, iy, 1))
sample = jnp.argmax(logits - jnp.log(-jnp.log(u)), axis=0)
sample = jax.nn.one_hot(sample, K, axis=0)
return sample
def state_mat_update(mask, inverse_mask, sample, state_mat):
# Update the state_mat using masking
masked_sample = mask * sample
masked_state_mat = inverse_mask * state_mat
state_mat = masked_state_mat + masked_sample
return state_mat
def energy(state_mat, jvalue):
# Calculate energy
logits = lax.conv_general_dilated(state_mat, jvalue * kernel, (1, 1), "SAME", (1, 1), (1, 1), dn)
return logits
def gibbs_sampler(key, jvalue, niter=1):
key, key2 = random.split(key)
X = random.randint(key, shape=(ix, iy), minval=0, maxval=K)
state_mat = jax.nn.one_hot(X, K, axis=0)[:, :, :, jnp.newaxis]
mask = make_checkerboard_pattern1()
inverse_mask = make_checkerboard_pattern2()
@jit
def state_update(key, state_mat, mask, inverse_mask):
logits = energy(state_mat, jvalue)
sample = sampler(K, key, logits)
state_mat = state_mat_update(mask, inverse_mask, sample, state_mat)
return state_mat
for iter in tqdm(range(niter)):
key, key2 = random.split(key2)
state_mat = state_update(key, state_mat, mask, inverse_mask)
mask, inverse_mask = inverse_mask, mask
return jnp.squeeze(jnp.argmax(state_mat, axis=0), axis=-1)
"""
Explanation: Creating the checkerboard
End of explanation
"""
test_state_mat_update(state_mat_update)
test_state_mat_update2(state_mat_update)
test_energy(energy)
"""
Explanation: Running the test
End of explanation
"""
Jvals = [1.42, 1.43, 1.44]
gibbs_sampler(key, 1, niter=2)
dfig, axs = plt.subplots(1, len(Jvals), figsize=(8, 8))
for t in tqdm(range(len(Jvals))):
arr = gibbs_sampler(key, Jvals[t], niter=8000)
axs[t].imshow(arr, cmap="Accent", interpolation="nearest")
axs[t].set_title(f"J = {Jvals[t]}")
"""
Explanation: Running the model
End of explanation
"""
|
mnschmit/LMU-Syntax-nat-rlicher-Sprachen | 07-notebook-solution.ipynb | apache-2.0 | grammar = """
S -> NP VP
NP -> DET[GEN=?x] NOM[GEN=?x]
NOM[GEN=?x] -> ADJ NOM[GEN=?x] | N[GEN=?x]
ADJ -> "schöne" | "kluge" | "dicke"
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask] -> "Mann"
N[GEN=fem] -> "Frau"
N[GEN=neut] -> "Buch"
VP -> V NP NP | V NP | V
V -> "gibt" | "schenkt" | "schläft" | "gefällt" | "kennt"
"""
import nltk
from IPython.display import display
import sys
def test_grammar(grammar, sentences):
cfg = nltk.grammar.FeatureGrammar.fromstring(grammar)
parser = nltk.parse.FeatureEarleyChartParser(cfg)
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
sys.stdout.flush()
results = parser.parse(sent.split())
analyzed = False
for tree in results:
display(tree) # tree.draw() oder print(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
sys.stderr.flush()
pos_sentences = [
"der Mann schläft",
"der schöne Mann schläft",
"der Mann gibt der Frau das Buch"
]
neg_sentences = ["das Mann schläft", "das schöne Mann schläft"]
test_grammar(grammar, neg_sentences)
test_grammar(grammar, pos_sentences)
"""
Explanation: Exercise Sheet 7 (Übungsblatt 7)
In-class exercises
Exercise 1 CFG: Agreement in noun phrases
The following grammar corresponds to the grammar from Exercise Sheet 4 at the end of the in-class exercises. (You may instead use the grammar you built yourself during that exercise as your starting point.)
Use the following table on the ambiguity of the forms of the German definite article as a guide, and adapt the grammar so that it only accepts grammatically correct noun phrases as parts of sentences. Focus on gender agreement between article and noun.
|Form|possible features|
|----|-----------------|
|der|[NUM=sg, GEN=mas, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=dat]|
||[NUM=sg, GEN=fem, KAS=GEN]|
||[NUM=pl, KAS=GEN]|
|die|[NUM=sg, GEN=fem, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=akk]|
||[NUM=pl, KAS=nom]|
||[NUM=pl, KAS=akk]|
|das|[NUM=sg, GEN=neu, KAS=nom]|
||[NUM=sg, GEN=neu, KAS=akk]|
End of explanation
"""
grammar = """
S -> NP[KAS=nom] VP
NP[KAS=?y] -> DET[GEN=?x,KAS=?y] NOM[GEN=?x]
NOM[GEN=?x] -> ADJ NOM[GEN=?x] | N[GEN=?x]
ADJ -> "schöne" | "kluge" | "dicke"
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask] -> "Mann"
N[GEN=fem] -> "Frau"
N[GEN=neut] -> "Buch"
VP -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y] NP[KAS=?x] NP[KAS=?y]
VP -> V[VAL=?x,SUBCAT=tr] NP[KAS=?x]
VP -> V[SUBCAT=intr]
V[SUBCAT=ditr, VAL1=dat, VAL2=akk] -> "gibt" | "schenkt"
V[SUBCAT=intr] -> "schläft"
V[SUBCAT=tr,VAL=dat] -> "gefällt"
V[SUBCAT=tr,VAL=akk] -> "kennt"
"""
pos_sentences.extend([
"das Buch gefällt der Frau",
"das Buch kennt die Frau"
])
neg_sentences.extend([
"der Mann schläft das Buch",
"die Frau gefällt das Buch",
"das Buch kennt",
"die Frau gibt das Buch",
"die Frau gibt die Frau das Buch"
])
test_grammar(grammar, pos_sentences)
test_grammar(grammar, neg_sentences)
"""
Explanation: Exercise 2 CFG: Case
Next, case constraints are to be integrated into the grammar:
There is only one noun phrase in the nominative (the subject).
Depending on the verb's valency slots, only noun phrases in the correct case should be accepted.
Optional: Try to account for the free word order of German.
End of explanation
"""
grammar = """
S -> NP[KAS=nom,NUM=?x] VP[NUM=?x]
NP[KAS=?y,NUM=?z] -> DET[GEN=?x,KAS=?y,NUM=?z] NOM[GEN=?x,NUM=?z]
NOM[GEN=?x,NUM=?z] -> ADJ[NUM=?z] NOM[GEN=?x,NUM=?z] | N[GEN=?x,NUM=?z]
ADJ[NUM=sg] -> "schöne" | "kluge" | "dicke"
ADJ[NUM=pl] -> "schönen" | "klugen" | "dicken"
DET[GEN=mask,KAS=nom,NUM=sg] -> "der"
DET[GEN=fem,KAS=dat,NUM=sg] -> "der"
DET[GEN=fem,KAS=nom,NUM=sg] -> "die"
DET[GEN=fem,KAS=akk,NUM=sg] -> "die"
DET[GEN=neut,KAS=nom,NUM=sg] -> "das"
DET[GEN=neut,KAS=akk,NUM=sg] -> "das"
DET[KAS=nom,NUM=pl] -> "die"
DET[KAS=akk,NUM=pl] -> "die"
N[GEN=mask,NUM=sg] -> "Mann"
N[GEN=mask,NUM=pl] -> "Männer"
N[GEN=fem,NUM=sg] -> "Frau"
N[GEN=fem,NUM=pl] -> "Frauen"
N[GEN=neut,NUM=sg] -> "Buch"
N[GEN=neut,NUM=pl] -> "Bücher"
VP[NUM=?z] -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y, NUM=?z] NP[KAS=?x] NP[KAS=?y]
VP[NUM=?z] -> V[VAL=?x,SUBCAT=tr, NUM=?z] NP[KAS=?x]
VP[NUM=?z] -> V[SUBCAT=intr, NUM=?z]
V[SUBCAT=ditr, VAL1=dat, VAL2=akk,NUM=sg] -> "gibt" | "schenkt"
V[SUBCAT=ditr, VAL1=dat, VAL2=akk,NUM=pl] -> "geben" | "schenken"
V[SUBCAT=intr,NUM=sg] -> "schläft"
V[SUBCAT=intr,NUM=pl] -> "schlafen"
V[SUBCAT=tr,VAL=dat,NUM=sg] -> "gefällt"
V[SUBCAT=tr,VAL=dat,NUM=pl] -> "gefallen"
V[SUBCAT=tr,VAL=akk,NUM=sg] -> "kennt"
V[SUBCAT=tr,VAL=akk,NUM=pl] -> "kennen"
"""
pos_sentences.extend([
"die Männer geben der Frau das Buch",
"die Bücher gefallen der Frau",
"die Frauen schlafen"
])
neg_sentences.extend([
"der Mann geben der Frau das Buch",
"das Buch gefällt der Frauen",
"die Frauen schläft"
])
test_grammar(grammar, pos_sentences)
test_grammar(grammar, neg_sentences)
"""
Explanation: Homework
Exercise 7 Plural subjects
Extend the grammar created in the in-class exercises so that the subject can also appear in the plural.
To do this, you need to:
1. Create lexical rules for the plural forms of the verbs, adjectives, and nouns (nominative is sufficient).
1. Complete the lexical rules for the article form die with the correct feature structure for the plural.
1. Formulate a number-agreement condition between verb and subject.
End of explanation
"""
grammar = """
S -> NP[KAS=nom,NUM=?x] VP[NUM=?x,-SBJ]
S -> ADV VP[+SBJ]
NP[KAS=?y,NUM=?z] -> DET[GEN=?x,KAS=?y,NUM=?z] NOM[GEN=?x,NUM=?z]
NOM[GEN=?x,NUM=?z] -> ADJ[NUM=?z] NOM[GEN=?x,NUM=?z] | N[GEN=?x,NUM=?z]
ADJ[NUM=sg] -> "schöne" | "kluge" | "dicke"
ADJ[NUM=pl] -> "schönen" | "klugen" | "dicken"
DET[GEN=mask,KAS=nom,NUM=sg] -> "der"
DET[GEN=fem,KAS=dat,NUM=sg] -> "der"
DET[GEN=fem,KAS=nom,NUM=sg] -> "die"
DET[GEN=fem,KAS=akk,NUM=sg] -> "die"
DET[GEN=neut,KAS=nom,NUM=sg] -> "das"
DET[GEN=neut,KAS=akk,NUM=sg] -> "das"
DET[KAS=nom,NUM=pl] -> "die"
DET[KAS=akk,NUM=pl] -> "die"
N[GEN=mask,NUM=sg] -> "Mann"
N[GEN=mask,NUM=pl] -> "Männer"
N[GEN=fem,NUM=sg] -> "Frau"
N[GEN=fem,NUM=pl] -> "Frauen"
N[GEN=neut,NUM=sg] -> "Buch"
N[GEN=neut,NUM=pl] -> "Bücher"
VP[NUM=?z,+SBJ] -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y, NUM=?z] NP[KAS=nom,NUM=?z] NP[KAS=?x] NP[KAS=?y]
VP[NUM=?z,-SBJ] -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y, NUM=?z] NP[KAS=?x] NP[KAS=?y]
VP[NUM=?z,-SBJ] -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y, NUM=?z] ADV NP[KAS=?x] NP[KAS=?y]
VP[NUM=?z,-SBJ] -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y, NUM=?z] NP[KAS=?x] ADV NP[KAS=?y]
VP[NUM=?z,-SBJ] -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y, NUM=?z] NP[KAS=?x] NP[KAS=?y] ADV
VP[NUM=?z,+SBJ] -> V[VAL=?x,SUBCAT=tr, NUM=?z] NP[KAS=nom,NUM=?z] NP[KAS=?x]
VP[NUM=?z,-SBJ] -> V[VAL=?x,SUBCAT=tr, NUM=?z] NP[KAS=?x]
VP[NUM=?z,-SBJ] -> V[VAL=?x,SUBCAT=tr, NUM=?z] ADV NP[KAS=?x]
VP[NUM=?z,-SBJ] -> V[VAL=?x,SUBCAT=tr, NUM=?z] NP[KAS=?x] ADV
VP[NUM=?z,+SBJ] -> V[SUBCAT=intr, NUM=?z] NP[KAS=nom,NUM=?z]
VP[NUM=?z,-SBJ] -> V[SUBCAT=intr, NUM=?z]
VP[NUM=?z,-SBJ] -> V[SUBCAT=intr, NUM=?z] ADV
V[SUBCAT=ditr, VAL1=dat, VAL2=akk,NUM=sg] -> "gibt" | "schenkt"
V[SUBCAT=ditr, VAL1=dat, VAL2=akk,NUM=pl] -> "geben" | "schenken"
V[SUBCAT=intr,NUM=sg] -> "schläft"
V[SUBCAT=intr,NUM=pl] -> "schlafen"
V[SUBCAT=tr,VAL=dat,NUM=sg] -> "gefällt"
V[SUBCAT=tr,VAL=dat,NUM=pl] -> "gefallen"
V[SUBCAT=tr,VAL=akk,NUM=sg] -> "kennt"
V[SUBCAT=tr,VAL=akk,NUM=pl] -> "kennen"
ADV -> "heute" | "morgen"
"""
pos_sentences.extend([
"heute gibt der Mann der Frau das Buch",
"der Mann gibt heute der Frau das Buch",
"der Mann gibt der Frau heute das Buch",
"der Mann gibt der Frau das Buch heute",
"heute geben die Männer der Frau das Buch"
])
neg_sentences.extend([
"heute der Mann gibt der Frau das Buch",
"heute gibt der Frau das Buch",
"heute geben der Mann der Frau das Buch"
])
test_grammar(grammar, pos_sentences)
test_grammar(grammar, neg_sentences)
"""
Explanation: Exercise 8 Adverbs and verb-second word order
Now add the two adverbs heute and morgen to the grammar. Adverbs can in principle be placed very freely within a sentence. A peculiarity of German, however, is the so-called verb-second (V2) word order, as becomes apparent in sentences like Heute schläft der Mann.
Try to implement all of the possibilities:
End of explanation
"""
|
brainiak/brainiak | examples/reconstruct/iem_example_synthetic_RF_data.ipynb | apache-2.0 | # Set up parameters
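# Assumed imports for this excerpt (the notebook's original import cell is not shown
# here); the IEM module path below is an assumption and may differ across BrainIAK versions.
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
from brainiak.reconstruct import iem as IEM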
n_channels = 6
cos_exponent = 5
range_start = 0
range_stop = 360
feature_resolution = 360
iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='circular', range_start=range_start,
range_stop=range_stop, channel_density=feature_resolution)
# You can also try the half-circular space. Here's the associated code:
# range_stop = 180 # since 0 and 360 degrees are the same, we want to stop shy of 360
# feature_resolution = 180
# iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='halfcircular', range_start=range_start,
# range_stop=range_stop, channel_density=feature_resolution, verbose=True)
stim_vals = np.linspace(0, feature_resolution - (feature_resolution/6), 6).astype(int)
"""
Explanation: In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space.
End of explanation
"""
# Generate synthetic data s.t. each voxel has a Gaussian tuning function
def generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=True, RF_noise=0.):
if random_tuning:
# Voxel selectivity is random
voxel_tuning = np.floor((np.random.rand(n_voxels) * range_stop) + range_start).astype(int)
else:
# Voxel selectivity is evenly spaced along the feature axis
voxel_tuning = np.linspace(range_start, range_stop, n_voxels+1)
voxel_tuning = voxel_tuning[0:-1]
voxel_tuning = np.floor(voxel_tuning).astype(int)
gaussian = scipy.signal.gaussian(feature_resolution, 15)
voxel_RFs = np.zeros((n_voxels, feature_resolution))
for i in range(0, n_voxels):
voxel_RFs[i, :] = np.roll(gaussian, voxel_tuning[i] - ((feature_resolution//2)-1))
voxel_RFs += np.random.rand(n_voxels, feature_resolution)*RF_noise # add noise to voxel RFs
voxel_RFs = voxel_RFs / np.max(voxel_RFs, axis=1)[:, None]
return voxel_RFs, voxel_tuning
def generate_voxel_data(voxel_RFs, n_voxels, trial_list, feature_resolution,
trial_noise=0.25):
one_hot = np.eye(feature_resolution)
# Generate trial-wise responses based on voxel RFs
if range_start > 0:
trial_list = trial_list + range_start
elif range_start < 0:
trial_list = trial_list - range_start
stim_X = one_hot[:, trial_list] #@ basis_set.transpose()
trial_data = voxel_RFs @ stim_X
trial_data += np.random.rand(n_voxels, trial_list.size)*(trial_noise*np.max(trial_data))
return trial_data
"""
Explanation: Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above.
The following two functions will generate the voxel RFs, and then generate several trials of that dataset. There are options to add uniform noise to either the RF or the trials.
End of explanation
"""
np.random.seed(100)
n_voxels = 50
n_train_trials = 120
training_stim = np.repeat(stim_vals, n_train_trials/6)
voxel_RFs, voxel_tuning = generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=False, RF_noise=0.1)
train_data = generate_voxel_data(voxel_RFs, n_voxels, training_stim, feature_resolution, trial_noise=0.25)
print(np.linalg.cond(train_data))
# print("Voxels are tuned to: ", voxel_tuning)
# Generate plots to look at the RF of an example voxel.
voxi = 20
f = plt.figure()
plt.subplot(1, 2, 1)
plt.plot(train_data[voxi, :])
plt.xlabel("trial")
plt.ylabel("activation")
plt.title("Activation over trials")
plt.subplot(1, 2, 2)
plt.plot(voxel_RFs[voxi, :])
plt.xlabel("degrees (motion direction)")
plt.axvline(voxel_tuning[voxi])
plt.title("Receptive field at {} deg".format(voxel_tuning[voxi]))
plt.suptitle("Example voxel")
plt.figure()
plt.imshow(train_data)
plt.ylabel('voxel')
plt.xlabel('trial')
plt.suptitle('Simulated data from each voxel')
"""
Explanation: Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials.
End of explanation
"""
# Fit an IEM
iem_obj.fit(train_data.transpose(), training_stim)
"""
Explanation: Using this synthetic training data, we can fit the IEM.
End of explanation
"""
# Let's visualize the basis functions.
channels = iem_obj.channels_
feature_axis = iem_obj.channel_domain
print(channels.shape)
plt.figure()
plt.subplot(1, 2, 1)
for i in range(0, channels.shape[0]):
plt.plot(feature_axis, channels[i,:])
plt.title('Channels (i.e. basis functions)')
plt.subplot(1, 2, 2)
plt.plot(np.sum(channels, 0))
plt.ylim(0, 2.5)
plt.title('Sum across channels')
"""
Explanation: Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate.
Remember that the plot below is in circular space. Hence, the channels wrap around the x-axis. For example, the channel depicted in blue is centered at 0 degrees (far left of plot), which is the same as 360 degrees (far right of plot).
We can check whether the channels properly tile the feature space by summing across all of them. This is shown on the right plot. It should be a straight horizontal line.
End of explanation
"""
# Generate test data
n_test_trials = 12
test_stim = np.repeat(stim_vals, n_test_trials/len(stim_vals))
np.random.seed(330)
test_data = generate_voxel_data(voxel_RFs, n_voxels, test_stim, feature_resolution, trial_noise=0.25)
# Predict test stim & get R^2 score
pred_feature = iem_obj.predict(test_data.transpose())
R2 = iem_obj.score(test_data.transpose(), test_stim)
print("Predicted features are: {} degrees.".format(pred_feature))
print("Actual features are: {} degrees.".format(test_stim))
print("Test R^2 is {}".format(R2))
"""
Explanation: Now we can generate test data and see how well we can predict the test stimuli.
End of explanation
"""
# Now get the model-based reconstructions, which are continuous
# functions that should peak at each test stimulus feature
recons = iem_obj._predict_feature_responses(test_data.transpose())
f = plt.figure()
for i in range(0, n_test_trials-1):
plt.plot(feature_axis, recons[:, i])
for i in stim_vals:
plt.axvline(x=i, color='k', linestyle='--')
plt.title("Reconstructions of {} degrees".format(np.unique(test_stim)))
"""
Explanation: In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature.
Below we will plot all of the reconstructions. There will be some variability because of the noise added during the synthetic data generation.
End of explanation
"""
iem_obj.verbose = False
def train_and_test(nvox, ntrn, ntst, rfn, tn):
    vRFs, vox_tuning = generate_voxel_RFs(nvox, feature_resolution, random_tuning=True, RF_noise=rfn)
    trn = np.repeat(stim_vals, ntrn // len(stim_vals)).astype(int)
    trnd = generate_voxel_data(vRFs, nvox, trn, feature_resolution, trial_noise=tn)
    tst = np.repeat(stim_vals, ntst // len(stim_vals)).astype(int)
    tstd = generate_voxel_data(vRFs, nvox, tst, feature_resolution, trial_noise=tn)
    iem_obj.fit(trnd.transpose(), trn)
    recons = iem_obj._predict_feature_responses(tstd.transpose())
    pred_ori = iem_obj.predict(tstd.transpose())
    R2 = iem_obj.score(tstd.transpose(), tst)
    return recons, pred_ori, R2, tst
"""
Explanation: As a sanity check, let's see how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below.
End of explanation
"""
np.random.seed(300)
vox_list = (5, 10, 15, 25, 50)
R2_list = np.zeros(len(vox_list))
for idx, nvox in enumerate(vox_list):
    recs, preds, R2_list[idx], test_features = train_and_test(nvox, 120, 30, 0.1, 0.25)
print("The R2 values for increasing numbers of voxels: ")
print(R2_list)
"""
Explanation: We'll iterate through the list and look at the resulting R^2 values.
End of explanation
"""
|
flowersteam/naminggamesal | notebooks/1_Intro_Vocabulary.ipynb | agpl-3.0 | from naminggamesal import ngvoc
"""
Explanation: Introducing the objects
Here we introduce the different objects involved in the Naming Games models we are using. You can go directly to any subsection and execute the code from there; the subsections are independent.
Vocabulary
The first object is the vocabulary. It represents a lexical description of objects, that is to say, associations between $\textit{words}$ and $\textit{meanings}$. Here we consider vocabularies to be matrices filled with 0s and 1s, of size (#meanings, #words). The words and meanings are symbolic: they can be referred to only by their respective column (for words) or row (for meanings) index in the matrix.
End of explanation
"""
voc_cfg={
'voc_type':'lil_matrix',
'M':5,
'W':10
}
voctest=ngvoc.Vocabulary(**voc_cfg)
voctest
"""
Explanation: We create a Vocabulary object of sparse type ('lil_matrix'; more information on other possibilities: Design_newVocabulary.ipynb), with M=5 meanings and W=10 words.
End of explanation
"""
print(voctest)
"""
Explanation: It is initialized completely empty: no word is associated with any meaning yet.
End of explanation
"""
voctest.add(3,4,1)
print(voctest)
"""
Explanation: Manipulate the vocabulary
We can then <u>add</u> an association between meaning 3 and word 4 (of value 1). This means that to refer to meaning 3, an agent using this vocabulary would use word 4.
End of explanation
"""
voctest.add(3,4,0)
print(voctest)
"""
Explanation: To remove the link, simply add it with value 0.
End of explanation
"""
voctest.fill()
print(voctest)
"""
Explanation: Let's <u>fill</u> the entire matrix with ones
End of explanation
"""
voctest.rm_hom(2,2)
print(voctest)
voctest.fill()
voctest.rm_syn(3,4)
print(voctest)
"""
Explanation: We can <u>remove the homonyms or synonyms</u> of a given meaning/word association (homonyms: other meanings sharing the same word; synonyms: other words expressing the same meaning)
End of explanation
"""
import random
import matplotlib.pyplot as plt
voc_cfg2={
'voc_type':'lil_matrix',
'M':5,
'W':10
}
nlink=10
voctest2=ngvoc.Vocabulary(**voc_cfg2)
for i in range(0, nlink):
    voctest2.add(random.randint(0, voc_cfg2['M']-1), random.randint(0, voc_cfg2['W']-1), round(random.random(), 3))
print(voctest2)
"""
Explanation: Useful functions
The vocabulary exposes helper functions, such as finding particular subsets of meanings or words (known or unknown) and picking meanings and words among them. First we initialize a random vocabulary, and then apply all of these functions.
Note: small values of M and W make it easier to see what is happening locally; higher values may be more interesting for the visualizations.
End of explanation
"""
#voctest2.add(0,0,1)
#voctest2.add(0,0,0)
print("Vocabulary:")
print(voctest2)
print("")
print("Known words:")
print(voctest2.get_known_words())
print("Random known word:")
print(voctest2.get_random_known_w())
print("")
print("Unknown words:")
print(voctest2.get_unknown_words())
print("New unknown word:")
print(voctest2.get_new_unknown_w())
print("")
print("Known meanings:")
print(voctest2.get_known_meanings())
print("Random known meaning:")
print(voctest2.get_random_known_m())
print("")
print("Unknown meanings:")
print(voctest2.get_unknown_meanings())
print("New unknown meaning:")
print(voctest2.get_new_unknown_m())
print("")
print("")
print("Known words for meaning 1:")
print(voctest2.get_known_words(1))
print("Random known word for meaning 1:")
print(voctest2.get_random_known_w(1))
print("")
print("Unknown words for meaning 1:")
print(voctest2.get_unknown_words(1))
print("")
print("Known meanings for word 2:")
print(voctest2.get_known_meanings(2))
print("Random known meaning for word 2:")
print(voctest2.get_random_known_m(2))
print("")
print("Unknown meanings for word 2:")
print(voctest2.get_unknown_meanings(2))
"""
Explanation: Here you can modify the $voctest2$ variable by hand before executing the code:
End of explanation
"""
voctest2._cache
voctest2.visual(vtype="hom")
plt.figure()
voctest2.visual(vtype="syn")
"""
Explanation: We introduce here a visual representation of the degree of synonymy/homonymy of the vocabulary. Colors are shared along a given row/column. Light colors indicate a high degree of synonymy/homonymy, dark ones a low degree.
End of explanation
"""
|