repo_name stringlengths 6 77 | path stringlengths 8 215 | license stringclasses 15 values | content stringlengths 335 154k |
|---|---|---|---|
ES-DOC/esdoc-jupyterhub | notebooks/snu/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Go to notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Dissolved Organic Matter
13. Tracers --> Particles
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particles
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
antoniomezzacapo/qiskit-tutorial | community/aqua/general/vqe.ipynb | apache-2.0 | from qiskit_aqua import Operator, run_algorithm
from qiskit_aqua.input import get_input_instance
"""
Explanation: Using Qiskit Aqua algorithms, a how-to guide
This notebook demonstrates how to use the Qiskit Aqua library to invoke a specific algorithm and process the result.
Further information is available for the algorithms in the github repo aqua/readme.md
End of explanation
"""
pauli_dict = {
'paulis': [{"coeff": {"imag": 0.0, "real": -1.052373245772859}, "label": "II"},
{"coeff": {"imag": 0.0, "real": 0.39793742484318045}, "label": "ZI"},
{"coeff": {"imag": 0.0, "real": -0.39793742484318045}, "label": "IZ"},
{"coeff": {"imag": 0.0, "real": -0.01128010425623538}, "label": "ZZ"},
{"coeff": {"imag": 0.0, "real": 0.18093119978423156}, "label": "XX"}
]
}
qubitOp = Operator.load_from_dict(pauli_dict)
"""
Explanation: Here an Operator instance is created for our Hamiltonian. In this case the paulis are from a previously computed Hamiltonian for simplicity
End of explanation
"""
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'algorithm': algorithm_cfg
}
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
result = run_algorithm(params,algo_input)
print(result)
"""
Explanation: We can now use the Operator without regard to how it was created. First we need to prepare the configuration params to invoke the algorithm. Here we will use the ExactEigensolver first, to return the smallest eigenvalue. A backend is not required since this is computed classically, not using quantum computation. We then add in the qubitOp Operator in dictionary format. Now the complete params can be passed to the algorithm and run. The result is a dictionary.
End of explanation
"""
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'L_BFGS_B',
'maxfun': 1000
}
var_form_cfg = {
'name': 'RYRZ',
'depth': 3,
'entanglement': 'linear'
}
params = {
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'name': 'statevector_simulator'}
}
result = run_algorithm(params,algo_input)
print(result)
"""
Explanation: Now we want VQE, so we change the algorithm name and add its other configuration parameters. VQE also needs an optimizer and a variational form. While we can omit them from the dictionary, such that defaults are used, here we specify them explicitly so we can set their parameters as we desire.
End of explanation
"""
|
jinzishuai/learn2deeplearn | google_dl_udacity/lesson3/3_regularization.ipynb | gpl-3.0 | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
"""
pickle_file = '../notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
graph = tf.Graph()
with graph.as_default():
...
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))+ \
tf.scalar_mul(beta, tf.nn.l2_loss(weights1)+tf.nn.l2_loss(weights2))
"""
Explanation: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
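Written out, a sketch of the regularized loss built in the snippet above (recalling that nn.l2_loss(t) returns sum(t**2)/2) is:
$$ \mathcal{L} = \frac{1}{m}\sum_{k=1}^{m} \mathrm{CE}\big(y_k, \hat{y}_k\big) + \beta\left(\tfrac{1}{2}\lVert W_1 \rVert_2^2 + \tfrac{1}{2}\lVert W_2 \rVert_2^2\right) $$
where beta is the regularization strength swept over in the summary below.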
End of explanation
"""
offset = 0 #offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
"""
Explanation: summary
With
python
batch_size = 128
num_hidden_nodes = 1024
beta = 1e-3
num_steps = 3001
Results
* Test accuracy: 88.5% with beta=0.000000 (no L2 regularization)
* Test accuracy: 86.7% with beta=0.000010
* Test accuracy: 88.8% with beta=0.000100
* Test accuracy: 92.6% with beta=0.001000
* Test accuracy: 89.7% with beta=0.010000
* Test accuracy: 82.2% with beta=0.100000
* Test accuracy: 10.0% with beta=1.000000
Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
End of explanation
"""
keep_rate = 0.5
dropout = tf.nn.dropout(activated_hidden_layer, keep_rate) # dropout is applied after the activation
logits = tf.matmul(dropout, weights2) + biases2
"""
Explanation: With
python
batch_size = 128
num_hidden_nodes = 1024
beta = 1e-3
num_steps = 3001
Results
* Original Test accuracy: 92.6% with beta=0.001000
* With offset = 0: Test accuracy: 67.5% with beta=0.001000
Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
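A minimal sketch of one way to do that (assuming the same activated_hidden_layer, weights2 and biases2 as in the snippet above) is to feed the keep probability through a placeholder, so dropout is active for training batches and effectively disabled for evaluation:
python
keep_prob = tf.placeholder(tf.float32)  # feed 0.5 in the training feed_dict, 1.0 when evaluating
dropout = tf.nn.dropout(activated_hidden_layer, keep_prob)
logits = tf.matmul(dropout, weights2) + biases2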
What happens to our extreme overfitting case?
End of explanation
"""
|
sdpython/pymyinstall | _doc/notebooks/example_profiling.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: test about profiling
How to profile from a notebook with cProfile, memory_profiler
End of explanation
"""
def big_list1(n):
l = []
for i in range(n):
l.append(i)
return l
def big_list2(n):
return list(range(n))
def big_list(n):
big_list1(n)
big_list2(n)
%prun -q -T profile_example.txt -D profile_example.stat big_list(100000)
with open('profile_example.txt', 'r') as f: content = f.read()
print(content)
import pstats
p = pstats.Stats('profile_example.stat')
p.strip_dirs().sort_stats('cumulative').print_stats()
"""
Explanation: profiling with cProfile
End of explanation
"""
from memory_profiler import memory_usage
mem_usage = memory_usage(-1, interval=.2, timeout=1)
mem_usage
"""
Explanation: Memory profile
End of explanation
"""
%%file script_test.py
def big_list1(n):
l = []
for i in range(n):
l.append(i)
return l
def big_list2(n):
return list(range(n))
def big_list(n):
big_list1(n)
big_list2(n)
from script_test import big_list, big_list1, big_list2
"""
Explanation: The functions to test must be part of a file and cannot be implemented in the notebook. So we save the functions in a script and import them just after.
End of explanation
"""
%load_ext memory_profiler
prof = %mprun -r -f big_list1 -f big_list2 -T profile_example.mem -r big_list(100000)
with open('profile_example.mem', 'r') as f : content = f.read()
print(content)
"""
Explanation: We run the memory profiling:
End of explanation
"""
%load_ext snakeviz
%system snakeviz --help
"""
Explanation: SnakeViz
End of explanation
"""
|
woobe/h2o_tutorials | introduction_to_machine_learning/py_03c_regression_ensembles.ipynb | mit | # Import all required modules
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
from h2o.grid.grid_search import H2OGridSearch
# Start and connect to a local H2O cluster
h2o.init(nthreads = -1)
"""
Explanation: Machine Learning with H2O - Tutorial 3c: Regression Models (Ensembles)
<hr>
Objective:
This tutorial explains how to create stacked ensembles of regression models for better out-of-bag performance.
<hr>
Wine Quality Dataset:
Source: https://archive.ics.uci.edu/ml/datasets/Wine+Quality
CSV (https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv)
<hr>
Steps:
Build GBM models using random grid search and extract the best one.
Build DRF models using random grid search and extract the best one.
Build DNN models using random grid search and extract the best one.
Use model stacking to combine different models.
<hr>
Full Technical Reference:
http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/modeling.html
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html
<br>
End of explanation
"""
# Import wine quality data from a local CSV file
wine = h2o.import_file("winequality-white.csv")
wine.head(5)
# Define features (or predictors)
features = list(wine.columns) # we want to use all the information
features.remove('quality') # we need to exclude the target 'quality' (otherwise there is nothing to predict)
features
# Split the H2O data frame into training/test sets
# so we can evaluate out-of-bag performance
wine_split = wine.split_frame(ratios = [0.8], seed = 1234)
wine_train = wine_split[0] # using 80% for training
wine_test = wine_split[1] # using the rest 20% for out-of-bag evaluation
wine_train.shape
wine_test.shape
"""
Explanation: <br>
End of explanation
"""
# define the criteria for random grid search
search_criteria = {'strategy': "RandomDiscrete",
'max_models': 9,
'seed': 1234}
"""
Explanation: <br>
Define Search Criteria for Random Grid Search
End of explanation
"""
# define the range of hyper-parameters for GBM grid search
# 27 combinations in total
hyper_params = {'sample_rate': [0.7, 0.8, 0.9],
'col_sample_rate': [0.7, 0.8, 0.9],
'max_depth': [3, 5, 7]}
# Set up GBM grid search
# Add a seed for reproducibility
gbm_rand_grid = H2OGridSearch(
H2OGradientBoostingEstimator(
model_id = 'gbm_rand_grid',
seed = 1234,
ntrees = 10000,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True, # needed for stacked ensembles
stopping_metric = 'mse',
stopping_rounds = 15,
score_tree_interval = 1),
search_criteria = search_criteria,
hyper_params = hyper_params)
# Use .train() to start the grid search
gbm_rand_grid.train(x = features,
y = 'quality',
training_frame = wine_train)
# Sort and show the grid search results
gbm_rand_grid_sorted = gbm_rand_grid.get_grid(sort_by='mse', decreasing=False)
print(gbm_rand_grid_sorted)
# Extract the best model from random grid search
best_gbm_model_id = gbm_rand_grid_sorted.model_ids[0]
best_gbm_from_rand_grid = h2o.get_model(best_gbm_model_id)
best_gbm_from_rand_grid.summary()
"""
Explanation: <br>
Step 1: Build GBM Models using Random Grid Search and Extract the Best Model
End of explanation
"""
# define the range of hyper-parameters for DRF grid search
# 27 combinations in total
hyper_params = {'sample_rate': [0.5, 0.6, 0.7],
'col_sample_rate_per_tree': [0.7, 0.8, 0.9],
'max_depth': [3, 5, 7]}
# Set up DRF grid search
# Add a seed for reproducibility
drf_rand_grid = H2OGridSearch(
H2ORandomForestEstimator(
model_id = 'drf_rand_grid',
seed = 1234,
ntrees = 200,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True), # needed for stacked ensembles
search_criteria = search_criteria,
hyper_params = hyper_params)
# Use .train() to start the grid search
drf_rand_grid.train(x = features,
y = 'quality',
training_frame = wine_train)
# Sort and show the grid search results
drf_rand_grid_sorted = drf_rand_grid.get_grid(sort_by='mse', decreasing=False)
print(drf_rand_grid_sorted)
# Extract the best model from random grid search
best_drf_model_id = drf_rand_grid_sorted.model_ids[0]
best_drf_from_rand_grid = h2o.get_model(best_drf_model_id)
best_drf_from_rand_grid.summary()
"""
Explanation: <br>
Step 2: Build DRF Models using Random Grid Search and Extract the Best Model
End of explanation
"""
# define the range of hyper-parameters for DNN grid search
# 81 combinations in total
hyper_params = {'activation': ['tanh', 'rectifier', 'maxout'],
'hidden': [[50], [50,50], [50,50,50]],
'l1': [0, 1e-3, 1e-5],
'l2': [0, 1e-3, 1e-5]}
# Set up DNN grid search
# Add a seed for reproducibility
dnn_rand_grid = H2OGridSearch(
H2ODeepLearningEstimator(
model_id = 'dnn_rand_grid',
seed = 1234,
epochs = 20,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True), # needed for stacked ensembles
search_criteria = search_criteria,
hyper_params = hyper_params)
# Use .train() to start the grid search
dnn_rand_grid.train(x = features,
y = 'quality',
training_frame = wine_train)
# Sort and show the grid search results
dnn_rand_grid_sorted = dnn_rand_grid.get_grid(sort_by='mse', decreasing=False)
print(dnn_rand_grid_sorted)
# Extract the best model from random grid search
best_dnn_model_id = dnn_rand_grid_sorted.model_ids[0]
best_dnn_from_rand_grid = h2o.get_model(best_dnn_model_id)
best_dnn_from_rand_grid.summary()
"""
Explanation: <br>
Step 3: Build DNN Models using Random Grid Search and Extract the Best Model
End of explanation
"""
# Define a list of models to be stacked
# i.e. best model from each grid
all_ids = [best_gbm_model_id, best_drf_model_id, best_dnn_model_id]
# Set up Stacked Ensemble
ensemble = H2OStackedEnsembleEstimator(model_id = "my_ensemble",
base_models = all_ids)
# use .train to start model stacking
# GLM as the default metalearner
ensemble.train(x = features,
y = 'quality',
training_frame = wine_train)
"""
Explanation: <br>
Model Stacking
End of explanation
"""
print('Best GBM model from Grid (MSE) : ', best_gbm_from_rand_grid.model_performance(wine_test).mse())
print('Best DRF model from Grid (MSE) : ', best_drf_from_rand_grid.model_performance(wine_test).mse())
print('Best DNN model from Grid (MSE) : ', best_dnn_from_rand_grid.model_performance(wine_test).mse())
print('Stacked Ensembles (MSE) : ', ensemble.model_performance(wine_test).mse())
"""
Explanation: <br>
Comparison of Model Performance on Test Data
End of explanation
"""
|
ARM-software/lisa | ipynb/deprecated/releases/ReleaseNotes_v16.12.ipynb | apache-2.0 | !head -n12 $LISA_HOME/logging.conf
"""
Explanation: Target Connectivity
Configurable logging system
All LISA modules have been updated to use a more consistent logging system, which can be configured using a single configuration file:
End of explanation
"""
!head -n30 $LISA_HOME/logging.conf | tail -n5
"""
Explanation: Each module has a unique name which can be used to assign a priority level for messages generated by that module.
End of explanation
"""
import logging
from conf import LisaLogging
LisaLogging.setup(level=logging.INFO)
"""
Explanation: The default logging level for a notebook can also be easily configured using these few lines:
End of explanation
"""
from env import TestEnv
te = TestEnv({
'platform' : 'linux',
'board' : 'juno',
'host' : '10.1.210.45',
'username' : 'root'
})
target = te.target
"""
Explanation: Removed Juno/Juno2 distinction
Juno R0 and Juno R2 boards are now accessible by specifying "juno" in the target configuration.
The previous distinction was required because the two boards reported HWMON channels in different ways.
This distinction is no longer needed, so both Juno boards can now be connected using the same platform data.
End of explanation
"""
tests_conf = {
"confs" : [
{
"tag" : "base",
"flags" : "ftrace",
"sched_features" : "NO_ENERGY_AWARE",
"cpufreq" : {
"governor" : "performance",
},
"files" : {
'/proc/sys/kernel/sched_is_big_little' : '0',
'!/proc/sys/kernel/sched_migration_cost_ns' : '500000'
},
}
]
}
"""
Explanation: Executor Module
Simplified tests definition using in-code configurations
Automated LISA tests previously configured the Executor using JSON files. This is still possible, but the existing tests now use Python dictionaries directly in the code. In the short term, this allows de-duplicating configuration elements that are shared between multiple tests. It will later allow more flexible test configuration.
See tests/eas/acceptance.py for an example of how this is currently used.
Support to write files from Executor configuration
https://github.com/ARM-software/lisa/pull/209
A new "files" attribute can be added to Executor configurations which allows
to specify a list files (e.g. sysfs and procfs) and values to be written to that files.
For example, the following test configuration:
End of explanation
"""
from trace import Trace
import json
with open('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/platform.json', 'r') as fh:
platform = json.load(fh)
trace = Trace('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/trace.dat',
              ['sched_switch'], platform)
logging.info("%d tasks loaded from trace", len(trace.getTasks()))
logging.info("The rt-app task in this trace has these PIDs:")
logging.info(" %s", trace.getTasks()['rt-app'])
"""
Explanation: can be used to run a test where the platform is configured to
- disable the "sched_is_big_little" flag (if present)
- set "sched_migration_cost_ns" to 0.5ms (500000ns)
Notice that a value written to a file is verified only if the file path is
prefixed by a '/'. Otherwise, the write never fails, e.g. if the file does not exist.
Support to freeze user-space across a test
https://github.com/ARM-software/lisa/pull/227
Executor learned the "freeze_userspace" conf flag. When this flag is present, LISA uses the devlib freezer to freeze as much of userspace as possible while the experiment workload is executing, in order to reduce system noise.
The Executor example notebook:
https://github.com/ARM-software/lisa/blob/master/ipynb/examples/utils/executor_example.ipynb
gives an example of using this feature.
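As a purely illustrative sketch (the conf dictionary name and the tag value below are hypothetical), the flag sits next to the other per-configuration options, just like the "flags" : "ftrace" entry shown earlier:
python
experiments_conf = {
    "confs" : [
        {
            "tag" : "noise_reduced",       # hypothetical tag
            "flags" : "freeze_userspace",  # freeze non-essential userspace while the workload runs
            "cpufreq" : { "governor" : "performance" },
        }
    ]
}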
Trace module
Tasks name pre-loading
When the Trace module is initialized, by default all the tasks in that trace are identified and exposed via the usual getTasks() method:
End of explanation
"""
!cat $LISA_HOME/libs/utils/platforms/pixel.json
from env import TestEnv
te = TestEnv({
'platform' : 'android',
'board' : 'pixel',
'ANDROID_HOME' : '/home/patbel01/Code/lisa/tools/android-sdk-linux/'
}, force_new=True)
target = te.target
"""
Explanation: Android Support
Added support for Pixel Phones
A new platform definition file has been added which allows you to easily set up
a connection with a Pixel device:
End of explanation
"""
!tree -L 1 ~/Code/lisa/ipynb
"""
Explanation: Added UiBench workload
A new Android benchmark has been added to run UiBench-provided tests.
Here is a notebook which provides an example of how to run this test on your
Android target:
https://github.com/ARM-software/lisa/blob/master/ipynb/examples/android/benchmarks/Android_UiBench.ipynb
Tests
Initial version of the preliminary tests
Preliminary tests aim at verifying some basic support required for a
complete functional EAS solution.
An initial version of these preliminary tests is now available:
https://github.com/ARM-software/lisa/blob/master/tests/eas/preliminary.py
and it will be extended in the future to include more and more tests.
Capacity capping test
A new test has been added to verify that capacity capping is working
as expected:
https://github.com/ARM-software/lisa/blob/master/tests/eas/capacity_capping.py
Acceptance tests reworked
The EAS acceptance test collects a set of platform-independent tests to verify
basic EAS behaviours.
This test has been cleaned up and it is now available with detailed documentation:
https://github.com/ARM-software/lisa/blob/master/tests/eas/acceptance.py
Notebooks
Added scratchpad notebooks
A new scratchpad folder has been added under the ipynb folder which collects the available notebooks:
End of explanation
"""
!tree -L 1 ~/Code/lisa/ipynb/examples
"""
Explanation: This folder is configured to be ignored by git, thus it's the best place to place your work-in-progress notebooks.
Example notebook restructuring
Example notebooks have been consolidated and better organized by topic:
End of explanation
"""
|
AC209ConsumerConfidence/AC209ConsumerConfidence.github.io | DynamicVAR_Final.ipynb | gpl-3.0 | # Load Datasets
dateparse = lambda dates: pd.datetime.strptime(dates, '%Y-%m')
cci = pd.read_csv('Economic_Sentiment_Forecast/CCI.csv', parse_dates=True, index_col='TIME',date_parser=dateparse)
cci = cci["Value"]
cci.columns = ["CCI"]
cci = cci['1990-01-01':]
dateparse = lambda dates: pd.datetime.strptime(dates, '%Y-%m-%d')
df2 = pd.read_csv('./AC209_Project_data/daily_sentiment_1990_2016.csv', parse_dates=True, index_col='date',date_parser=dateparse)
df2 = df2[["avg_score"]]
df2.columns = ["value"]
sen = df2[df2.index.day < 32].resample('MS').mean()
sen.columns = ["SEN"]
df = pd.concat([cci, sen], axis=1)
df.columns = ['CCI', 'SEN']
df = df.ix['1990-01-01':'2016-08-01']
"""
Explanation: Incorporating lags of the SWN scores and LDA topic scores as predictors in a Dynamic Vector Autoregression (VAR) Model.
So far, we have only considered the same month values of the SWN and LDA scores as exogenous controls in ARIMA models trying to predict the CCI values. However, the latter might be best explained by controlling for lagged values of the explanatory variables. Before including these explicitly in our machine learning models, we test out the value of setting up Vector Autoregression (VAR).
VAR models are a popular method in time-series econometrics that estimate all variables as endogenous responses to the lagged values of all the variables. Hence, this system-of-equations approach is useful for estimating impulse responses between different variables, capturing potential general-equilibrium and dynamic relationships between them. Moreover, these models can be set up so as to make roll-forward predictions, avoiding potential overfitting. This roll-forward specification is called the Dynamic VAR, and we take the R2 of its predictions as our measure of the model's predictive quality.
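As a point of reference, a standard textbook bivariate VAR(p) in the two (differenced) series can be written as the following system, where the u terms are the reduced-form errors:
$$ \Delta CCI_t = c_1 + \sum_{i=1}^{p} a_{1i}\,\Delta CCI_{t-i} + \sum_{i=1}^{p} b_{1i}\,\Delta SEN_{t-i} + u_{1,t} $$
$$ \Delta SEN_t = c_2 + \sum_{i=1}^{p} a_{2i}\,\Delta CCI_{t-i} + \sum_{i=1}^{p} b_{2i}\,\Delta SEN_{t-i} + u_{2,t} $$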
In this Notebook, we first set up a Dynamic VAR model of the CCI score and the SWN score and evaluate its predictive power. Then, we set up a final Dynamic VAR model with the CCI, SWN and LDA Topics and evaluate its predictive quality.
Summary:
Roll forward predictions on the last year of CCI data do not improve with either of these models.
1) Dynamic VAR - CCI + SWN:
Given that this model works with lagged values as predictors, we see no value in working only with the data of the first 15 days for prediction purposes. Hence, we work with the SWN data for the full month and not just the first two weeks of data.
End of explanation
"""
def test_stationarity(timeseries):
#Determining rolling statistics
rolmean = pd.rolling_mean(timeseries, window=12)
rolstd = pd.rolling_std(timeseries, window=12)
#Plot rolling statistics:
fig, ax = plt.subplots(1,1, figsize=(20, 5))
ax = plt.plot(timeseries, color='blue',label='Original')
ax = plt.plot(rolmean, color='red', label='Rolling Mean')
#ax = plt.plot(rolstd, color='black', label = 'Rolling Std')
#ax = plt.plot(timeseries-rolmean, color='green', label = 'Noise')
ax = plt.legend(loc='best')
ax = plt.title('CCI and its Rolling Mean')
plt.show(block=False)
#Perform Dickey-Fuller test:
for j in ['nc', 'c', 'ct']:
print ''
print 'Results of Dickey-Fuller Test (Reg. {}):'.format(j)
dftest = tsa.adfuller(timeseries, autolag='AIC', regression = j)
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print dfoutput
def is_unit_root(data, threshold="5%"):
nc = tsa.adfuller(data, regression="nc")
c = tsa.adfuller(data, regression="c")
ct = tsa.adfuller(data, regression="ct")
votes = 0
for test in [nc, c, ct]:
if(abs(test[0]) < abs(test[4][threshold])):
votes += 1
return votes >= 2, {"nc": nc, "c":c, "ct":ct}
print 'We cannot reject null hypothesis that CCI is Unit Root: '
test_stationarity(df['CCI'])
print ''
print ''
print 'We cannot reject null hypothesis that SWN is Unit Root: '
test_stationarity(df['SEN'])
"""
Explanation: Test stationarity of CCI and SWN
End of explanation
"""
dfs = pd.concat([cci.diff(), sen.diff()], axis=1)
dfs.columns = ['CCI_d', 'SEN_d']
dfs = dfs.ix['1990-01-02':'2016-08-01']
"""
Explanation: Surprisingly, we cannot reject the null hypothesis that the SWN series for the full month follows a random walk, while we were able to confidently do so with the data for the first 15 days. For this reason, we will work with both series in one-month differences for our Dynamic VAR models.
End of explanation
"""
# Define application of VAR to relevant dataset
var = tsa.VAR(dfs)
# Select lag order by information criteria
var.select_order(10)
"""
Explanation: Lag order specification
End of explanation
"""
# Fit VAR model with 5 lags (simpler model of those identified)
result = var.fit(5)
print 'Results Summary of the VAR model: '
result.summary()
# Forecasting
lag = result.k_ar
fc = result.forecast(df.values[-lag:], 36)
# Impulse Response Function
irf = result.irf(36)
irf.plot()
# Cumulative Impulse Response Function
irf.plot_cum_effects(orth=False)
"""
Explanation: From the information criteria outlined above for lag order selection, it seems that the most prevalent lag order is 5. Hence, we proceed with the VAR Model with this specification.
Initial Results
End of explanation
"""
# Granger Causality
print ''
print 'SEN Granger Causes CCI_d?'
a = result.test_causality('CCI_d', 'SEN_d')
print ''
print 'CCI_d Granger Causes SEN_d?'
a = result.test_causality('SEN_d', 'CCI_d')
"""
Explanation: Interestingly, a positive shock on the SWN score is associated with a negative change in the CCI score. There seems to be no statistically significant effect in the inverse direction for the earliest lags, but lags accumulate in a way that a statistically significant effect on the SWN score is noted.
End of explanation
"""
# Dynamic Specification of the VAR Model
var = tsa.DynamicVAR(dfs, lag_order=5, window_type='expanding')
print 'Coefficients for each lag in each of the two models: '
var.coefs.major_xs(datetime(2015, 11, 01)).T
preds = var.forecast(1)
var.plot_forecast(1)
raw_preds = preds['CCI_d']
cci_n = cci['1991-07-01':]
sum_preds = raw_preds.cumsum()
pred = pd.Series(cci_n.ix[0], index=cci_n.index)
pred = pred.add(sum_preds, fill_value=0)
from sklearn.linear_model import LinearRegression as LR
x = np.asarray(pred[-12:]).reshape(12, 1)
y = np.asarray(cci_n[-12:]).reshape(12, 1)
lr = LR().fit(x, y)
a = lr.score(x, y)
plt.figure(figsize = (15,5))
plt.plot(cci_n)
plt.plot(pred, color = 'r')
plt.title('CCI: Actual vs. Dynamic VAR Prediction \n R$^2$ on last year of data = {}'.format(a))
plt.legend(['Actual', 'Dynamic VAR Prediction'], loc='best')
plt.tight_layout()
plt.show()
"""
Explanation: The model suggests that both variables 'Granger' cause each other, meaning that the null hypotheses that shocks in one do not affect the other can be confidently rejected.
Specifying the Dynamic VAR Model
With the same lag order, we now estimate the rolling-forward models to obtain out-of-sample predictions.
End of explanation
"""
# Load datasets
top = np.load('topicsByMonthBigrams8.npy')
top = pd.DataFrame(data = top, index = sen.index)
top.columns = ['T1','T2','T3','T4','T5','T6','T7','T8']
top1 = np.load('topicsByMonthBigrams8.npy')
top1 = pd.DataFrame(data = top1, index = sen.index)
top1.columns = ['T1','T2','T3','T4','T5','T6','T7','T8']
top1 = top1 - top1.shift(12)
top1.dropna(inplace=True)
top2 = np.load('topicsByMonthBigrams8.npy')
top2 = pd.DataFrame(data = top2, index = sen.index)
top2.columns = ['T1','T2','T3','T4','T5','T6','T7','T8']
for i in ['T1','T2','T3','T4','T5','T6','T7','T8']:
temp1 = tsa.seasonal_decompose(top2[i].values, freq = 12)
top2[i] = temp1.trend + temp1.resid
top2.dropna(inplace=True)
dfs = pd.concat([cci, sen], axis = 1)
dfs = dfs - dfs.shift(1)
dfs = pd.concat([dfs, top1], axis = 1)
dfs.dropna(inplace=True)
dfs.columns = ['CCI', 'SEN', 'T1','T2','T3','T4','T5','T6','T7','T8']
"""
Explanation: In contrast to the simple ARIMA and ARIMA + SWN, this model performs worse at predicting the last 12 months of CCI data.
2) Dynamic VAR - CCI + SWN + LDA Topics:
We now include the LDA topics in the VAR setting. As was discussed in previous notebooks, the LDA topics are not only non-stationary but also exhibit a strong seasonal component. For this reason, we include the 12-month differences of the LDA topics in our VAR model.
End of explanation
"""
# Define application of VAR to relevant dataset
var = tsa.VAR(dfs)
# Select lag order by information criteria
var.select_order(12)
"""
Explanation: Lag Order Specification
End of explanation
"""
# Dynamic Specification of the VAR Model
var = tsa.DynamicVAR(dfs, lag_order=2, window_type='expanding')
preds = var.forecast(1)
raw_preds = preds['CCI']
cci_n = cci['1993-01-01':]
sum_preds = raw_preds.cumsum()
pred = pd.Series(cci_n.ix[0], index=cci_n.index)
pred = pred.add(sum_preds, fill_value=0)
from sklearn.linear_model import LinearRegression as LR
x = np.asarray(pred[-12:]).reshape(12, 1)
y = np.asarray(cci_n[-12:]).reshape(12, 1)
lr = LR().fit(x, y)
a = lr.score(x, y)
plt.figure(figsize = (15,5))
plt.plot(cci_n)
plt.plot(pred, color = 'r')
plt.title('CCI: Actual vs. Dynamic VAR Prediction \n R$^2$ for the last year of data = {}'.format(a))
plt.legend(['Actual', 'Dynamic VAR Prediction'], loc='best')
plt.tight_layout()
plt.show()
"""
Explanation: We observe 2 information criteria pointing to an optimal lag order of 2, while 2 other information criteria point to an optimal lag order of 12. We opt to select the simpler model with a lag order of 2. Given the very high number of regressors in this specification, we do not show the summary of results, the impulse-response and cumulative impulse-response graphs, or other preliminary results for the dynamic VAR specification.
Specifying the Dynamic VAR Model
End of explanation
"""
|
AaronCWong/phys202-2015-work | assignments/assignment09/IntegrationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
"""
Explanation: Integration Exercise 1
Imports
End of explanation
"""
def trapz(f, a, b, N):
h = (b - a)/N
i = np.arange(1,N)
c = h*(0.5*f(a)+f(b)*0.5+f(a+i*h).sum())
return c
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
"""
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
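Summing the areas of the resulting trapezoids gives the composite rule
$$ I(a,b) \approx h \left[ \frac{f(a) + f(b)}{2} + \sum_{i=1}^{N-1} f(a + i h) \right] $$
which is exactly what the trapz function evaluates.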
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
"""
a = integrate.quad(f,0,1)
b = integrate.quad(g,0,np.pi)
print (a)
print (b)
assert True # leave this cell to grade the previous one
"""
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation
"""
|
matousc89/PPSI | podklady/notebooks/funkce_a_tridy.ipynb | mit | def my_function(a, b):
"""
This function sum together two variables (if they are summable).
"""
return a + b
"""
Explanation: Functions and Classes
Functions
The following is an example of defining a function that adds two arguments, a and b.
End of explanation
"""
my_function(2, 5)
my_function("Spam ", "eggs")
my_function([1, 2, "A"], [5, 5.3])
"""
Explanation: The function can be reused for adding different kinds of arguments (numbers, strings, and even lists).
End of explanation
"""
def my_function(arg1, arg2, kwarg1=0, kwarg2=0):
"""
This function accepts two args and two kwargs.
Product is sum of all args and kwargs
"""
return arg1 + arg2 + kwarg1 + kwarg2
my_function(2, 3., kwarg1=1.5, kwarg2=2)
my_function(2, 3., 1.5, 2.)
my_function(2, 3.)
my_function(2, 3., kwarg2=3.)
"""
Explanation: Note: A function does not have to take any arguments and does not have to contain a return statement. A function without return returns None.
Arguments and keyword arguments
Arguments (args) are mandatory. When you call the function, the arguments must be supplied, and their order matters.
Keyword arguments (kwargs) are optional and can have a default value. Their order does not matter.
An example follows in which keyword arguments are also used.
End of explanation
"""
class Example():
def __init__(self):
"""
This is constructor. This function runs during creation.
"""
print("Instance created.")
def __del__(self):
"""
This is something like destructor.
This function runs when the last pointer to the instance is lost.
It is the last will of the instance.
"""
print("Instance deleted")
"""
Explanation: Classes
A class is a blueprint from which instances (objects) can be created. An example of a simple class follows.
End of explanation
"""
f = Example()
f = None
"""
Explanation: An instance of the class can be created as follows. Note that as soon as the reference to the instance is overwritten, the instance is destroyed.
End of explanation
"""
class Food():
def __init__(self, portion_size, unit_mass):
self.portion_size = portion_size # make it accessible from outside
self.unit_mass = unit_mass
self.UNIT = "g"
def get_portion_mass(self):
"""
This function returns mass of the portion with unit as string.
"""
return str(self.portion_size * self.unit_mass) + " " + self.UNIT
"""
Explanation: The del function (the destructor) is rarely used. The init function (the constructor), however, is the most common way to initialize an instance's variables or behaviour. In the following example the constructor takes two arguments, which are stored so that they are also accessible to the other functions of the instance.
End of explanation
"""
f = Food(10, 30) # create food with specific parameters
f.get_portion_mass() # get mass of a single portion
"""
Explanation: The following example shows how to pass arguments to the constructor and how to call a function defined on the resulting instance.
End of explanation
"""
class Fruit(Food):
def __init__(self, portion_size, unit_mass, sweetness=0):
super(self.__class__, self).__init__(portion_size, unit_mass)
self.sweetness = sweetness
class Vegetable(Food):
def __init__(self, portion_size, unit_mass, is_green=False):
super(self.__class__, self).__init__(portion_size, unit_mass)
self.is_green = is_green
"""
Explanation: Inheritance
An example of using inheritance follows.
End of explanation
"""
apple = Fruit(10, 30, 50)
apple.sweetness
apple.get_portion_mass()
"""
Explanation: Classes derived in this way have all the variables and functions of the original Food class.
End of explanation
"""
|
sujitpal/polydlot | src/tensorflow/04-mnist-rnn.ipynb | apache-2.0 | from __future__ import division, print_function
from tensorflow.contrib import keras
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
%matplotlib inline
DATA_DIR = "../../data"
TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv")
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
LOG_DIR = os.path.join(DATA_DIR, "tf-mnist-rnn-logs")
MODEL_FILE = os.path.join(DATA_DIR, "tf-mnist-rnn")
IMG_SIZE = 28
LEARNING_RATE = 0.001
BATCH_SIZE = 128
NUM_CLASSES = 10
NUM_EPOCHS = 5
"""
Explanation: MNIST Digit Classification - RNN
End of explanation
"""
def parse_file(filename):
xdata, ydata = [], []
fin = open(filename, "rb")
i = 0
for line in fin:
if i % 10000 == 0:
print("{:s}: {:d} lines read".format(
os.path.basename(filename), i))
cols = line.strip().split(",")
ydata.append(int(cols[0]))
xdata.append(np.reshape(
np.array([float(x) / 255. for x in cols[1:]]),
(IMG_SIZE, IMG_SIZE)))
i += 1
fin.close()
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
y = np.array(ydata).astype("float32")
X = np.array(xdata).astype("float32")
return X, y
Xtrain, ytrain = parse_file(TRAIN_FILE)
Xtest, ytest = parse_file(TEST_FILE)
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
def datagen(X, y, batch_size=BATCH_SIZE, num_classes=NUM_CLASSES):
ohe = OneHotEncoder(n_values=num_classes)
while True:
shuffled_indices = np.random.permutation(np.arange(len(y)))
num_batches = len(y) // batch_size
for bid in range(num_batches):
batch_indices = shuffled_indices[bid*batch_size:(bid+1)*batch_size]
Xbatch = np.zeros((batch_size, X.shape[1], X.shape[2]),
dtype="float32")
Ybatch = np.zeros((batch_size, num_classes), dtype="float32")
for i in range(batch_size):
Xbatch[i] = X[batch_indices[i]]
Ybatch[i] = ohe.fit_transform(y[batch_indices[i]]).todense()
yield Xbatch, Ybatch
self_test_gen = datagen(Xtrain, ytrain)
Xbatch, Ybatch = self_test_gen.next()
print(Xbatch.shape, Xbatch.dtype, Ybatch.shape, Ybatch.dtype)
"""
Explanation: Prepare Data
End of explanation
"""
X = tf.placeholder(tf.float32, [BATCH_SIZE, IMG_SIZE, IMG_SIZE], name="X")
Y = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_CLASSES], name="Y")
# Current data input shape: (batch_size, n_steps, n_input)
# Required shape: 'n_steps' tensors list of shape (batch_size, n_input)
x = tf.unstack(X, IMG_SIZE, 1)
lstm_cell = tf.contrib.rnn.BasicLSTMCell(512)
lstm_outputs, lstm_states = tf.contrib.rnn.static_rnn(lstm_cell, x,
dtype=tf.float32)
# # Linear activation, using rnn inner loop last output
lstm_ctx = lstm_outputs[-1]
# dropout
H1 = tf.nn.dropout(lstm_ctx, 0.2)
# Fully connected layer
W = tf.Variable(tf.truncated_normal(shape=[512, NUM_CLASSES],
stddev=0.01))
b = tf.Variable(tf.zeros(shape=[NUM_CLASSES]))
logits = tf.add(tf.matmul(H1, W), b)
Y_ = tf.nn.softmax(logits)
# softmax_cross_entropy_with_logits expects unscaled logits, so pass the raw logits
# here rather than the already-softmaxed Y_
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss)
correct_preds = tf.equal(tf.argmax(Y_, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_preds, tf.float32))
tf.summary.scalar("loss", loss)
tf.summary.scalar("accuracy", accuracy)
# Merge all summaries into a single op
summary_op = tf.summary.merge_all()
"""
Explanation: Define Network
End of explanation
"""
history = []
sess = tf.Session()
with sess.as_default():
# shuts off "harmless warning", see
# https://github.com/tensorflow/tensorflow/issues/9939
if (tf.get_collection_ref("LAYER_NAME_UIDS") is not None and
len(tf.get_collection_ref("LAYER_NAME_UIDS")) > 0):
del tf.get_collection_ref('LAYER_NAME_UIDS')[0]
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
logger = tf.summary.FileWriter(LOG_DIR, sess.graph)
train_gen = datagen(Xtrain, ytrain, BATCH_SIZE)
num_batches = len(Xtrain) // BATCH_SIZE
for epoch in range(NUM_EPOCHS):
total_loss, total_acc = 0., 0.
for bid in range(num_batches):
# train
Xbatch, Ybatch = train_gen.next()
_, batch_loss, batch_acc, Ybatch_, summary = sess.run(
[optimizer, loss, accuracy, Y_, summary_op],
feed_dict={X: Xbatch, Y:Ybatch})
# write to tensorboard
logger.add_summary(summary, epoch * num_batches + bid)
# accumulate for reporting
total_loss += batch_loss
total_acc += batch_acc
total_loss /= num_batches
total_acc /= num_batches
print("Epoch {:d}/{:d}: loss={:.3f}, accuracy={:.3f}".format(
(epoch + 1), NUM_EPOCHS, total_loss, total_acc))
saver.save(sess, MODEL_FILE, (epoch + 1))
history.append((total_loss, total_acc))
logger.close()
losses = [x[0] for x in history]
accs = [x[1] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs)
plt.subplot(212)
plt.title("Loss")
plt.plot(losses)
plt.tight_layout()
plt.show()
"""
Explanation: Train Network
End of explanation
"""
BEST_MODEL = os.path.join(DATA_DIR, "tf-mnist-rnn-5")
saver = tf.train.Saver()
ys, ys_ = [], []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, BEST_MODEL)
test_gen = datagen(Xtest, ytest, BATCH_SIZE)
val_loss, val_acc = 0., 0.
num_batches = len(Xtrain) // BATCH_SIZE
for _ in range(num_batches):
Xbatch, Ybatch = test_gen.next()
Ybatch_ = sess.run(Y_, feed_dict={X: Xbatch, Y:Ybatch})
ys.extend(np.argmax(Ybatch, axis=1))
ys_.extend(np.argmax(Ybatch_, axis=1))
acc = accuracy_score(ys_, ys)
cm = confusion_matrix(ys_, ys)
print("Accuracy: {:.4f}".format(acc))
print("Confusion Matrix")
print(cm)
"""
Explanation: Visualize with Tensorboard
We have also requested the total_loss and total_accuracy scalars to be logged in our computational graph, so the above charts can also be seen from the built-in tensorboard tool. The scalars are logged to the directory given by LOG_DIR, so we can start the tensorboard tool from the command line:
$ cd ../../data
$ tensorboard --logdir=tf-mnist-rnn-logs
Starting TensorBoard 54 at http://localhost:6006
(Press CTRL+C to quit)
We can then view the [visualizations on tensorboard] (http://localhost:6006)
Evaluate Network
End of explanation
"""
|
ivukotic/ML_platform_tests | tutorial/jupyter python numpy plotting/4_Plotting_Basics.ipynb | gpl-3.0 | # imports referred to in the text below; numpy and pyplot are used as np and pl throughout
%matplotlib inline
import numpy as np
import matplotlib.pyplot as pl

fig, ax = pl.subplots(2,2, figsize=(8,6))
fig
ax
ax[0,0]
"""
Explanation: We start by importing NumPy, which you should be familiar with from the previous tutorial. The next library introduced is called matplotlib, which is roughly the Python equivalent of Matlab's plotting functionality. Think of it as a Mathematical Plotting Library.
Let's use NumPy to create a Gaussian distribution and then plot it.
End of explanation
"""
# create x values from [0,99)
x = np.arange(100)
x
# generate y values based on a Gaussian PDF
y1 = np.random.normal(loc=0.0, scale=1.0, size=x.size) # mu=0.0, sigma=1.0
y2 = np.random.normal(loc=2.0, scale=2.0, size=x.size) # mu=1.0, sigma=2.0
y3 = np.random.normal(loc=-2.0, scale=0.5, size=x.size)# mu=-1.0, sigma=0.5
y1[:20] # just show the first 20 as an example
"""
Explanation: We make a figure object that allows us to draw things inside of it. This is our canvas which lets us save the entire thing as an image or a PDF to our computer.
We also split this canvas into a 2x2 grid and tell matplotlib that we want 4 axes objects. Each axes object is a separate plot that we can draw into. For the purposes of the exercise, we'll demonstrate the different linestyles in each subplot. The ordering sets [0,0] as the top-left and [n,m] as the bottom-right. As this returns a 2D array, you access each axis with ax[i,j] notation.
End of explanation
"""
for axis, linestyle in zip(ax.reshape(-1), ['-', '--', '-.', ':']):
axis.plot(x, y1, color="red", linewidth=1.0, linestyle=linestyle)
axis.plot(x, y2, color="blue", linewidth=1.0, linestyle=linestyle)
axis.plot(x, y3, color="green", linewidth=1.0, linestyle=linestyle)
axis.set_title('line style: '+linestyle)
axis.set_xlabel("$x$")
axis.set_ylabel("$e^{-\\frac{(x-\\mu)^2}{2\\sigma}}$")
"""
Explanation: Now, for each axis, we want to draw one of the four different example linestyles so you can get an idea of how this works.
End of explanation
"""
fig
"""
Explanation: You can see that we use ax.reshape(-1) which flattens our axes object, so we can just loop over all 4 entries without nested loops, and we combine this with the different linestyles we want to look at: ['-', '--', '-.', ':'].
So for each axis, we plot y1, y2, and y3 with different colors for the same linestyle and then set the title. Let's look at the plots we just made:
End of explanation
"""
pl.tight_layout() # a nice command that just fixes overlaps
fig
pl.clf() # clear current figure
"""
Explanation: But as a perfectionist, I dislike that things look like they overlap... let's fix this using matplotlib.tight_layout()
End of explanation
"""
data_2d = np.random.multivariate_normal([10, 5], [[9,3],[3,18]], size=1000000)
"""
Explanation: Sharing Axes
A nice example to demonstrate another feature of NumPy and Matplotlib together for analysis and visualization is to make one of my favorite kinds of plots
End of explanation
"""
pl.hist2d(data_2d[:, 0], data_2d[:,1])
pl.show()
"""
Explanation: Draw size=1000000 random samples from a multivariate normal distribution. We first specify the means, [10, 5], and then the covariance matrix of the distribution, [[9,3],[3,18]]. What does this look like?
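(With this covariance matrix the marginal standard deviations are $\sqrt{9}=3$ and $\sqrt{18}\approx 4.2$, and the correlation between the two coordinates is $3/\sqrt{9\cdot 18}\approx 0.24$, a mild positive correlation that tilts the point cloud slightly.)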
End of explanation
"""
pl.hist2d(data_2d[:, 0], data_2d[:, 1], bins=100)
pl.show()
"""
Explanation: Oh, that looks weird, maybe we should increase the binning.
End of explanation
"""
fig, ax = pl.subplots()
ax.hist(data_2d[:,0], bins=100, color="red", alpha=0.5) # draw x-histogram
ax.hist(data_2d[:,1], bins=100, color="blue", alpha=0.5) # draw y-histogram
pl.show()
pl.clf()
"""
Explanation: And we can understand the underlying histograms that lie alone each axis.
End of explanation
"""
fig, ax = pl.subplots(2,2, sharex='col', sharey='row', figsize=(10,10))
# draw x-histogram at top-left
ax[0,0].hist(data_2d[:,0], bins=100, color="red") # draw x-histogram
# draw y-histogram at bottom-right
ax[1,1].hist(data_2d[:,1], bins=100, color="blue",orientation="horizontal")
# draw 2d histogram at bottom-left
ax[1,0].hist2d(data_2d[:, 0], data_2d[:, 1], bins=100)
# delete top-right
fig.delaxes(ax[0,1])
fig
"""
Explanation: Now let's combine the these plots in a way that teaches someone what a 2D histogram represents along each dimension. In order to get our histogram for the y-axis "rotated", we just need to specify a orientiation='horizontal' when drawing the histogram.
End of explanation
"""
pl.subplots_adjust(wspace=0, hspace=0)
fig
"""
Explanation: But again, I am not a huge fan of the whitespace between subplots, so I run the following
End of explanation
"""
|
mgalardini/2017_python_course | notebooks/6-Useful_third_party_libraries_for_data_analysis.ipynb | gpl-2.0 | # setup.py example
# %%bash
# wget https://github.com/biopython/biopython/archive/biopython-168.tar.gz
# tar -xvf biopython-168.tar.gz
# cd biopython-168.tar.gz
# sudo python setup.py install
# using pip
# !pip install biopython
# using anaconda
# !conda install biopython
# using ap-get
# !sudo apt-get install python-biopython python3-biopython
"""
Explanation: Useful third-party libraries numpy, scipy, biopython and networkx
How to install third-party libraries
There are basically two ways to install third party libraries:
using the provided setup.py file
using python software managers like pip
using software managers like anaconda (binaries) or linuxbrew/homebrew
using your operating system package managers (e.g. apt-get for debian-based linux distributions)
End of explanation
"""
import numpy as np
# simple array creation
a = np.array([2,3,4])
a
# what type is my array?
a.dtype
b = np.array([1.2, 3.5, 5.1])
b
b.dtype
"""
Explanation: NumPy: array operations in python
You already encountered numpy as the "engine" that powers pandas. It is also used by many other third-party libraries to allow fast computation on arrays (e.g. scipy or scikit-learn).
The most important feature of NumPy is the ndarray (n-dimensional array), which is a multidimensional matrix with a fixed element type. This allows faster computation and more intuitive array manipulations.
End of explanation
"""
b.shape
"""
Explanation: We can check the dimensions of the array by using the shape method
End of explanation
"""
# like range, but returns an array
np.arange(10, step=0.2)
# evenly-spaced numbers
np.linspace(0, 10, num=20)
# evenly spaced numbers on a logaritmic space (base 2)
np.logspace(0, 10, num=20, base=2)
np.zeros(10)
# multidimensional
z = np.zeros((10, 5))
z
z.shape
np.ones((10, 5))
"""
Explanation: Numpy offers several array constructors that can be quite handy.
End of explanation
"""
a = np.array((np.linspace(0, 10, num=10),
np.logspace(0, 10, num=10),))
a
a.shape
# equivalent to a.transpose(), but more concise
a.T
a.T.shape
a.mean()
np.std(a)
np.median(a)
"""
Explanation: Some of the operations you applied to dataframes apply to numpy arrays too
End of explanation
"""
# rows/columns
a.reshape(4, 5)
# rows/columns
a.reshape(6, 5)
"""
Explanation: We can also cahnge the dimension of the array
End of explanation
"""
b = np.array((np.linspace(0, 10, num=5),
np.logspace(0, 10, num=5),))
a + b
b = np.array((np.linspace(0, 10, num=10),
np.logspace(0, 10, num=10),))
a + b
a - b
a**2
a > 5
a[a > 5]
# element-wise product
a * b
a.shape
# matrix product
a = np.random.random((3, 3))
b = np.random.random((3, 3))
np.dot(a, b)
"""
Explanation: Mathematical operations on arrays are quite simple, but the shapes must be compatible: adding a (2, 10) array to a (2, 5) array fails, which is why b is redefined with matching dimensions.
End of explanation
"""
a = np.random.random((3, 5))
# get row 1 (2nd row)
a[1]
# get column 1 (2nd column)
a[:,1]
# three-dimensional array
a = np.random.random((3, 5, 2))
a
a[1, 1:4, 1]
"""
Explanation: Indexing and slicing
End of explanation
"""
from scipy import stats
# get samples from a normal distribution
# loc: mean
# scale: std
n = stats.norm.rvs(loc=0, scale=1, size=100)
n
stats.normaltest(n)
n1 = stats.norm.rvs(loc=0, scale=1, size=100)
n2 = stats.norm.rvs(loc=0.5, scale=1, size=100)
# ttest
stats.ttest_ind(n1, n2)
# Kolmogorov-Smirnoff test
stats.ks_2samp(n1, n2)
table = [[1, 15],
[10, 20]]
stats.fisher_exact(table)
table = [[1, 15],
[10, 20]]
stats.fisher_exact(table, alternative='less')
from scipy.spatial import distance
a = np.random.random((3, 5))
a
distance.pdist(a, metric='canberra')
distance.squareform(distance.pdist(a, metric='canberra'))
"""
Explanation: SciPy: scientific python
SciPy is a very large library of scientific computing and statistics routines that operate on NumPy arrays.
The modules contained in this library are the following:
Special functions (scipy.special)
Integration (scipy.integrate)
Optimization (scipy.optimize)
Interpolation (scipy.interpolate)
Fourier Transforms (scipy.fftpack)
Signal Processing (scipy.signal)
Linear Algebra (scipy.linalg)
Sparse Eigenvalue Problems with ARPACK
Compressed Sparse Graph Routines (scipy.sparse.csgraph)
Spatial data structures and algorithms (scipy.spatial)
Statistics (scipy.stats)
Multidimensional image processing (scipy.ndimage)
File IO (scipy.io)
Weave (scipy.weave)
There are clearly too many to go through them all, but it is worth highlighting the statistics (stats) and spatial modules.
End of explanation
"""
from Bio import SeqIO
s = SeqIO.read('../data/proteome.faa', 'fasta')
sequences = SeqIO.parse('../data/proteome.faa', 'fasta')
sequences
for s in sequences:
print(s.id)
break
type(s)
dir(s)
s.description
"""
Explanation: Biopython: the swiss-army-knife library for bioinformatics
Biopython (http://biopython.org/wiki/Biopython) is a collection of libraries to manipulate files related to computational biology, from sequence data to pdb files. It allows the conversion between formats and even the interrogation of commonly used databases, such as NCBI and KEGG.
Sequence manipulations
Biopython uses a complex series of objects to represent biological sequences: SeqRecord, Seq and so on. In most cases the user is not expected to create a sequence but to read it, so learning how to manipulate sequences is relatively easy.
When a sequence is read from a file it comes as a SeqRecord object, which can handle annotations on top of a sequence.
As in many biopython modules, parsing can be done either through the parse or the read method. The first one acts as an iterator, which means that it can be used in a for loop to access one sequence at a time. The latter is used when the file contains one and only one record.
End of explanation
"""
# first 10 aminoacids
s[:10]
s[:10].seq
str(s[10:20].seq)
"""
Explanation: Sequence objects can be sliced as strings; the actual sequence can be found under the attribute seq.
End of explanation
"""
# a quick look at how a GenBank file looks
!head ../data/ecoli.gbk
# a quick look at how a GenBank file looks
!tail ../data/ecoli.gbk
s = SeqIO.read('../data/ecoli.gbk', 'genbank')
SeqIO.write(s, 'ecoli.fasta', 'fasta')
# a quick check of the result
!head ecoli.fasta
!tail ecoli.fasta
"""
Explanation: Sequence formats conversion
We are going to take a genome in genbank format (https://www.ncbi.nlm.nih.gov/Sitemap/samplerecord.html) and convert it to the much simpler Fasta format.
End of explanation
"""
# forward
str(s[1000:2000].seq)
# reverse complement
str(s[1000:2000].reverse_complement().seq)
"""
Explanation: Sequence manipulation example
End of explanation
"""
# first four features of this genbank file
for feat in s.features[:5]:
print(feat)
type(feat)
"""
Explanation: GenBank format features extraction
End of explanation
"""
feat.location.start, feat.location.end, feat.strand
feat.qualifiers
# we can also translate the original sequence
s[feat.location.start:feat.location.end].seq.translate()
# we can also translate the original sequence (let's remove the last codon)
s[feat.location.start:feat.location.end-3].seq.translate()
"""
Explanation: Features (more properly, SeqFeature objects) contain all the information related to an annotation that belongs to a sequence. The most notable attributes are position, strand and qualifiers.
End of explanation
"""
s.seq, type(s.seq)
s.seq.alphabet, type(s.seq.alphabet)
"""
Explanation: SeqRecord as a prime example of OOP in python
As you might have noticed, parsing sequence data in python involves storing a variety of information, not only the sequence name and the sequence itself. This is especially true for annotation-rich formats such as the Genbank format (or its cousing, the GFF format).
In order to flexibly expose those useful annotations whenever a sequence file is parsed, Biopython uses the SeqRecord object. As you have seen it has several attributes with "standard" types (id, description, ...), a series of methods (reverse_complement, ...) and more interestingly, a series of attributes that are instances of other BioPython objects.
End of explanation
"""
from Bio import Alphabet
dir(Alphabet)
help(Alphabet.Alphabet)
help(Alphabet.SingleLetterAlphabet)
help(Alphabet.NucleotideAlphabet)
help(Alphabet.ThreeLetterProtein)
"""
Explanation: The Bio.Alphabet submodule contains a series of alphabets to build biological sequences. This ensures that no forbidden characters are used when building a sequence. Since the alphabet is an abstract concept, it lends itself to being inherited and extended by subclasses.
End of explanation
"""
(s.seq, type(s.seq))
dir(s.seq)
"""
Explanation: The Bio.Seq.Seq object is a lower level representation of the biological sequence, meant to represent the sequence itself and its name only. It also features several utility functions, such as reverse_complement, transcribe, translate, and so on...
End of explanation
"""
s.features[:10]
feat = s.features[2]
feat
dir(feat)
"""
Explanation: As you can see, the Seq class seems to be an extension of the string class, with which it shares several methods and attributes.
End of explanation
"""
feat.location.start, type(feat.location.start)
from Bio import SeqFeature
help(SeqFeature.ExactPosition)
feat.location
"""
Explanation: The SeqFeature class is also a very interesting example: apart from having a series of "regular" attributes (id, qualifiers, ...), it also contains instances of biopython-defined classes. The most interesting one is probably the location attribute, which contains a complex representation of the position of the feature inside the parent sequence. This is because locations are not always exactly defined (biology is messier than informatics!).
End of explanation
"""
# fetch pdb file
!wget http://www.rcsb.org/pdb/files/1g59.pdb
from Bio.PDB.PDBParser import PDBParser
parser = PDBParser()
structure = parser.get_structure('1g59', '1g59.pdb')
header = parser.get_header()
# fetch the structural method and the resolution
print('Method: {0}'.format(header['structure_method']))
print('Resolution: {0}'.format(header['resolution']))
"""
Explanation: If we had to draw an extremely simplified version of the SeqRecord hierarchy, it would look like this:
SeqRecord
|
+---- Alphabet
|
+---- Seq
|
+---- features
|
+---- SeqFeature
|
+---- FeatureLocation
|
+---- position
[...]
Reading PDB files
BioPython also contains a useful module to parse protein 3D structures, again in a variety of formats
End of explanation
"""
model = structure[0]
chain = model['A']
residue = chain[(' ', 1, ' ')]
atom = residue['CE']
chain.id
residue.id[1], residue.resname
atom.name, atom.occupancy, atom.bfactor, atom.coord
"""
Explanation: The returned object (structure) has a complex structure, which follows the structure, model, chain, residue, atom hierarchy (SMCRA).
Structure['1g59']
|
+---- Model[0]
|
+---- Chain['A']
| |
| +---- Residue[' ', 1, ' ']
| | |
| | +---- Atom['N']
| | |
| | +---- [...]
| | |
| | +---- Atom['CE']
| |
| +---- [...]
| |
| +---- Residue[' ', 468, ' '] [...]
|
+---- Chain['B'] [...]
|
+---- Chain['C'] [...]
|
+---- Chain['D'] [...]
|
+---- Chain[' ']
|
+---- Residue['W', 1, ' ']
| |
| +---- Atom['O']
|
+---- [...]
|
+---- Residue['W', 283, ' '] [...]
Q: why do you think that there can be more than one model inside a single PDB file?
End of explanation
"""
from Bio import Phylo
tree = Phylo.read('../data/tree.nwk', 'nexus')
tree = Phylo.read('../data/tree.nwk', 'newick')
dir(tree)
# very simple visualization of a tree
Phylo.draw_ascii(tree)
# get a list of terminal nodes
tree.get_terminals()
"""
Explanation: Read/manipulate phylogenetic trees
The Bio.Phylo module allows us to read, write and manipulate phylogenetic trees, as well as run complex evolutionary analysis software like codeml.
Note: even though Bio.Phylo can be used to draw phylogenetic trees, other libraries such as ete3 are suggested for their great power and versatility.
End of explanation
"""
node = tree.get_terminals()[0]
dir(node)
# distance between root and our node
print('Distance between root and "{0}": {1}'.format(node.name,
tree.distance(tree.root, node)))
# the root can be changed too
tree.root_at_midpoint()
Phylo.draw_ascii(tree)
"""
Explanation: Each bifurcation and terminal node in the tree is a Clade object, with several network-like properties. Most of the attributes and methods are shared with the Tree object.
End of explanation
"""
from Bio import Entrez
Entrez.email = 'your@email.org'
"""
Explanation: Interrogate the NCBI database using Bio.Entrez
The NCBI has a very useful programmatic interface for data retrieval, for which BioPython has a very complex module. Find more information about Entrez here: https://www.ncbi.nlm.nih.gov/books/NBK3837/
End of explanation
"""
r = Entrez.esearch(db='bioproject',
term='PRJNA57779')
h = Entrez.read(r)
h
bioproject_id = h['IdList'][0]
bioproject_id
r = Entrez.elink(dbfrom='bioproject', id=bioproject_id, linkname='bioproject_taxonomy')
h = Entrez.read(r)
h
taxonomy_id = h[0]['LinkSetDb'][0]['Link'][0]['Id']
taxonomy_id
r = Entrez.efetch(db='taxonomy', id=taxonomy_id)
h = Entrez.read(r)
h
"""
Explanation: In this minimal example we are going to link a Bioproject ID to a NCBI taxonomy record. The possibility of the interface are numerous and complex, given that also pubmed and its reach metadata can be reached through Entrez.
End of explanation
"""
handle = Entrez.esearch(db='pubmed',
sort='relevance',
retmax='20',
retmode='xml',
term='Escherichia coli')
results = Entrez.read(handle)
results
handle = Entrez.efetch(db='pubmed',
retmode='xml',
id=results['IdList'][0])
results = Entrez.read(handle)
results
"""
Explanation: If you have to deal with complex analysis involving taxonomy, a better way to go is to use the ete3 library. Have a look at this page for more details.
The Entrez module is also useful for doing literature searches through PubMed.
End of explanation
"""
import networkx as nx
# undirected graph
g = nx.Graph()
# add nodes
g.add_node('eggs', price=2.5)
g.add_node('spam', price=3.1, rating=1)
# add edges
g.add_edge('eggs', 'spam', rating=3)
g.add_edge('steak', 'spam')
# add edges (and implicitly new nodes)
g.add_edge('eggs', 'omelette')
g.add_edge('vanilla', 'fudge')
g.nodes()
g.edges()
# access nodes and edges with a dictionary-like syntax
g.node['eggs']
g['eggs']
g['eggs']['spam']
g['spam']['eggs']
"""
Explanation: NetworkX: Cytoscape-like library
This library collects many well-known algorithms to inspect graphs and network properties.
Graphs are encoded in a dictionary-like way, allowing easy and intuitive parsing. Simple plotting functions are available as well.
End of explanation
"""
nx.degree(g)
nx.betweenness_centrality(g)
nx.edge_betweenness_centrality(g)
for component in nx.connected_components(g):
print(component)
"""
Explanation: All the obvious properties can be easily computed.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
import random
nx.draw_networkx_nodes?
# Generate a series of random graphs
gs = [nx.random_graphs.powerlaw_cluster_graph(n=random.randint(10, 20),
m=random.randint(1, 3),
p=random.random()*0.05)
for x in range(7)]
# Concatenate then in a single graph
# (there might be a more efficient way)
g = gs[0]
for g1 in gs[1:]:
i = max(g.nodes()) + 1
g.add_edges_from([(x+i, y+i) for (x, y) in g1.edges()])
# Calculate nodes and edge properties
# to have something to plot
betw_cent = nx.betweenness.betweenness_centrality(g).values()
edge_betw_cent = nx.edge_betweenness_centrality(g).values()
# Graph layout
graph_pos = nx.layout.fruchterman_reingold_layout(g)
plt.figure(figsize=(9, 9))
# Draw nodes
nx.draw_networkx_nodes(g, graph_pos,
# Node size depends on node degree
node_size=[x*15 for x in nx.degree(g).values()],
# Node color depends on node centrality
node_color=list(betw_cent),
cmap=plt.get_cmap('Blues'),
vmax=max(betw_cent),
vmin=0)
# Draw edges
nx.draw_networkx_edges(g, graph_pos,
# Width depends on edge centrality
width=[x*250 for x in edge_betw_cent],
color='k')
sns.despine(bottom=True, left=True)
plt.xticks([])
plt.yticks([]);
"""
Explanation: Graph visualization example
End of explanation
"""
|
hdesmond/StatisticalMethods | examples/SDSScatalog/CorrFunc.ipynb | gpl-2.0 | %load_ext autoreload
%autoreload 2
import numpy as np
import SDSS
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import copy
# We want to select galaxies, and then are only interested in their positions on the sky.
data = pd.read_csv("downloads/SDSSobjects.csv",usecols=['ra','dec','u','g',\
'r','i','size'])
# Filter out objects with bad magnitude or size measurements:
data = data[(data['u'] > 0) & (data['g'] > 0) & (data['r'] > 0) & (data['i'] > 0) & (data['size'] > 0)]
# Make size cuts, to exclude stars and nearby galaxies, and magnitude cuts, to get good galaxy detections:
data = data[(data['size'] > 0.8) & (data['size'] < 10.0) & (data['i'] > 17) & (data['i'] < 22)]
# Drop the things we're not so interested in:
del data['u'], data['g'], data['r'], data['i'],data['size']
data.head()
Ngals = len(data)
ramin,ramax = np.min(data['ra']),np.max(data['ra'])
decmin,decmax = np.min(data['dec']),np.max(data['dec'])
print Ngals,"galaxy-like objects in (ra,dec) range (",ramin,":",ramax,",",decmin,":",decmax,")"
"""
Explanation: "Spatial Clustering" - the Galaxy Correlation Function
The degree to which objects positions are correlated with each other - "clustered" - is of great interest in astronomy.
We expect galaxies to appear in groups and clusters, as they fall together under gravity: the statistics of galaxy clustering should contain information about galaxy evolution during hierarchical structure formation.
Let's try and measure a clustering signal in our SDSS photometric object catalog.
End of explanation
"""
# !pip install --upgrade TreeCorr
"""
Explanation: The Correlation Function
The 2-point correlation function $\xi(\theta)$ is defined as "the probability of finding two galaxies separated by an angular distance $\theta$ with respect to that expected for a random distribution" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of galaxies.
The simplest possible estimator for this excess probability is just
$\hat{\xi}(\theta) = \frac{DD - RR}{RR}$,
where $DD(\theta) = N_{\rm pairs}(\theta) / N_D(N_D-1)/2$. Here, $N_D$ is the total number of galaxies in the dataset, and $N_{\rm pairs}(\theta)$ is the number of galaxy pairs with separation lying in a bin centered on $\theta$. $RR(\theta)$ is the same quantity computed in a "random catalog," covering the same field of view but with uniformly randomly distributed positions.
We'll use Mike Jarvis' TreeCorr code (Jarvis et al 2004) to compute this correlation function estimator efficiently. You can read more about better estimators starting from the TreeCorr wiki.
End of explanation
"""
random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals), 'dec' : decmin + (decmax-decmin)*np.random.rand(Ngals)})
print len(random), type(random)
"""
Explanation: Random Catalogs
First we'll need a random catalog. Let's make it the same size as the data one.
End of explanation
"""
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
random.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random')
ax[0].set_xlabel('RA / deg')
ax[0].set_ylabel('Dec. / deg')
data.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data')
ax[1].set_xlabel('RA / deg')
ax[1].set_ylabel('Dec. / deg')
"""
Explanation: Now let's plot both catalogs, and compare.
End of explanation
"""
import treecorr
random_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg')
data_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg')
# Set up some correlation function estimator objects:
sep_units='arcmin'
min_sep=0.5
max_sep=10.0
N = 7
bin_size = np.log10(1.0*max_sep/min_sep)/(1.0*N)
dd = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
rr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)
# Process the data:
dd.process(data_cat)
rr.process(random_cat)
# Combine into a correlation function and its variance:
xi, varxi = dd.calculateXi(rr)
plt.figure(figsize=(15,8))
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(np.exp(dd.logr),xi,np.sqrt(varxi),c='blue',linewidth=2)
# plt.xscale('log')
plt.xlabel('$\\theta / {\\rm arcmin}$',fontsize=20)
plt.ylabel('$\\xi(\\theta)$',fontsize=20)
plt.ylim([-0.1,0.2])
plt.grid(True)
"""
Explanation: Estimating $\xi(\theta)$
End of explanation
"""
|
sbenthall/bigbang | examples/experimental_notebooks/EME Diversity Analysis.ipynb | agpl-3.0 | import bigbang.mailman as mailman
import bigbang.process as process
from bigbang.archive import Archive
import pandas as pd
import datetime
from commonregex import CommonRegex
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: This work was done by Harsh Gupta as part of his internship at The Center for Internet & Society India
End of explanation
"""
def filter_messages(df, column, keywords):
filters = []
for keyword in keywords:
filters.append(df[column].str.contains(keyword, case=False))
return df[reduce(lambda p, q: p | q, filters)]
# Get the Archieves
pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options
mlist = mailman.open_list_archives("https://lists.w3.org/Archives/Public/public-html/", archive_dir="./archives")
# The spaces around eme are **very** important otherwise it can catch things like "emerging", "implement" etc
eme_messages = filter_messages(mlist, 'Subject', [' EME ', 'Encrypted Media', 'Digital Rights Management'])
eme_activites = Archive.get_activity(Archive(eme_messages))
eme_activites.sum(0).sum()
# XXX: Bugzilla might also contain discussions
eme_activites.drop("bugzilla@jessica.w3.org", axis=1, inplace=True)
# Remove Dupicate senders
levdf = process.sorted_matrix(eme_activites)
consolidates = []
# gather pairs of names which have a distance of less than 10
for col in levdf.columns:
for index, value in levdf.loc[levdf[col] < 10, col].iteritems():
if index != col: # the name shouldn't be a pair for itself
consolidates.append((col, index))
# Handpick special cases which aren't covered with string matching
consolidates.extend([(u'Kornel Lesi\u0144ski <kornel@geekhood.net>',
u'wrong string <kornel@geekhood.net>'),
(u'Charles McCathie Nevile <chaals@yandex-team.ru>',
u'Charles McCathieNevile <chaals@opera.com>')])
eme_activites = process.consolidate_senders_activity(eme_activites, consolidates)
sender_categories = pd.read_csv('people_tag.csv',delimiter=',', encoding="utf-8-sig")
# match sender using email only
sender_categories['email'] = map(lambda x: CommonRegex(x).emails[0].lower(), sender_categories['name_email'])
sender_categories.index = sender_categories['email']
cat_dicts = {
"region":{
1: "Asia",
2: "Australia and New Zealand",
3: "Europe",
4: "Africa",
5: "North America",
6: "South America"
},
"work":{
1: "Foss Browser Developer",
2: "Content Provider",
3: "DRM platform provider",
4: "Accessibility",
5: "Security Researcher",
6: "Other W3C Empoyee",
7: "Privacy",
8: "None of the above"
}
}
def get_cat_val_func(cat):
"""
Given category type, returns a function which gives the category value for a sender.
"""
def _get_cat_val(sender):
try:
sender_email = CommonRegex(sender).emails[0].lower()
return cat_dicts[cat][sender_categories.loc[sender_email][cat]]
except KeyError:
return "Unknow"
return _get_cat_val
grouped = eme_activites.groupby(get_cat_val_func("region"), axis=1)
print("Emails sent per region\n")
print(grouped.sum().sum())
print("Total emails: %s" % grouped.sum().sum().sum())
print("Participants per region")
for group in grouped.groups:
print "%s: %s" % (group,len(grouped.get_group(group).sum()))
print("Total participants: %s" % len(eme_activites.columns))
"""
Explanation: Encrypted Media Extension Diversity Analysis
Encrypted Media Extension (EME) is a controversial draft standard at the W3C which aims to prevent copyright infringement in digital video but opens the door to a range of issues regarding security, accessibility, privacy and interoperability. This notebook tries to analyze whether the interests of the important stakeholders were well represented in the debate that happened on the public-html mailing list of the W3C.
Methodology
Any email with EME, Encrypted Media or Digital Rights Management in the subject line is considered to be about EME. Each participant is then categorized on the basis of the region of the world they belong to and their employer's interest in the debate. Notes about the participants can be found here.
Region Methodology:
Look up their personal website and social media accounts (Twitter, LinkedIn,
Github) and see if it mentions the country they live in. (This works in most cases.)
If the person's email address uses a country-specific top-level domain, assume that as the country.
If a GitHub profile is available, look up the timezone of the last 5 commits.
For people who have moved from their home country, consider the country where they live now.
Work Methodology
Look up their personal website and social media accounts (Twitter, LinkedIn, Github) and see if it mentions the employer and categorize accordingly.
People who work on Accessibility, Privacy or Security but also fit into one of the first three categories are placed in one of the first three categories. For example, someone who works on privacy at Google will be placed under "DRM platform provider" instead of "Privacy".
If no other category can be assigned, then assign "None of the Above"
Other Notes
Google's position is very interesting: it is a DRM platform provider as a browser manufacturer, but also a content provider through YouTube, and a fair number of Google employees are against EME due to other concerns. I've categorized Christian as a content provider because he works on YouTube, and I've placed everyone else as a DRM platform provider.
End of explanation
"""
grouped = eme_activites.groupby(get_cat_val_func("work"), axis=1)
print("Emails sent per work category")
print(grouped.sum().sum())
print("Participants per work category")
for group in grouped.groups:
print "%s: %s" % (group,len(grouped.get_group(group).sum()))
"""
Explanation: Notice that there is absolutely no one from Asia, Africa or South America. This is important because DRM laws and attitudes towards IP vary considerably across the world.
End of explanation
"""
|
mdiaz236/DeepLearningFoundations | sentiment_network/Sentiment Classification - Mini Project 2.ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
layer_0 = np.zeros((1, vocab_size))
layer_0
word2Index = {}
for i, word in enumerate(vocab):
word2Index[word] = i
word2Index
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent \
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
## Your code here
for word in review.split(" "):
layer_0[0][word2Index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if label == 'POSITIVE':
return 1
else:
return 0
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
|
5agado/data-science-learning | graphics/physarum/Physarum.ipynb | apache-2.0 | import numpy as np
import cupy as cp
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import tqdm
import math
import os
import sys
from pathlib import Path
%matplotlib inline
%load_ext autoreload
%autoreload 2
import Physarum as physarum
from Physarum import Physarum
from ds_utils.sim_utils import named_configs
from ds_utils.video_utils import generate_video, imageio_generate_video
"""
Explanation: Table of Contents
1 Intro
2 Run Test Simulation
2.1 Performances Profiling
3 Parameters Grid Search
Intro
This notebook explores slime mold simulation and visualization. For an introduction to the phenomenon and method see this sagejenson post
End of explanation
"""
width = 400
height = 400
system_shape = (width, height)
init_fun_perlin=lambda shape: physarum.get_perlin_init(shape=shape, n=int(30e4), scale=380)
init_fun_circle=lambda shape: physarum.get_filled_circle_init(n=int(10e4), center=(shape[0]//2,shape[1]//2),
radius=100)
def combined_init(shape):
pop_01 = physarum.get_filled_circle_init(n=int(10e4), center=(shape[0]//2,shape[1]//2), radius=100)
pop_02 = physarum.get_perlin_init(shape=shape, n=int(30e4), scale=80)
return np.concatenate([pop_01, pop_02])
species_a = Physarum(shape=system_shape, horizon_walk=1, horizon_sense=9,
theta_walk=15, theta_sense=10., walk_range=1.,
social_behaviour=0, trace_strength=1,
init_fun=combined_init)
species_b = Physarum(shape=system_shape, horizon_walk=1,horizon_sense=9,
theta_walk=15, theta_sense=10., walk_range=1.2,
social_behaviour=-16,trace_strength=1,
init_fun=init_fun_circle)
simulation_steps = 10
images = physarum.run_physarum_simulation(populations=[species_a], steps=simulation_steps)
out_path = Path.home() / 'Documents/graphics/generative_art_output/physarum/test_01'
out_path.mkdir(exist_ok=True, parents=True)
imageio_generate_video(str(out_path/"test_02.mp4"), images, fps=20, format="mp4", loop=False)
generate_video(str(out_path/"tmp.mp4"), (width, height),
frame_gen_fun = lambda i: np.array(images[i])[:,:,:3],
nb_frames = len(images))
"""
Explanation: Run Test Simulation
End of explanation
"""
%%prun -s cumulative -l 30 -r
# We profile the cell, sort the report by "cumulative
# time", limit it to 30 lines
simulation_steps = 50
images = physarum.run_physarum_simulation(populations=[species_b], steps=simulation_steps)
"""
Explanation: Performances Profiling
End of explanation
"""
out_path = Path.home() / 'Documents/graphics/generative_art_output/physarum/grid_search'
import cupy as cp
def normalize_snapshots(snapshots):
norm_s = cp.asnumpy(snapshots)
#fix_images = np.sqrt(fix_images + 0.1) - np.sqrt(0.1)
#fix_images = np.log(fix_images + 1
norm_s = (norm_s/norm_s.max())*255
# add z axis
norm_s = norm_s[:, :, :, np.newaxis]
return norm_s.astype(np.uint8)
nb_vals = 3
grid_search_params = {
'horizon_walk': np.linspace(1., 3., nb_vals).round(2), # higher more spread, chaos
'horizon_sense': np.linspace(10., 25., nb_vals).round(2),
'theta_sense': np.linspace(10., 25., nb_vals).round(2), # the smaller, the more narrow paths they create
'theta_walk': np.linspace(5., 15., nb_vals).round(2), # should be close to theta_sense, if way bigger, they disappear or constrain to concentrated areas
'walk_range': [1.],
'social_behaviour': [0],
'trace_strength': [1],
'decay': [0.8], #np.linspace(.6, .9, nb_vals)
}
configs = list(named_configs(grid_search_params))
system_size = 100
system_shape = tuple([system_size]*2)
render_dir = out_path / f'{system_size}_size'
render_dir.mkdir(exist_ok=False, parents=True)
nb_frames = 90
generate_ply = True
ply_threshold = 70
#init_setup = physarum.get_perlin_init(shape=system_shape, n=int(60e4), scale=150)
#init_setup = physarum.get_filled_circle_init(n=int(20e5), center=(system_shape[0]//2,system_shape[1]//2), radius=150)
def combined_init(shape):
pop_01 = physarum.get_filled_circle_init(n=int(80e4), center=(shape[0]//2,shape[1]//2), radius=150)
pop_02 = physarum.get_perlin_init(shape=shape, n=int(10e4), scale=200)
#pop_01 = physarum.get_gaussian_gradient(n=int(10e4), center=(system_shape[0]//2,system_shape[1]//2), sigma=200)
#pop_02 = physarum.get_circle_init(n=int(5e4), center=(shape[0]//2,shape[1]//2), radius=100, width=30)
return cp.concatenate([pop_01, pop_02])
#init_setup = combined_init(system_shape)
init_setup = physarum.get_gaussian_gradient(n=int(50e5), center=(system_shape[0]//2,system_shape[1]//2), sigma=40)
imgs_path = "MAYBE_DUPLICATES/flat_hexa_logo/19"
#mask = physarum.get_image_mask(list(Path(img_path).glob('*.png'))[np.random.randint(15)], system_shape, threshold=0.5)
nb_runs = 1
for run_idx in range(nb_runs):
with open(str(render_dir / "logs.txt"), 'w+') as f:
for config_idx, config in tqdm.tqdm_notebook(enumerate(configs)):
print(f'#####################')
print(f'Run {run_idx} - config {config_idx}')
run_dir = render_dir / 'run_{}_config_{}'.format(run_idx, config_idx)
system = Physarum(shape=system_shape,
horizon_walk=config.horizon_walk,
horizon_sense=config.horizon_sense,
theta_walk=config.theta_walk,
theta_sense=config.theta_sense,
walk_range=config.walk_range,
social_behaviour=config.social_behaviour,
trace_strength=config.trace_strength,
init_fun=lambda shape: init_setup,
template=None, template_strength=0)
imgs_path = f"MAYBE_DUPLICATES/flat_hexa_logo/{np.random.randint(5, 19)}"
imgs_path = list(Path(imgs_path).glob('*.png'))
img_path = imgs_path[np.random.randint(len(imgs_path))]
#init_setup = physarum.get_image_init_positions(img_path, system_shape, int(80e4), invert=True)
template = physarum.get_image_mask(img_path, system_shape, invert=False)
template_strength = 1
#config = config._replace(theta_walk = config.theta_sense-5)
images = physarum.run_physarum_simulation(populations=[system], diffusion='median',
steps=nb_frames, decay=config.decay, mask=None, mask_factor=.5)
# write out config
f.write(str(config)+"\n")
SYSTEM_CONFIG = config._asdict()
norm_snapshots = normalize_snapshots(images)
#np.save(render_dir / f'run_{run}.npy', fix_images)
# save each frame to ply
if generate_ply:
print('Writing ply files')
out_ply = run_dir / 'ply'
out_ply.mkdir(exist_ok=False)
ply_snapshots = prepare_for_ply(norm_snapshots, ply_threshold)
for frame in np.arange(norm_snapshots.shape[0]):
tmp = ply_snapshots[frame]
tmp = tmp[tmp[:,-1] >= ply_threshold]
write_to_ply(tmp, out_ply / f'frame_{frame:03d}.ply')
# generate video
print('Generating video')
out_video = run_dir / 'run.mp4'
generate_video(str(out_video), (system_shape[1], system_shape[0]),
frame_gen_fun = lambda i: norm_snapshots[i][0],
                   nb_frames=len(norm_snapshots), is_color=False, disable_tqdm=True)  # res_images was undefined; use the normalised frames
"""
Explanation: Parameters Grid Search
End of explanation
"""
|
hpparvi/PyTransit | notebooks/contamination/example_1b.ipynb | gpl-2.0 | %pylab inline
import sys
from corner import corner
sys.path.append('.')
from src.mocklc import MockLC, SimulationSetup
from src.blendlpf import MockLPF
import src.plotting as pl
"""
Explanation: Contamination example 1b
No contamination and informative priors on orbit parameters
Hannu Parviainen<br>
Instituto de Astrofísica de Canarias
Last modified: 15.7.2019
Here we use the pytransit.contamination module to estimate the true planet to star radius ratio robustly using multicolour photometry in the presence of possible flux contamination from an unresolved source in the photometry aperture, as detailed in Parviainen et al. 2019 (submitted). This can be used in the validation of transiting planet candidates, where, e.g., blended eclipsing binaries are a significant source of false positives.
This notebook runs the simulation for a light curve without contamination and informative priors on the impact parameter and stellar density, while the previous notebook (1a) did the same with uninformative priors on the orbital parameters.
Light curves: We don't use real data here, but create simulated multicolour photometry lightcurves using the MockLC class found in src.mocklc. The code is the same as that used for the simulations in Parviainen et al. (2019).
Log posterior function: The log posterior function is defined by MockLPF class found in src.blendlpf.MockLPF. The class inherits pytransit.lpf.PhysContLPF and overrides the _init_instrument method to define the instrument and the contamination model (amongst other things to make running a variety of simulations smooth).
Parametrisation: As discussed in the paper, the contamination is parametrised by the apparent area ratio ($k_\mathrm{App}^2$), the true area ratio ($k_\mathrm{True}^2$), and the effective temperatures of the host and contaminant stars. The apparent area ratio defines how deep the transit is in a single passband and can be wavelength dependent (if the host and contaminant are of different spectral type), while the true area ratio stands for the unblended true geometric planet-star area ratio.
The true radius ratio ($k_\mathrm{True}$) is the main quantity of interest in transiting planet candidate validation, since together with a stellar radius estimate it gives the true absolute planetary radius.
End of explanation
"""
lc = MockLC(SimulationSetup('M', 0.1, 0.0, 0.0, 'short_transit', cteff=5500, know_orbit=True))
lc.create(wnsigma=[0.001, 0.001, 0.001, 0.001], rnsigma=0.00001, rntscale=0.5, nights=1);
lc.plot();
"""
Explanation: Create a mock light curve
End of explanation
"""
lpf = MockLPF('Example_1', lc)
lpf.print_parameters(columns=2)
"""
Explanation: Initialize the log posterior function
End of explanation
"""
lpf.optimize_global(1000)
lpf.plot_light_curves()
"""
Explanation: Optimize
End of explanation
"""
lpf.sample_mcmc(5000, reset=True, repeats=2)
"""
Explanation: Estimate the posterior
The contamination parameter space is a bit difficult to sample (especially if the signal to noise ratio is low), so the sampling should be continued at least for 10000 iterations.
End of explanation
"""
df = lpf.posterior_samples()
pl.joint_radius_ratio_plot(df, fw=13, clim=(0.099, 0.12), htelim=(3570, 3630), ctelim=(2400,3800), blim=(0, 0.5), rlim=(3.8, 5.2));
pl.joint_contamination_plot(df, fw=13, clim=(0, 0.4), htelim=(3570, 3630), ctelim=(2400,3800), blim=(0, 0.5), rlim=(3.8, 5.2));
"""
Explanation: Analysis
We plot the main results below. It is clear that a single good-quality four-colour light curve still allows for contamination from a source of similar spectral type as the host star. However, in this example, the maximum allowed level of contamination is not sufficient to take the transiting object out of the planetary regime.
Also, the joint posterior plots clearly show that any significant contamination must come from a source of a similar spectral type as the host. Combining this information with prior knowledge about the probability of having such a system without colour variations can be used in probabilistic planet candidate validation.
Plot the basic joint posterior
End of explanation
"""
pl.marginal_radius_ratio_plot(df, bins=60, klim=(0.097, 0.12), figsize=(7,5));
"""
Explanation: Plot the apparent and true radius ratio posteriors
End of explanation
"""
corner(df.iloc[:,2:-3]);
"""
Explanation: Make a corner plot to get a good overview of the posterior space
End of explanation
"""
|
LucaCanali/Miscellaneous | Spark_Physics/HEP_benchmark/ADL_HEP_Query_Benchmark_Q1_Q5_Parquet_sparkhistogram.ipynb | apache-2.0 | # Install PySpark if needed
# !pip install pyspark
# Install sparkhistogram
! pip install sparkhistogram
# Note: if you cannot install the package sparkhistogram,
# create the computeHistogram function as detailed at the end of this notebook.
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
from sparkhistogram import computeHistogram
# Download the data if not yet available locally
# Download the reduced data set (2 GB)
! wget -r -np -R "index.html*" -e robots=off https://sparkdltrigger.web.cern.ch/sparkdltrigger/Run2012B_SingleMu_sample.parquet/
# This downloads the full dataset (16 GB)
# ! wget -r -np -R "index.html*" -e robots=off https://sparkdltrigger.web.cern.ch/sparkdltrigger/Run2012B_SingleMu.parquet/
# Start the Spark Session
# This uses local mode for simplicity
# The use of findspark is optional
import findspark
findspark.init("/home/luca/Spark/spark-3.3.0-bin-hadoop3")
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("HEP benchmark")
.master("local[4]")
.config("spark.driver.memory", "4g")
.config("spark.sql.parquet.enableNestedColumnVectorizedReader", "true")
.getOrCreate()
)
# Read data for the benchmark tasks
# download data as detailed at https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
path = "sparkdltrigger.web.cern.ch/sparkdltrigger/"
input_data = "Run2012B_SingleMu_sample.parquet"
# use this if you downloaded the full dataset
# input_data = "Run2012B_SingleMu.parquet"
df_events = spark.read.parquet(path + input_data)
df_events.printSchema()
print(f"Number of events: {df_events.count()}")
"""
Explanation: HEP Benchmark Queries Q1 to Q5
This follows the IRIS-HEP benchmark
and the article Evaluating Query Languages and Systems for High-Energy Physics Data
and provides implementations of the benchmark tasks using Apache Spark.
The workload and data:
- Benchmark jobs are implemented following the IRIS-HEP benchmark
- The input data is a series of events from CMS opendata
- The job output is typically a histogram
- See also https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
Author and contact: Luca.Canali@cern.ch
February, 2022
Setup: data and dependencies
End of explanation
"""
# Compute the histogram for MET_pt
# This defines the DataFrame transformation using sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
# Histogram parameters
min_val = 0
max_val = 100
num_bins = 100
# Use the helper function computeHistogram in the package sparkhistogram
# The result is a histogram with (energy) bin values and event counts for each bin
histogram_data = computeHistogram(df_events, "MET_pt", min_val, max_val, num_bins)
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
histogram_data_pandas.head(5)
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q1
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_T$ (missing transverse energy) of all events.
End of explanation
"""
# Jet_pt contains arrays of jet measurements
df_events.select("Jet_pt").show(5,False)
# Use the explode function to extract array data into DataFrame rows
df_events_jet_pt = df_events.selectExpr("explode(Jet_pt) as Jet_pt")
df_events_jet_pt.printSchema()
df_events_jet_pt.show(10, False)
# Compute the histogram for Jet_pt
# This defines the DataFrame transformation using sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
# Histogram parameters
min_val = 15
max_val = 60
num_bins = 100
# Use the helper function computeHistogram in the package sparkhistogram
histogram_data = computeHistogram(df_events_jet_pt, "Jet_pt", min_val, max_val, num_bins)
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$p_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $p_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q2
Plot the $𝑝_𝑇$ (transverse momentum) of all jets in all events
End of explanation
"""
# Take Jet arrays for pt and eta and transform them to rows with explode()
df1 = df_events.selectExpr("explode(arrays_zip(Jet_pt, Jet_eta)) as Jet")
df1.printSchema()
df1.show(10, False)
# Apply a filter on Jet_eta
q3 = df1.select("Jet.Jet_pt").filter("abs(Jet.Jet_eta) < 1")
q3.show(10,False)
# Compute the histogram for Jet_pt
# This defines the DataFrame transformation using sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
# Histogram parameters
min_val = 15
max_val = 60
num_bins = 100
# Use the helper function computeHistogram in the package sparkhistogram
histogram_data = computeHistogram(q3, "Jet_pt", min_val, max_val, num_bins)
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$p_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $p_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q3
Plot the $𝑝_𝑇$ of jets with |𝜂| < 1 (𝜂 is the jet pseudorapidity).
End of explanation
"""
# This will use MET_pt and Jet_pt
df_events.select("MET_pt","Jet_pt").show(10,False)
# The filter is pushed inside the arrays of Jet_pt
# This uses Spark's higher-order functions for array processing
q4 = df_events.select("MET_pt").where("cardinality(filter(Jet_pt, x -> x > 40)) > 1")
q4.show(5,False)
# Compute the histogram for MET_pt
# This defines the DataFrame transformation using sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
# Histogram parameters
min_val = 0
max_val = 100
num_bins = 100
# Use the helper function computeHistogram in the package sparkhistogram
histogram_data = computeHistogram(q4, "MET_pt", min_val, max_val, num_bins)
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q4
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_𝑇$ of the events that have at least two jets with
$𝑝_𝑇$ > 40 GeV (gigaelectronvolt).
End of explanation
"""
# filter the events
# select only events with 2 muons
# the 2 muons must have opposite charge
df_muons = df_events.filter("nMuon == 2").filter("Muon_charge[0] != Muon_charge[1]")
# Formula for dimuon mass in pt, eta, phi, m coordinates
# see also http://edu.itp.phys.ethz.ch/hs10/ppp1/2010_11_02.pdf
# and https://en.wikipedia.org/wiki/Invariant_mass
df_with_dimuonmass = df_muons.selectExpr("MET_pt","""
sqrt(2 * Muon_pt[0] * Muon_pt[1] *
( cosh(Muon_eta[0] - Muon_eta[1]) - cos(Muon_phi[0] - Muon_phi[1]) )
) as Dimuon_mass
""")
# apply a filter on the dimuon mass
Q5 = df_with_dimuonmass.filter("Dimuon_mass between 60 and 120")
# Compute the histogram for MET_pt
# This defines the DataFrame transformation using sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
# Histogram parameters
min_val = 0
max_val = 100
num_bins = 100
# Use the helper function computeHistogram in the package sparkhistogram
histogram_data = computeHistogram(Q5, "MET_pt", min_val, max_val, num_bins)
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
spark.stop()
"""
Explanation: Benchmark task: Q5
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_T$ of events that have an opposite-charge muon
pair with an invariant mass between 60 GeV and 120 GeV.
End of explanation
"""
def computeHistogram(df: "DataFrame", value_col: str, min: float, max: float, bins: int) -> "DataFrame":
""" This is a dataframe function to compute the count/frequecy histogram of a column
Parameters
----------
df: the dataframe with the data to compute
value_col: column name on which to compute the histogram
min: minimum value in the histogram
max: maximum value in the histogram
bins: number of histogram buckets to compute
Output DataFrame
----------------
bucket: the bucket number, range from 1 to bins (included)
value: midpoint value of the given bucket
count: number of values in the bucket
"""
step = (max - min) / bins
# this will be used to fill in for missing buckets, i.e. buckets with no corresponding values
df_buckets = spark.sql(f"select id+1 as bucket from range({bins})")
histdf = (df
.selectExpr(f"width_bucket({value_col}, {min}, {max}, {bins}) as bucket")
.groupBy("bucket")
.count()
.join(df_buckets, "bucket", "right_outer") # add missing buckets and remove buckets out of range
.selectExpr("bucket", f"{min} + (bucket - 1/2) * {step} as value", # use center value of the buckets
"nvl(count, 0) as count") # buckets with no values will have a count of 0
.orderBy("bucket")
)
return histdf
"""
Explanation: Note on sparkhistogram
Use this to define the computeHistogram function if you cannot pip install sparkhistogram
End of explanation
"""
|
JoseGuzman/myIPythonNotebooks | SignalProcessing/Complex numbers.ipynb | gpl-2.0 | # initiation and examples
import numpy as np               # used below for np.angle, np.sqrt, np.abs, ...
import matplotlib.pyplot as plt  # used below for the complex-plane plot
z = complex(3,4)
print('The complex {}, where {} is the real and {} the imaginary part'.format(z, z.real, z.imag))
"""
Explanation: <H1>Complex numbers</H1>
An imaginary number has the property that, multiplied by itself, it gives a negative answer. If we define the imaginary unit $j$ so that
$$
j \cdot j = -1
$$
then the imaginary operator $j$ is
$$ j = \sqrt{-1}$$
That is, an imaginary number is a number that, when squared, gives a negative result; a complex number combines a real part and an imaginary part.
End of explanation
"""
z = 1+1j # alternative way to complex numbers
dist = abs(z)
angle = np.angle(z, deg = True)
print('distance = {:2.4f}, angle = {} deg'.format(dist, angle))
# distance with trigonometry formula
dist = np.sqrt( np.power(z.imag,2) + np.power(z.real,2))
# angle in radiants
angle = np.rad2deg(np.arctan(z.imag/z.real))
print('distance = {:2.4f}, angle = {} deg'.format(dist, angle))
"""
Explanation: A complex number gives us the distance to the origin (0,0) and the angle with the positive real axis
End of explanation
"""
z2 = z * z.conjugate()
np.abs(np.sqrt(z2))
"""
Explanation: If we multiply a complex number by its conjugate, we obtain a real number whose value is the square of the modulus of the original complex number
End of explanation
"""
z = (3+4j)
m = np.abs(z)
phi = np.angle(z)
print('cartesian coordinates = ({:0.0f}, {:0.0f})'.format(z.real, z.imag))
print('polar coordinates = ({:0.0f}, {:2.4f})'.format(m, phi))
plt.plot(z.real, z.imag, 'ro', ms =12)
plt.plot([0,z.real],[0, z.imag], 'r-', lw=2)
plt.grid(True)
plt.axis([-5,5,-5,5], option = 'square');
"""
Explanation: <H1>The complex plane</H1>
It is the plane of complex numbers, with a real axis and an imaginary axis. We can think of a complex number as a vector (a, bj), or we can express it in polar form through its modulus and angle (m, $\phi$). For example, (3+4j) can also be expressed as (5, 0.9273 rad).
End of explanation
"""
# trigonometric relation
m = np.abs(z)
phi = np.angle(z)
x = m*np.cos(phi)
y = m*np.sin(phi)
print('({} + {}j) is {}e^{:2.4f}j '.format(z.real, z.imag, m, phi))
"""
Explanation: <H1>Euler's formula</H1>
We can use $m e^{j\phi}$, where $m$ is the distance to the origin and $\phi$ the angle with the real axis, to express any complex number on a circle of radius m. This alternative representation of a complex number turns out to be very useful, because there are many cases, like multiplication, where it is easier to deal with exponentials than with standard complex numbers.
Euler's formula also provides a link between trigonometry and imaginary numbers.
$$e^{ \pm j\theta } = \cos \theta \pm j\sin \theta$$
It describes the vector from the origin to a point on the circle of radius one. For a vector with modulus larger than one, we can use:
$$ m e^{ \pm j\theta } = m \cos \theta \pm m j\sin \theta$$
Then, we can have complex waves simply expressed as $m e^{j\theta }$
End of explanation
"""
|
revspete/self-driving-car-nd | sem1/p1-lane-lines/.ipynb_checkpoints/P1-checkpoint.ipynb | mit | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
"""
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
"""
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
"""
Explanation: Read in an Image
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
"""
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
os.listdir("test_images/")
"""
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
"""
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
"""
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
|
DOV-Vlaanderen/pydov | docs/notebooks/search_boringen.ipynb | mit | %matplotlib inline
import inspect, sys
import warnings; warnings.simplefilter('ignore')
# check pydov path
import pydov
"""
Explanation: Example of DOV search methods for boreholes (boringen)
Use cases explained below
Get boreholes in a bounding box
Get boreholes with specific properties
Get boreholes in a bounding box based on specific properties
Select boreholes in a municipality and return depth
Get boreholes based on fields not available in the standard output dataframe
Get borehole data, returning fields not available in the standard output dataframe
Get boreholes in a municipality and where groundwater related data are available
End of explanation
"""
from pydov.search.boring import BoringSearch
boring = BoringSearch()
"""
Explanation: Get information about the datatype 'Boring'
End of explanation
"""
boring.get_description()
"""
Explanation: A description is provided for the 'Boring' datatype:
End of explanation
"""
fields = boring.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
"""
Explanation: The different fields that are available for objects of the 'Boring' datatype can be requested with the get_fields() method:
End of explanation
"""
fields['diepte_boring_tot']
"""
Explanation: You can get more information of a field by requesting it from the fields dictionary:
* name: name of the field
* definition: definition of this field
* cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.
* notnull: whether the field is mandatory or not
* type: datatype of the values of this field
End of explanation
"""
fields['methode']['values']
"""
Explanation: Optionally, if the values of the field have a specific domain the possible values are listed as values:
End of explanation
"""
from pydov.util.location import Within, Box
df = boring.search(location=Within(Box(153145, 206930, 153150, 206935)))
df.head()
"""
Explanation: Example use cases
Get boreholes in a bounding box
Get data for all the boreholes that are geographically located within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.
End of explanation
"""
for pkey_boring in set(df.pkey_boring):
print(pkey_boring)
"""
Explanation: The dataframe contains one borehole where three methods ('boormethode') were applied for its construction. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_boring field one can request the details of this borehole in a webbrowser:
End of explanation
"""
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
"""
Explanation: Get boreholes with specific properties
Next to querying boreholes based on their geographic location within a bounding box, we can also search for boreholes matching a specific set of properties. For this we can build a query using a combination of the 'Boring' fields and operators provided by the WFS protocol.
A list of possible operators can be found below:
End of explanation
"""
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Herstappe')
df = boring.search(query=query)
df.head()
"""
Explanation: In this example we build a query using the PropertyIsEqualTo operator to find all boreholes that are within the community (gemeente) of 'Herstappe':
End of explanation
"""
for pkey_boring in set(df.pkey_boring):
print(pkey_boring)
"""
Explanation: Once again we can use the pkey_boring as a permanent link to the information of these boreholes:
End of explanation
"""
from owslib.fes import PropertyIsGreaterThanOrEqualTo
query = PropertyIsGreaterThanOrEqualTo(
propertyname='diepte_boring_tot',
literal='2000')
df = boring.search(
location=Within(Box(200000, 211000, 205000, 214000)),
query=query
)
df.head()
"""
Explanation: Get boreholes in a bounding box based on specific properties
We can combine a query on attributes with a query on geographic location to get the boreholes within a bounding box that have specific properties.
The following example requests the boreholes with a depth greater than or equal to 2000 meters within the given bounding box.
(Note that the datatype of the literal parameter should be a string, regardless of the datatype of this field in the output dataframe.)
End of explanation
"""
for pkey_boring in set(df.pkey_boring):
print(pkey_boring)
"""
Explanation: We can look at one of the boreholes in a webbrowser using its pkey_boring:
End of explanation
"""
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Gent')
df = boring.search(query=query,
return_fields=('diepte_boring_tot',))
df.head()
df.describe()
"""
Explanation: Select boreholes in a municipality and return depth
We can limit the columns in the output dataframe by specifying the return_fields parameter in our search.
In this example we query all the boreholes in the city of Ghent and return their depth:
End of explanation
"""
df[df.diepte_boring_tot != 0].describe()
ax = df[df.diepte_boring_tot != 0].boxplot()
ax.set_ylabel("Depth (m)");
ax.set_title("Distribution borehole depth Gent");
"""
Explanation: By discarding the boreholes with a depth of 0 m, we get a different result:
End of explanation
"""
from owslib.fes import And
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Antwerpen'),
PropertyIsEqualTo(propertyname='hydrogeologische_stratigrafie',
literal='True')]
)
df = boring.search(query=query,
return_fields=('pkey_boring', 'boornummer', 'x', 'y', 'diepte_boring_tot', 'datum_aanvang'))
df.head()
"""
Explanation: Get boreholes based on fields not available in the standard output dataframe
To keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can use this information to select boreholes as illustrated below.
For example, make a selection of the boreholes in municipality the of Antwerp, for which a hydrogeological interpretation was performed:
End of explanation
"""
query = PropertyIsGreaterThanOrEqualTo(
propertyname='diepte_boring_tot',
literal='2000')
df = boring.search(query=query,
return_fields=('pkey_boring', 'boornummer', 'diepte_boring_tot',
'informele_stratigrafie', 'formele_stratigrafie', 'lithologische_beschrijving',
'gecodeerde_lithologie', 'hydrogeologische_stratigrafie', 'quartaire_stratigrafie',
'geotechnische_codering', 'informele_hydrostratigrafie'))
df.head()
"""
Explanation: Get borehole data, returning fields not available in the standard output dataframe
As noted in the previous example, not all fields are included in the default output dataframe in order to keep its size limited. However, you can request any available field by including it in the return_fields parameter of the search:
End of explanation
"""
from owslib.fes import PropertyIsLike
from owslib.fes import PropertyIsNull
from owslib.fes import Or
from owslib.fes import Not
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Antwerpen'),
Or([Not([PropertyIsNull(propertyname='putnummer')]),
PropertyIsLike(propertyname='doel',
literal='Grondwater%'),
PropertyIsEqualTo(propertyname='erkenning',
literal='2. Andere grondwaterwinningen')]
)]
)
df = boring.search(query=query)
df.head()
"""
Explanation: Get boreholes in a municipality and where groundwater related data are available
The following full example returns all boreholes where gemeente is 'Antwerpen' and either putnummer is not empty, or doel starts with 'Grondwater', or erkenning is '2. Andere grondwaterwinningen'.
End of explanation
"""
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
folium.Marker(loclist[loc], popup=df['boornummer'][loc]).add_to(marker_cluster)
fmap
"""
Explanation: Visualize results
Using Folium, we can display the results of our search on a map.
End of explanation
"""
|
dismalpy/dismalpy | doc/notebooks/varmax.ipynb | bsd-2-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import dismalpy as dp
import matplotlib.pyplot as plt
dta = pd.read_stata('data/lutkepohl2.dta')
dta.index = dta.qtr
endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]
"""
Explanation: VARMAX models
This is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available.
End of explanation
"""
# one possible exogenous regressor: a linear time trend (defined here but overwritten on the next line)
exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend')
# use consumption growth as the exogenous series in the example below
exog = endog['dln_consump']
mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog)
res = mod.fit(maxiter=1000)
print(res.summary())
"""
Explanation: Model specification
The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1: VAR
Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.
End of explanation
"""
mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')
res = mod.fit(maxiter=1000)
print(res.summary())
"""
Explanation: Example 2: VMA
A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.
End of explanation
"""
mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))
res = mod.fit(maxiter=1000)
print(res.summary())
"""
Explanation: Caution: VARMA(p,q) specifications
Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.
End of explanation
"""
|
UWSEDS/LectureNotes | Fall2018/07-Jupyter-Notebook-In-Depth/LorenzSystem.ipynb | bsd-2-clause | %matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
"""
Explanation: Exploring the Lorenz System of Differential Equations
Downloaded 10/2017 from the ipywidgets docs
In this Notebook we explore the Lorenz system of differential equations:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \\
\dot{y} & = \rho x - y - xz \\
\dot{z} & = -\beta z + xy
\end{aligned}
$$
This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters (\(\sigma\), \(\beta\), \(\rho\)) are varied.
Imports
First, we import the needed things from IPython, NumPy, Matplotlib and SciPy.
End of explanation
"""
def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('off')
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
"""Compute the time-derivative of a Lorenz system."""
x, y, z = x_y_z
return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
# choose a different color for each trajectory
colors = plt.cm.viridis(np.linspace(0, 1, N))
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c=colors[i])
plt.setp(lines, linewidth=2)
ax.view_init(30, angle)
plt.show()
return t, x_t
"""
Explanation: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation (\(\sigma\), \(\beta\), \(\rho\)), the numerical integration (N, max_time) and the visualization (angle).
End of explanation
"""
t, x_t = solve_lorenz(angle=0, N=10)
"""
Explanation: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
End of explanation
"""
w = interactive(solve_lorenz, angle=(0.,360.), max_time=(0.1, 4.0),
N=(0,50), sigma=(0.0,50.0), rho=(0.0,50.0))
display(w)
"""
Explanation: Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.
End of explanation
"""
t, x_t = w.result
w.kwargs
"""
Explanation: The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments:
End of explanation
"""
xyz_avg = x_t.mean(axis=1)
xyz_avg.shape
"""
Explanation: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
End of explanation
"""
plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$');
plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$');
"""
Explanation: Creating histograms of the average positions (across different trajectories) shows that on average the trajectories swirl about the attractors.
End of explanation
"""
|
unmrds/cc-python | .ipynb_checkpoints/Space Analysis -checkpoint.ipynb | apache-2.0 | # Import a very useful and powerful module for interacting with tabular data
import pandas as pd
# Install and import tabulate for generating tables for hardcopy reports
!TABULATE_INSTALL=lib-only; pip install tabulate
from tabulate import tabulate
# Set up the report generation variables
report_file_name = 'report.md'
report_content = []
print("The generated report will be: " + file_name)
print("The version of pandas is: " + pd.__version__)
# Define the location of the file of interest
file_path = "" # include a trailing "/" if not empty
file_name = "Space_Analysis_Pilot_(Responses).csv"
file_location = file_path + file_name
print("The file that will be read is: " + file_location)
# explicitly define the column names that will be associated with our table - this helps mitigate any
# strangeness in the column names in the source CSV file
column_names=[
"timestamp",
"building",
"floor",
"range",
"section",
"shelf",
"leading_alpha",
"leading_numeric",
"ending_alpha",
"ending_numeric",
"occupied_in",
"stacked"
]
# Load the referenced file into a pandas dataframe for use in our analysis
shelf_data = pd.read_csv(
file_location,
names = column_names,
header = 0,
usecols = ['timestamp','leading_alpha','ending_alpha','occupied_in'],
skipinitialspace = True
)
# create a series of datetime values from the timestamps in the dataframe
# attempt to coerce error generating values, if can't set value to NaT (missing)
shelf_data['datetime'] = pd.to_datetime(shelf_data.loc[:,"timestamp"], errors='coerce')
# fill the NA values in the leading and ending alpha fields with a symbol
shelf_data.leading_alpha = shelf_data.leading_alpha.fillna('*')
shelf_data.ending_alpha = shelf_data.ending_alpha.fillna('*')
shelf_data.loc[shelf_data.leading_alpha == "*", 'occupied_in'] = 0
# drop the rows that are missing leading_alpha values
shelf_data = shelf_data.loc[shelf_data.leading_alpha.notnull()]
# strip any leading or trailing white space from the alpha fields
shelf_data['leading_alpha'] = shelf_data.leading_alpha.astype(str)
shelf_data['leading_alpha'] = shelf_data.leading_alpha.str.strip()
shelf_data['ending_alpha'] = shelf_data.ending_alpha.str.strip()
# generate columns with alpha prefixes for higher-level grouping
shelf_data['leading_prefix'] = shelf_data.leading_alpha.str[0]
shelf_data['ending_prefix'] = shelf_data.ending_alpha.str[0]
# add shelf capacity to each row in the table
shelf_data['capacity'] = 35.5
print("\nThe data types in the table are:\n"+str(shelf_data.dtypes))
print()
#print(shelf_data)
"""
Explanation: Space Analysis Demonstration
We are currently in the process of testing a strategy for capturing shelf space data and would like to use the growing dataset to better understand the time required to collect the data and perform some preliminary analyses of the data. Some questions that we would like to ask of this dataset include:
How many shelves can be measured per hour
How much shelf space is taken up by each (LC subject level) range in the dataset
How much shelf space is available within each range in the dataset
In support of answering these questions we need to do the following:
Import the data into the script for analysis
Aggregate the data by hour and count the number of shelves that were measured in each hour
Aggregate the occupied space for all of the shelves for each range
Calculate the total shelf space for each range
1. Import the data for the analysis
Importing the data for the analysis includes the following sub-tasks:
Point to the local data file that contains the values that we want to analyze
Create a file-like object from which we can read the data
Read the data into a variable that we can use in our analyses
End of explanation
"""
# create a dataframe that has a subset of rows that only contain valid datetime values
clean_dates = shelf_data[shelf_data.datetime.notnull()]
clean_dates = clean_dates.set_index('datetime')
#clean_dates['hour'] = clean_dates['datetime'].strftime('%Y-%M-%d %H')
clean_dates
# resample the timestamps by hour and return the frequency distribution
r = clean_dates.resample("1H")
r_count = r.count() # .loc[(r_count.timestamp > 0)]
r_count.timestamp
r_count['timestamp'].plot()
"""
Explanation: 2. Aggregate the data by hour and count the shelves for each hour
The following sub-tasks are required for this part of our analysis
Create a new column in the dataframe that contains a distinct value for each hour in the database
Group the individual shelves by hour label
Count the shelves for each hour
Plot the number of shelves per hour for the analysis period
End of explanation
"""
full_shelves = shelf_data.loc[(shelf_data.leading_alpha == shelf_data.ending_alpha)]
partial_shelves = shelf_data.loc[(shelf_data.leading_alpha != shelf_data.ending_alpha)].loc[(shelf_data.leading_alpha.notnull())]
# convert our column to numeric values instead of the string type it defaulted to - adding strings is not
# what we want to do. Partial shelves are arbitrarily assigned half of the occupied value and the capacity.
full_shelves['occupied_in'] = pd.to_numeric(full_shelves['occupied_in'], errors="coerce")
partial_shelves['partial_occupied_in'] = pd.to_numeric(partial_shelves['occupied_in'], errors="coerce") / 2
partial_shelves['partial_capacity'] = pd.to_numeric(partial_shelves['capacity'], errors="coerce") / 2
#print(full_shelves[['leading_alpha','ending_alpha','occupied_in','capacity']])
#print(partial_shelves[['leading_alpha','ending_alpha','partial_occupied_in','partial_capacity']])
"""
Explanation: 3. Aggregate the occupied space for all of the shelves for each range
The following sub-tasks are required for this part of our analysis:
Total the occupied space for all of the whole shelves - i.e. those where the start and end range alpha values are the same
Estimate the occupied space for all of the partial shelves
Add these numbers up for an estimate of the total occupied space for each range
End of explanation
"""
# import a useful library for doing calculations on arrays - including our data table and its columns
import numpy as np
# group our full and partial shelves by the leading_alpha subject string
full_shelves_by_range = full_shelves.groupby('leading_alpha')
partial_shelves_by_range = partial_shelves.groupby('leading_alpha')
# calculate the sum for all of the numeric columns for our full and partial shelves
full_ranges = full_shelves_by_range.aggregate(np.sum)
partial_ranges = partial_shelves_by_range.aggregate(np.sum)
# calculate some new columns of data for each range
combined_by_lc = pd.concat([full_ranges,partial_ranges], axis=1)
combined_by_lc['total_occupied_in'] = combined_by_lc[['occupied_in','partial_occupied_in']].sum(axis=1)
combined_by_lc['total_capacity_in'] = combined_by_lc[['capacity','partial_capacity']].sum(axis=1)
combined_by_lc['total_empty_in'] = combined_by_lc['total_capacity_in'] - combined_by_lc['total_occupied_in']
combined_by_lc['total_empty_pct'] = 100 * (1-combined_by_lc['total_occupied_in'] / combined_by_lc['total_capacity_in'])
# now let's generate some plots for fun
lc_plot = combined_by_lc['total_empty_in'].plot(kind="barh", stacked=False, figsize=(8,10), title="Empty space (in) for each LC subject")
lc_fig = lc_plot.get_figure()
lc_fig.savefig("lc_fig.png")
report_content.append("""

""")
# cumulative = combined_by_lc[['total_occupied_in','total_capacity_in','total_empty_in']].cumsum()
# print(cumulative)
#cumulative.plot(kind="barh",stacked=True,figsize={8,10})
"""
Explanation: 4. Calculate occupied and total shelf space for each LC subject
For this we need to:
Group the individual shelf data by LC subject categories
Calculate the occupied and total capacity for each group
Calculate some derived values for reporting
End of explanation
"""
prefix_full_shelves = shelf_data.loc[(shelf_data.leading_prefix == shelf_data.ending_prefix)]
prefix_partial_shelves = shelf_data.loc[(shelf_data.leading_prefix != shelf_data.ending_prefix)].loc[(shelf_data.leading_prefix.notnull())]
# convert our column to numeric values instead of the string type it defaulted to - adding strings is not
# what we want to do. Partial shelves are arbitrarily assigned half of the occupied value and the capacity.
prefix_full_shelves['occupied_in'] = pd.to_numeric(prefix_full_shelves['occupied_in'], errors="coerce")
prefix_partial_shelves['partial_occupied_in'] = pd.to_numeric(prefix_partial_shelves['occupied_in'], errors="coerce") / 2
prefix_partial_shelves['partial_capacity'] = pd.to_numeric(prefix_partial_shelves['capacity'], errors="coerce") / 2
#print(full_shelves[['leading_prefix','ending_prefix','occupied_in','capacity']])
#print(partial_shelves[['leading_prefix','ending_prefix','partial_occupied_in','partial_capacity']])
# import a useful library for doing calculations on arrays - including our data table and its columns
import numpy as np
# group our full and partial shelves by the leading_alpha subject string
prefix_full_shelves_by_range = prefix_full_shelves.groupby('leading_prefix')
prefix_partial_shelves_by_range = prefix_partial_shelves.groupby('leading_prefix')
# calculate the sum for all of the numeric columns for our full and partial shelves
prefix_full_ranges = prefix_full_shelves_by_range.aggregate(np.sum)
prefix_partial_ranges = prefix_partial_shelves_by_range.aggregate(np.sum)
# calculate some new columns of data for each range
prefix_combined_by_lc = pd.concat([prefix_full_ranges,prefix_partial_ranges], axis=1)
prefix_combined_by_lc['total_occupied_in'] = prefix_combined_by_lc[['occupied_in','partial_occupied_in']].sum(axis=1)
prefix_combined_by_lc['total_capacity_in'] = prefix_combined_by_lc[['capacity','partial_capacity']].sum(axis=1)
prefix_combined_by_lc['total_empty_in'] = prefix_combined_by_lc['total_capacity_in'] - prefix_combined_by_lc['total_occupied_in']
prefix_combined_by_lc['total_empty_pct'] = 100 * (1-prefix_combined_by_lc['total_occupied_in'] / prefix_combined_by_lc['total_capacity_in'])
# now let's generate some plots for fun
prefix_plot = prefix_combined_by_lc['total_empty_in'].plot(kind="barh", stacked=False, figsize=(8,10), title="Empty space (in) for each LC subject prefix")
prefix_fig = prefix_plot.get_figure()
prefix_fig.savefig("prefix_fig.png")
report_content.append("""

""")
# prefix_cumulative = prefix_combined_by_lc[['total_occupied_in','total_capacity_in','total_empty_in']].cumsum()
# print(cumulative)
#cumulative.plot(kind="barh",stacked=True,figsize={8,10})
"""
Explanation: 5. Repeat the above range-level analyses for the full top-level LC subject category prefixes
End of explanation
"""
report_content.append("""
# UNM College of University Libraries and Learning Sciences Shelf Space Analysis
This is where some intro text will go ...
""")
# calculate some overall totals for all LC subjects
prefix_grand_total_capacity_in = prefix_combined_by_lc['total_capacity_in'].sum()
prefix_grand_total_occupied_in = prefix_combined_by_lc['total_occupied_in'].sum()
prefix_grand_total_empty_in = prefix_combined_by_lc['total_empty_in'].sum()
prefix_grand_total_empty_pct = 100 * (prefix_grand_total_empty_in / prefix_grand_total_capacity_in)
# print out the generated information
print("For Library of Congress Subject Prefixes")
print("\tTotal Capacity (in): " + str(prefix_grand_total_capacity_in))
print("\tTotal Occupied (in): " + str(prefix_grand_total_occupied_in))
print("\tTotal Empty Space (in): " + str(prefix_grand_total_empty_in))
print("\tTotal Empty Space (pct): " + str(prefix_grand_total_empty_pct))
print()
# write the information into the report
report_content.append("""
## Summary information for LC subject prefixes
Total Capacity (in): %s\\
Total Occupied (in): %s\\
Total Empty Space (in): %s\\
Total Empty Space (pct): %s\\
"""%(
prefix_grand_total_capacity_in,
prefix_grand_total_occupied_in,
prefix_grand_total_empty_in,
prefix_grand_total_empty_pct
)
)
# calculate some overall totals for all LC subjects
grand_total_capacity_in = combined_by_lc['total_capacity_in'].sum()
grand_total_occupied_in = combined_by_lc['total_occupied_in'].sum()
grand_total_empty_in = combined_by_lc['total_empty_in'].sum()
grand_total_empty_pct = 100 * (grand_total_empty_in / grand_total_capacity_in)
# print out the generated information
print("For Library of Congress Subjects")
print("\tTotal Capacity (in): " + str(grand_total_capacity_in))
print("\tTotal Occupied (in): " + str(grand_total_occupied_in))
print("\tTotal Empty Space (in): " + str(grand_total_empty_in))
print("\tTotal Empty Space (pct): " + str(grand_total_empty_pct))
print()
# write the information into the report
report_content.append("""
## Summary information for LC subjects
Total Capacity (in): %s\\
Total Occupied (in): %s\\
Total Empty Space (in): %s\\
Total Empty Space (pct): %s\\
"""%(
grand_total_capacity_in,
grand_total_occupied_in,
grand_total_empty_in,
grand_total_empty_pct
)
)
# generate some tables for the report
prefix_table = tabulate(prefix_combined_by_lc[['total_occupied_in','total_capacity_in','total_empty_in', 'total_empty_pct']], headers=["Occupied (in)","Capacity (in)","Empty (in)","Empty (pct)"])
subject_table = tabulate(combined_by_lc[['total_occupied_in','total_capacity_in','total_empty_in', 'total_empty_pct']], headers=["Occupied (in)","Capacity (in)","Empty (in)","Empty (pct)"])
report_content.append("""
## Summary Tables
%s
Table: LC Prefix Space Summary Table
%s
Table: LC Subject Space Summary Table
"""%(prefix_table, subject_table))
print(prefix_table)
print(subject_table)
# generate the report
with open(report_file_name,"w") as f:
for block in report_content:
f.write(block)
import subprocess
subprocess.run(['pandoc', '-o', '%s.pdf' % report_file_name, report_file_name])
"""
Explanation: Generate a report with the results
End of explanation
"""
|
stanfordnqp/spins-b | examples/invdes/wdm2/monitor_processing_example.ipynb | gpl-3.0 | ## Import libraries necessary for monitor data processing. ##
from matplotlib import pyplot as plt
import numpy as np
import os
import pandas as pd
import pickle
from spins.invdes.problem_graph import log_tools
## Define filenames. ##
# `save_folder` is the full path to the directory containing the Pickle (.pkl) log files from the optimization.
save_folder = os.getcwd()
## Load the logged monitor data and monitor spec information. ##
# `df` is a pandas dataframe containing all the data loaded from the log Pickle (.pkl) files.
df = log_tools.create_log_data_frame(log_tools.load_all_logs(save_folder))
"""
Explanation: Introduction
This notebook gives examples for processing spins monitor data.
Logging data is stored in monitors that are defined within the optimization plan. Every iteration of the optimization saves a log file in the form of a Pickle file, which contains the values of all the monitors at that point in time. To help process this data, spins includes the log_tools module (spins.invdes.problem_graph.log_tools).
There are 3 general ways that these logs can be processed:
1. Defining a monitor_spec file that describes how you want data to be plotted.
2. Use lower-level log_tools functions to load the data and plot the data.
3. Directly load and process the Pickle files.
Prepare the log data and plotting information.
The following three cells import the necessary libraries and load the optimization monitor data so it can be processed.
A monitor specification file is a YAML file that lists all the monitors to be plotted as well as how they should be plotted (e.g. taking magnitude, real part, etc.). The monitor specification file also allows you to join multiple monitors into one plot (e.g. for joining different monitors across different transformations).
Note that the monitor specification can be generated in code if desired (instead of actually saving it to a YAML file).
End of explanation
"""
# `monitor_spec_filename` is the full path to the monitor spec yml file.
monitor_spec_filename = os.path.join(save_folder, "monitor_spec.yml")
# `monitor_descriptions` now contains the information from the monitor_spec.yml file. It follows the format of
# the schema found in `log_tools.monitor_spec`.
monitor_descriptions = log_tools.load_from_yml(monitor_spec_filename)
## Plot all monitor data and save into a pdf file in the project folder. ##
# `summary_out_name` is the full path to the pdf that will be generated containing plots of all the log data.
summary_out_name = os.path.join(save_folder, "summary.pdf")
# This command plots all the monitor data contained in the log files, saves it to the specified pdf file, and
# displays to the screen.
log_tools.plot_monitor_data(df, monitor_descriptions, summary_out_name)
## Print summary of scalar monitor values to screen during optimization without plotting. ##
# This command is useful to quickly view the current optimization state or
# if one is running an optimization somewhere where plotting to a screen is difficult.
log_tools.display_summary(df, monitor_descriptions)
"""
Explanation: Option 1: Using a monitor specification file.
End of explanation
"""
## Get the iterations and data for a specific 1-dimensional scalar monitor (here, power vs iteration is demonstrated)
## for a specific overlap monitor.
# We call `get_joined_scalar_monitors` because we want the monitor data across all iterations rather than
# just the data for particular transformation or iteration number (contrast with `get_single_monitor_data` usage
# below).
joined_monitor_data = log_tools.get_joined_scalar_monitors(
df, "power1300", event_name="optimizing", scalar_operation="magnitude_squared")
# Now, the iteration numbers are stored in the list iterations and the overlap monitor power values are
# stored in the list data. - If desired, these lists can now be exported for plotting in a different program
# or can be plotted manually by the user in python, as demonstrated next.
iterations = joined_monitor_data.iterations
data = joined_monitor_data.data
## Manually plot the power versus iteration data we've retrieved for the monitor of interest. ##
plt.figure()
plt.plot(iterations, data)
plt.xlabel("Iterations")
plt.ylabel("Transmission")
plt.title("Transmission at 1300 nm")
## Get the data for a specific 2-dimensional field slice monitor. ##
# These functions get the monitor information for the monitor name specified above and return the data associated
# with the monitor name. Here we retrieve the last stored field. We can specify `transformation_name` and
# `iteration` to grab data from a particular transformation or iteration.
field_data = log_tools.get_single_monitor_data(df, "field1550")
# `field_data` is now an array with 3 entries, corresponding to the x-, y-, and z- components of the field,
# so we apply a utility function to get the magnitude of the vector.
field_mag = log_tools.process_field(field_data, vector_operation="magnitude")
## Manually plot this 2-dimensional field data. ##
plt.figure()
plt.imshow(np.squeeze(np.array(field_mag.T)), origin="lower")
plt.title("E-Field Magnitude at 1550 nm")
"""
Explanation: Option 2: Using log_tools to extract the data.
The following 2 cells demonstrate extracting specific monitor data of interest in order to export the data or plot it yourself.
End of explanation
"""
with open(os.path.join(save_folder, "step1.pkl"), "rb") as fp:
data = pickle.load(fp)
print("Log time: ", data["time"])
print("Transmission at 1300 nm: ", data["monitor_data"]["power1300"])
"""
Explanation: Option 3: Directly manipulating Pickle files.
This is the most tedious way of accessing data, as there is one Pickle file per iteration.
However, this enables one to inspect all of the available data.
Note that data formats are subject to change.
End of explanation
"""
|
sarajcev/logreg-linreg | logreg-compare.ipynb | gpl-2.0 | from __future__ import print_function
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns
sns.set(style='darkgrid', font_scale=1.2)
%matplotlib inline
"""
Explanation: Logistic & Linear regression in Python
Logistic and Linear Regression using Seaborn, Statsmodels, and Scikit-learn
Performing logistic regression in Python using:
- regplot function from the seaborn library
- GLM (Generalized Linear Models) with Binomial family and Logit (default) link from the Statsmodels library
- Logit function from the Statsmodels library
- LogisticRegression from the scikit-learn library
Bootstrap method application is demonstrated for the purpose of estimating confidence intervals on logistic regression coefficients and predictions.
Performing linear regression in Python using:
- regplot function from the seaborn library
- OLS (Ordinary Least Squares) from the Statsmodels library
- LinearRegression from the scikit-learn library
Bootstrap method application is demonstrated for the purpose of estimating confidence intervals on linear regression coefficients and predictions.
End of explanation
"""
# load "tips" dataset from the seaborn library
tips = sns.load_dataset('tips')
tips.head()
# Create a categorical variable for the regression analysis
tips["big_tip"] = (tips.tip / tips.total_bill) > .175
tips['big_tip'] = tips['big_tip'].values.astype(int)
tips.head()
"""
Explanation: Load some data for the demonstration purposes
End of explanation
"""
# Seaborn regplot (logistic=True) <= uses GLM from statsmodels under the hood
fig, ax = plt.subplots(figsize=(10,6))
sns.regplot(x='total_bill', y='big_tip', data=tips, logistic=True, n_boot=500, y_jitter=.03, ax=ax)
plt.show()
# Extract the line data from the seaborn figure
xs, ys = ax.get_lines()[0].get_data()
fig, ax = plt.subplots(figsize=(10,6))
ax.plot(xs, ys, ls='-', c='seagreen', lw=3)
ax.set_xlabel('total_bill')
ax.set_ylabel('big_tip')
plt.show()
"""
Explanation: Seaborn regplot function
Seaborn regplot: https://web.stanford.edu/~mwaskom/software/seaborn/generated/seaborn.regplot.html#seaborn.regplot
End of explanation
"""
X = np.c_[np.ones(tips.shape[0]), tips['total_bill']]
y = tips['big_tip']
"""
Explanation: Create endog and exog variables for further usage in statsmodels and scikit-learn routines
End of explanation
"""
# Statsmodels Logistic Regression using GLM with Binomial family and Logit (default) link
model_sm = sm.GLM(y, X, family=sm.families.Binomial()) # notice the order of the endog and exog variables
res_sm = model_sm.fit()
res_sm.summary()
"""
Explanation: Statsmodels GLM function
Statsmodels GLM: http://statsmodels.sourceforge.net/stable/glm.html
End of explanation
"""
# New data for the prediction
support = np.linspace(0, 60, 100)
xnew = np.c_[np.ones(support.size), support] # must be a 2D array
out_sm = res_sm.predict(xnew)
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(10,6))
ax[0].plot(xs, ys, ls='-', c='seagreen', lw=3, label='seaborn')
ax[0].set_ylabel('big_tip')
ax[0].legend(loc='best')
ax[1].plot(support, out_sm, ls='-', c='blue', lw=3, label='Statsmodels GLM')
ax[1].set_xlabel('total_bill')
ax[1].set_ylabel('big_tip')
ax[1].legend(loc='best')
plt.tight_layout()
plt.show()
# Difference between curves
diff = np.sum(abs(ys-out_sm))
print('Total difference: {:g}'.format(diff))
"""
Explanation: Create new data for the prediction
End of explanation
"""
# yhat values
yhat = res_sm.fittedvalues
# residuals
resid = res_sm.resid_response
alpha = 0.05 # 95% confidence interval
n_boot = 1000 # No. of bootstrap samples
const = []; x1 = []
const.append(res_sm.params[0]) # constant term
x1.append(res_sm.params[1]) # x1 value
# Bootstrap
for i in range(n_boot):
resid_boot = np.random.choice(resid, size=len(resid), replace=True)
yboot = yhat + resid_boot
model_boot = sm.GLM(yboot, X, family=sm.families.Binomial())
res_boot = model_boot.fit()
const.append(res_boot.params[0])
x1.append(res_boot.params[1])
# Confidence intervals
def ci(var):
coef = np.asarray(var)
c_mean = np.mean(coef)
c_std = np.std(coef)
ql = (alpha/2)*100.
qh = (1 - alpha/2)*100.
ci_low = np.percentile(coef, ql, interpolation='midpoint')
ci_high = np.percentile(coef, qh, interpolation='midpoint')
return c_mean, c_std, ci_low, ci_high
# Const
cm, cs, cl, ch = ci(const)
# Coeff
x1m, x1s, x1l, x1h = ci(x1)
print('Coefficiens of the logistic regression (compare with output from GLM above):')
print('Const: Mean value: {:7.4f}, Std. error: {:7.4f}, 95% Conf. Int.: {:7.4f} to {:7.4f}'.format(cm, cs, cl, ch))
print('x1: Mean value: {:7.4f}, Std. error: {:7.4f}, 95% Conf. Int.: {:7.4f} to {:7.4f}'.format(x1m, x1s, x1l, x1h))
"""
Explanation: Bootstrap confidence intervals for the coefficients of logistic regression
Bootstrap wiki: https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
End of explanation
"""
# Statsmodels Logit function
model_logit = sm.Logit(y, X) # notice the order of the endog and exog variables
res_logit = model_logit.fit()
res_logit.summary()
out_logit = res_logit.predict(xnew)
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(10,6))
ax[0].plot(xs, ys, ls='-', c='seagreen', lw=3, label='seaborn')
ax[0].set_ylabel('big_tip')
ax[0].legend(loc='best')
ax[1].plot(support, out_logit, ls='-', c='blue', lw=3, label='Logit')
ax[1].set_xlabel('total_bill')
ax[1].set_ylabel('big_tip')
ax[1].legend(loc='best')
plt.tight_layout()
plt.show()
# Difference between curves
diff = np.sum(abs(ys-out_logit))
print('Total difference: {:g}'.format(diff))
"""
Explanation: Statsmodels Logit function
Statsmodels Logit: http://statsmodels.sourceforge.net/stable/generated/statsmodels.discrete.discrete_model.Logit.html
End of explanation
"""
# Scikit-learn Logistic Regression
model_sk = LogisticRegression(C=1e6, fit_intercept=False) # constant term is already in exog
res_sk = model_sk.fit(X, y) # notice the order of the endog and exog variables
res_sk.coef_
out_sk = res_sk.predict_proba(xnew) # returns a 2D array; second column is important
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(10,6))
ax[0].plot(xs, ys, ls='-', c='seagreen', lw=3, label='seaborn')
ax[0].set_ylabel('big_tip')
ax[0].legend(loc='best')
ax[1].plot(support, out_sk[:,1], ls='-', c='blue', lw=3, label='scikit-learn')
ax[1].set_xlabel('total_bill')
ax[1].set_ylabel('big_tip')
ax[1].legend(loc='best')
plt.tight_layout()
plt.show()
# Difference between curves
diff = np.sum(abs(ys-out_sk[:,1]))
print('Total difference: {:g}'.format(diff))
"""
Explanation: Scikit-learn LogisticRegression function
Scikit-learn LogisticRegression: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
End of explanation
"""
alpha = 0.05 # 95% confidence interval
n_boot = 1000 # No. of bootstrap samples
y_hat = res_sm.fittedvalues # fittedvalues = np.dot(exog, params)
residuals = res_sm.resid_response # residuals = endog - fittedvalues
values = []
# Bootstrap
for i in range(n_boot):
resid_boot = np.random.choice(residuals, size=len(residuals), replace=True)
yboot = y_hat + resid_boot
model_boot = sm.GLM(yboot, X, family=sm.families.Binomial())
res_boot = model_boot.fit()
# Prediction values
out_boot = res_boot.predict(xnew)
values.append(out_boot)
values = np.asarray(values)
# Means and standard deviations of predicted values
means = np.mean(values, axis=0)
stds = np.std(values, axis=0)
ql = (alpha/2)*100.
qh = (1 - alpha/2)*100.
ci_lows = np.percentile(values, ql, axis=0, interpolation='midpoint')
ci_higs = np.percentile(values, qh, axis=0, interpolation='midpoint')
fig, ax = plt.subplots(figsize=(10,6))
ax.plot(xs, ys, ls='-', c='red', lw=3, label='from seaborn')
ax.plot(support, means, c='blue', ls='-', lw=3, label='from bootstrap')
ax.plot(support, ci_lows, c='blue', ls='--', lw=1, label='CI lower')
ax.plot(support, ci_higs, c='blue', ls='-.', lw=1, label='CI upper')
ax.fill_between(support, ci_lows, ci_higs, facecolor='lightblue', interpolate=True, alpha=0.5)
ax.legend(loc='best')
ax.set_xlabel('total_bill')
ax.set_ylabel('big_tip')
plt.show()
# Jitter data in scatter plot
def jitter(arr, size=0.01):
stdev = size*(max(arr)-min(arr))
return arr + np.random.randn(len(arr)) * stdev
# Comparing bootstrap confidence intervals with seaborn
fig, ax = plt.subplots(1, 2, sharey=True, figsize=(10,6))
sns.regplot(x='total_bill', y='big_tip', data=tips, logistic=True, n_boot=500, y_jitter=.03, ax=ax[0])
ax[1].scatter(tips['total_bill'], jitter(tips['big_tip'], size=0.02), color='blue', alpha=0.5)
ax[1].plot(support, means, c='blue', ls='-', lw=3)
ax[1].fill_between(support, ci_lows, ci_higs, facecolor='lightblue', edgecolor='lightblue', interpolate=True, alpha=0.5)
ax[1].set_xlabel('total_bill')
ax[1].set_xlim(0,60)
plt.tight_layout()
plt.show()
"""
Explanation: Bootstrap the confidence interval on predicted values
Demonstrate using bootstrap to compute the confidence interval on predicted values. Logistic regression using GLM from statsmodels.
End of explanation
"""
# Seaborn regplot
fig, ax = plt.subplots(figsize=(10,6))
sns.regplot(x='total_bill', y='tip', data=tips, ax=ax)
plt.show()
# Extract the line data from the seaborn figure
xs, ys = ax.get_lines()[0].get_data()
"""
Explanation: Linear Regression
Seaborn regplot function
Seaborn regplot: https://web.stanford.edu/~mwaskom/software/seaborn/generated/seaborn.regplot.html#seaborn.regplot
End of explanation
"""
X = np.c_[np.ones(tips.shape[0]), tips['total_bill']]
y = tips['tip']
"""
Explanation: Create endog and exog variables for further usage in statsmodels and scikit-learn routines
End of explanation
"""
# Statsmodels Linear Regression using OLS
model_ols = sm.OLS(y, X) # notice the order of the endog and exog variables
res_ols = model_ols.fit()
res_ols.summary()
# New data for the prediction
support = np.linspace(0, 60, 100)
xnew = np.c_[np.ones(support.size), support] # must be a 2D array
out_ols = res_ols.predict(xnew)
# Difference between curves
diff = np.sum(abs(ys-out_ols))
print('Total difference: {:g}'.format(diff))
"""
Explanation: Statsmodels OLS function
Statsmodels OLS: http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLS.html#statsmodels.regression.linear_model.OLS
End of explanation
"""
alpha = 0.05 # 95% confidence interval
n_boot = 1000 # No. of bootstrap samples
y_hat = res_ols.fittedvalues # fittedvalues = np.dot(exog, params)
residuals = res_ols.resid # residuals = endog - fittedvalues
values = []
# Bootstrap
for i in range(n_boot):
resid_boot = np.random.choice(residuals, size=len(residuals), replace=True)
yboot = y_hat + resid_boot
model_boot = sm.OLS(yboot, X)
res_boot = model_boot.fit()
# Prediction values
out_boot = res_boot.predict(xnew)
values.append(out_boot)
values = np.asarray(values)
# Means and standard deviations of predicted values
means = np.mean(values, axis=0)
stds = np.std(values, axis=0)
ql = (alpha/2)*100.
qh = (1 - alpha/2)*100.
ci_lows = np.percentile(values, ql, axis=0, interpolation='midpoint')
ci_higs = np.percentile(values, qh, axis=0, interpolation='midpoint')
fig, ax = plt.subplots(figsize=(10,6))
ax.plot(xs, ys, ls='-', c='red', lw=3, label='from seaborn')
ax.plot(support, means, c='blue', ls='-', lw=3, label='from bootstrap')
ax.plot(support, ci_lows, c='blue', ls='--', lw=1, label='CI lower')
ax.plot(support, ci_higs, c='blue', ls='-.', lw=1, label='CI upper')
ax.fill_between(support, ci_lows, ci_higs, facecolor='lightblue', interpolate=True, alpha=0.5)
ax.legend(loc='best')
ax.set_xlabel('total_bill')
ax.set_ylabel('tip')
plt.show()
# Comparing bootstrap confidence intervals with seaborn
fig, ax = plt.subplots(1, 2, sharey=True, figsize=(10,6))
sns.regplot(x='total_bill', y='tip', data=tips, ax=ax[0])
ax[1].scatter(tips['total_bill'], tips['tip'], color='blue', alpha=0.5)
ax[1].plot(support, means, c='blue', ls='-', lw=2)
ax[1].fill_between(support, ci_lows, ci_higs, facecolor='lightblue', edgecolor='lightblue', interpolate=True, alpha=0.5)
ax[1].set_xlabel('total_bill')
ax[1].set_xlim(0,60)
plt.tight_layout()
plt.show()
"""
Explanation: Bootstrap the confidence interval on predicted values
Bootstrap wiki: https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
Demonstrate using bootstrap to compute the confidence interval on predicted values. Linear regression using OLS from statsmodels.
End of explanation
"""
from statsmodels.sandbox.regression.predstd import wls_prediction_std
# Computing prediction intervals from OLS regression
prstd, iv_l, iv_u = wls_prediction_std(res_ols, exog=xnew, alpha=0.05) # notice the exog parameter
fig, ax = plt.subplots(figsize=(10,6))
ax.plot(support, means, c='blue', ls='-', lw=3, label='OLS')
ax.plot(support, ci_lows, c='blue', ls='--', lw=1, label='lower confidence limit')
ax.plot(support, ci_higs, c='blue', ls='-.', lw=1, label='upper confidence limit')
ax.fill_between(support, ci_lows, ci_higs, facecolor='lightblue', interpolate=True, alpha=0.5)
ax.plot(support, iv_l, c='green', ls='--', lw=2, label='lower prediction limit')
ax.plot(support, iv_u, c='green', ls='-.', lw=2, label='upper prediction limit')
ax.legend(loc='best')
ax.set_xlabel('total_bill')
ax.set_ylabel('tip')
plt.show()
"""
Explanation: Prediction intervals for linear regression
Ordinary Least Squares: http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/ols.html
End of explanation
"""
# Scikit-learn Linear Regression
model_skl = LinearRegression(fit_intercept=False) # constant term is already in exog
res_skl = model_skl.fit(X, y) # notice the order of the endog and exog variables
res_skl.coef_
# Predictions
out_skl = res_skl.predict(xnew)
fig, ax = plt.subplots(figsize=(10,6))
ax.plot(xs, ys, ls='-', c='red', lw=2, label='from seaborn')
ax.plot(support, out_ols, c='green', ls='-', lw=2, label='from statsmodels')
ax.plot(support, out_skl, c='blue', ls='-', lw=2, label='from scikit-learn')
ax.legend(loc='best')
ax.set_xlabel('total_bill')
ax.set_ylabel('tip')
plt.show()
# Difference between curves (seaborn vs scikit-learn)
diff = np.sum(abs(ys-out_skl))
print('Total difference: {:g}'.format(diff))
"""
Explanation: Scikit-learn LinearRegression function
Scikit-learn LinearRegression: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
End of explanation
"""
from statsmodels.stats.outliers_influence import summary_table
# Simple example
x = np.linspace(0, 10, 100)
e = np.random.normal(size=100)
y = 1 + 0.5*x + 2*e
X = sm.add_constant(x)
# OLS model
re = sm.OLS(y, X).fit()
# Summary table
st, data, ss2 = summary_table(re, alpha=0.05)
print(ss2) # column names
# Fitted values
fittedvalues = data[:,2]
# Confidence interval
predict_mean_ci_low, predict_mean_ci_upp = data[:,4:6].T
# Prediction interval
predict_ci_low, predict_ci_upp = data[:,6:8].T
fig, ax = plt.subplots(figsize=(10,6))
# Scatter
ax.scatter(x, y, color='blue', alpha=0.5)
# Prediction interval
ax.plot(x, predict_ci_low, c='red', ls='--', lw=2, label='Predict low')
ax.plot(x, predict_ci_upp, c='red', ls='-.', lw=2, label='Predict high')
ax.fill_between(x, predict_ci_low, predict_ci_upp, facecolor='#FF3333', interpolate=True, alpha=0.2)
# Confidence interval
ax.plot(x, predict_mean_ci_low, c='blue', ls='--', lw=2, label='CI low')
ax.plot(x, predict_mean_ci_upp, c='blue', ls='-.', lw=2, label='CI high')
ax.fill_between(x, predict_mean_ci_low, predict_mean_ci_upp, facecolor='lightblue', interpolate=True, alpha=0.8)
# OLS regression line
ax.plot(x, fittedvalues, c='seagreen', ls='-', lw=2, label='OLS')
ax.legend(loc='best')
ax.set_xlim(0, 10)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
"""
Explanation: Alternative way of computing confidence and prediction intervals for linear regression
Stackoverflow: https://stackoverflow.com/questions/17559408/confidence-and-prediction-intervals-with-statsmodels
End of explanation
"""
# Example of using Student's t distribution for computing
# confidence levels on linear regression coefficients
n = len(x)
alpha = 0.05 # 95% confidence level
xbar = np.mean(x)
ybar = np.mean(y)
b1 = (np.sum(x*y)-n*xbar*ybar)/(np.sum(x**2)-n*xbar**2) # or b1 = re.params[1] # from OLS
b0 = ybar - b1*xbar # or b0 = re.params[0] # from OLS
# Compute correlation coefficient
# yhat = re.fittedvalues # or yhat = b0 + b1*x
# r = np.sqrt(((np.sum((y-ybar)**2))-(np.sum((y-yhat)**2)))/(np.sum((y-ybar)**2)))
r = np.corrcoef(x,y)[0,1]
s = np.sqrt(((1.-r**2) * np.sum((y-ybar)**2))/(n-2.))
s_b0 = s*np.sqrt((1./n) + xbar**2/(np.sum((x-xbar)**2)))
s_b1 = s/(np.sqrt(np.sum((x-xbar)**2)))
q = 1. - alpha/2
t = stats.t.ppf(q, n-2)
b0_low = b0 - t*s_b0
b0_high = b0 + t*s_b0
b1_low = b1 - t*s_b1
b1_high = b1 + t*s_b1
print('Const: Mean {:7.4f}, 95% CI {:7.4f} to {:7.4f}'.format(b0, b0_low, b0_high))
print(' x1: Mean {:7.4f}, 95% CI {:7.4f} to {:7.4f}'.format(b1, b1_low, b1_high))
re.summary()
"""
Explanation: Classical statistical view of the confidence and prediction intervals for linear regression
For the linear model:
$y_i = \beta_0 + \beta_1 x_i + \epsilon_i$
the $100(1-\alpha)$ % confidence intervals for $\beta_0$ and $\beta_1$ are given by:
$\hat \beta_0 \pm t_{n-2,\alpha/2} \cdot s_{\beta_0}$
$\hat \beta_1 \pm t_{n-2,\alpha/2} \cdot s_{\beta_1}$
with:
$\hat \beta_1 = {{\sum_{i=1}^n{x_iy_i} - n \bar x \bar y} \over {\sum_{i=1}^n{x_i^2}-n \bar x^2}}$
$\hat \beta_0 = \bar y - \hat \beta_1 \bar x$
and
$s_{\hat \beta_0} = s \cdot \sqrt{{1\over n} + {\bar x^2 \over \sum_{i=1}^n{(x_i-\bar x)^2}}}$
$s_{\hat \beta_1} = {s \over \sqrt{\sum_{i=1}^n{(x_i-\bar x)^2}}}$
$s = \sqrt{{(1-r^2)\sum_{i=1}^n{(y_i-\bar y)^2}}\over{n-2}}$
where:
n - number of samples;
$\bar x$, $\bar y$ - mean values of x and y, respectively
r - coefficient of correlation between x and y
$t_{n-2,\alpha/2}$ - point on the Student's t curve with n-2 degrees of freedom that cuts the area of $\alpha/2$ in the right-hand tail.
This means that the confidence intervals for $\beta_0$ and $\beta_1$ can be derived in exactly the same way as the Student's t based confidence interval for a population mean.
Confidence interval
A level $100(1-\alpha)$ % confidence interval for the quantity $\beta_0 + \beta_1 x$ is given by:
$\hat \beta_0 + \hat \beta_1 x \pm t_{n-2,\alpha/2} \cdot s_{\hat y}$
where:
$s_{\hat y} = s \cdot \sqrt{{1\over n} + {(x-\bar x)^2 \over \sum_{i=1}^n{(x_i-\bar x)^2}}}$
Prediction interval
A level $100(1-\alpha)$ % prediction interval for the quantity $\beta_0 + \beta_1 x$ is given by:
$\hat \beta_0 + \hat \beta_1 x \pm t_{n-2,\alpha/2} \cdot s_{pred}$
where:
$s_{pred} = s \cdot \sqrt{1 + {1\over n} + {(x-\bar x)^2 \over \sum_{i=1}^n{(x_i-\bar x)^2}}}$
End of explanation
"""
|
changhoonhahn/centralMS | notebook/local_sfs_prior.ipynb | mit | import numpy as np
# -- centralms --
from centralMS import util as UT
from centralMS import sfh as SFH
from centralMS import catalog as Cat
import corner as DFM
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
%matplotlib inline
mbin = np.linspace(9., 12., 100)
fsdss = Cat.SDSSgroupcat(Mrcut=18, censat='central')
sdss = fsdss.Read()
fig = plt.figure()
sub = fig.add_subplot(111)
sub.scatter(sdss['mstar'], sdss['sfr'], c='k', s=1)
sub.plot(mbin, SFH.SFR_sfms(mbin, 0.05, {'name': 'anchored', 'slope':0, 'amp': 0}), c='C0', lw=2, ls='--')
sub.plot(mbin, SFH.SFR_sfms(mbin, 0.05, {'name': 'flex', 'mslope': 0.55, 'zslope': 1.2}), c='C1', lw=2, ls='--')
sub.set_xlabel('$\log\,M_*$', fontsize=25)
sub.set_xlim([9., 12.])
sub.set_ylabel('$\log\,\mathrm{SFR}$', fontsize=25)
sub.set_ylim([-3., 1.])
# Lee et al. (2015)
logSFR_lee = lambda mm: 1.53 - np.log10(1 + (10**mm/10**(10.10))**-1.26)
# Noeske et al. (2007) 0.85 < z< 1.10 (by eye)
logSFR_noeske = lambda mm: (1.580 - 1.064)/(11.229 - 10.029)*(mm - 10.0285) + 1.0637
# Moustakas et al. (2013) 0.8 < z < 1. (by eye)
logSFR_primus = lambda mm: (1.3320 - 1.296)/(10.49 - 9.555) * (mm-9.555) + 1.297
# Hahn et al. (2017)
logSFR_hahn = lambda mm: 0.53*(mm-10.5) + 1.1*(1.-0.05) - 0.11
fig = plt.figure()
sub = fig.add_subplot(111)
sub.set_title('Prior range for "anchored" SFS prescription', fontsize=15)
sub.plot(mbin, logSFR_lee(mbin), c='C0', lw=2, ls='-')
sub.plot(mbin, logSFR_noeske(mbin), c='C1', lw=2, ls='-')
sub.plot(mbin, logSFR_primus(mbin), c='C2', lw=2, ls='-')
sub.plot(mbin, logSFR_hahn(mbin), c='C3', lw=2, ls='-')
sfs_anchored = np.zeros((len(mbin), 4))
sfs_anchored[:,0] = SFH.SFR_sfms(mbin, 1., {'name': 'anchored', 'slope':-0.5, 'amp': 0.5})
sfs_anchored[:,1] = SFH.SFR_sfms(mbin, 1., {'name': 'anchored', 'slope':-0.5, 'amp': 2.5})
sfs_anchored[:,2] = SFH.SFR_sfms(mbin, 1., {'name': 'anchored', 'slope':0.5, 'amp': 2.5})
sfs_anchored[:,3] = SFH.SFR_sfms(mbin, 1., {'name': 'anchored', 'slope':0.5, 'amp': 0.5})
sub.fill_between(mbin, np.min(sfs_anchored, axis=1), np.max(sfs_anchored, axis=1), color='k', alpha=0.25)
sub.plot(mbin, SFH.SFR_sfms(mbin, 0.05, {'name': 'anchored', 'slope':0, 'amp': 0}), c='k', lw=2, ls='--')
sub.set_xlabel('$\log\,M_*$', fontsize=25)
sub.set_xlim([9., 12.])
sub.set_ylabel('$\log\,\mathrm{SFR}$', fontsize=25)
sub.set_ylim([-1., 3.])
"""
Explanation: Currently there are two implementations of the SFS:
$$\log \mathrm{SFR}_\mathrm{SFS}(M_*, z) = m_{M_*} (\log\,M_* - 10.5) + m_z (z - 0.05) - 0.11$$
$$\log \mathrm{SFR}_\mathrm{SFS}(M_*, z) = (0.58 (\log\,M_* - 10.5) - 0.13868) + (z - 0.05) (m (\log\,M_* - 10.5) + b)$$
Let's determine the range of parameters that encompasses the observations, which will act as priors
End of explanation
"""
fig = plt.figure()
sub = fig.add_subplot(111)
sub.set_title('Prior range for "flex" SFS prescription', fontsize=15)
sub.plot(mbin, logSFR_lee(mbin), c='C0', lw=2, ls='-')
sub.plot(mbin, logSFR_noeske(mbin), c='C1', lw=2, ls='-')
sub.plot(mbin, logSFR_primus(mbin), c='C2', lw=2, ls='-')
sub.plot(mbin, logSFR_hahn(mbin), c='C3', lw=2, ls='-')
sfs_anchored = np.zeros((len(mbin), 4))
sfs_anchored[:,0] = SFH.SFR_sfms(mbin, 1., {'name': 'flex', 'mslope':0.0, 'zslope': 1.})
sfs_anchored[:,1] = SFH.SFR_sfms(mbin, 1., {'name': 'flex', 'mslope':0.0, 'zslope': 2.})
sfs_anchored[:,2] = SFH.SFR_sfms(mbin, 1., {'name': 'flex', 'mslope':0.8, 'zslope': 1.})
sfs_anchored[:,3] = SFH.SFR_sfms(mbin, 1., {'name': 'flex', 'mslope':0.8, 'zslope': 2.})
sub.fill_between(mbin, np.min(sfs_anchored, axis=1), np.max(sfs_anchored, axis=1), color='k', alpha=0.25)
sub.plot(mbin, SFH.SFR_sfms(mbin, 0.05, {'name': 'flex', 'mslope': 0.55, 'zslope': 1.2}), c='k', lw=2, ls='--')
sub.set_xlabel('$\log\,M_*$', fontsize=25)
sub.set_xlim([9., 12.])
sub.set_ylabel('$\log\,\mathrm{SFR}$', fontsize=25)
sub.set_ylim([-1., 3.])
"""
Explanation: Prior range: $m_z = [-0.5, 0.5]$, $b_z = [0.5, 2.5]$
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp | day02-PyTORCH-and-PyCUDA/PyTorch/18-PyTorch-NUMER.AI-Binary-Classification-BCELoss.ipynb | mit | import torch
import sys
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
from torchvision import transforms
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from sklearn import cross_validation
from sklearn import metrics
from sklearn.metrics import roc_auc_score, log_loss, roc_curve, auc
from sklearn.cross_validation import StratifiedKFold, ShuffleSplit, cross_val_score, train_test_split
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
# call(["nvcc", "--version"]) does not work
! nvcc --version
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
# call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"])
print('Active CUDA Device: GPU', torch.cuda.current_device())
print ('Available devices ', torch.cuda.device_count())
print ('Current cuda device ', torch.cuda.current_device())
import numpy
import numpy as np
use_cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
Tensor = FloatTensor
import pandas
import pandas as pd
import logging
handler=logging.basicConfig(level=logging.INFO)
lgr = logging.getLogger(__name__)
%matplotlib inline
# !pip install psutil
import psutil
import os
def cpuStats():
print(sys.version)
print(psutil.cpu_percent())
print(psutil.virtual_memory()) # physical memory usage
pid = os.getpid()
py = psutil.Process(pid)
memoryUse = py.memory_info()[0] / 2. ** 30 # memory use in GB...I think
print('memory GB:', memoryUse)
cpuStats()
"""
Explanation: Deep Learning Bootcamp November 2017, GPU Computing for Data Scientists
<img src="../images/bcamp.png" align="center">
18 PyTorch NUMER.AI Deep Learning Binary Classification using BCELoss
Web: https://www.meetup.com/Tel-Aviv-Deep-Learning-Bootcamp/events/241762893/
Notebooks: <a href="https://github.com/QuantScientist/Data-Science-PyCUDA-GPU"> On GitHub</a>
Shlomo Kashani
<img src="../images/pt.jpg" width="35%" align="center">
What does a Numerai competition consist of?
Numerai provides payments based on the number of correctly predicted labels (LOG_LOSS) in a data set which changes every week.
Two data-sets are provided: numerai_training_data.csv and numerai_tournament_data.csv
Criteria
On top of LOG_LOSS, they also measure:
Consistency
Originality
Concordance
PyTorch and Numerai
This tutorial was written in order to demonstrate a fully working example of a PyTorch NN on a real world use case, namely a Binary Classification problem on the NumerAI data set. If you are interested in the sk-learn version of this problem please refer to: https://github.com/QuantScientist/deep-ml-meetups/tree/master/hacking-kaggle/python/numer-ai
For the scientific foundation behind Binary Classification and Logistic Regression, refer to: https://github.com/QuantScientist/Deep-Learning-Boot-Camp/tree/master/Data-Science-Interviews-Book
Every step, from reading the CSV into numpy arrays, converting to GPU-based tensors, to training and validation, is meant to aid newcomers in their first steps in PyTorch.
Additionally, commonly used Kaggle metrics such as ROC_AUC and LOG_LOSS are logged and plotted both for the training set as well as for the validation set.
Thus, the NN architecture is naive and by no means optimized. Hopefully, I will improve it over time and I am working on a second CNN based version of the same problem.
Data
Download from https://numer.ai/leaderboard
<img src="../images/numerai-logo.png" width="35%" align="center">
PyTorch Imports
End of explanation
"""
# %%timeit
use_cuda = torch.cuda.is_available()
# use_cuda = False
FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
Tensor = FloatTensor
lgr.info("USE CUDA=" + str (use_cuda))
torch.backends.cudnn.enabled=False
# torch.backends.cudnn.enabled=True
# ! watch -n 0.1 'ps f -o user,pgrp,pid,pcpu,pmem,start,time,command -p `lsof -n -w -t /dev/nvidia*`'
# sudo apt-get install dstat #install dstat
# sudo pip install nvidia-ml-py #install Python NVIDIA Management Library
# wget https://raw.githubusercontent.com/datumbox/dstat/master/plugins/dstat_nvidia_gpu.py
# sudo mv dstat_nvidia_gpu.py /usr/share/dstat/ #move file to the plugins directory of dstat
"""
Explanation: CUDA
End of explanation
"""
# Data params
TARGET_VAR= 'target'
TOURNAMENT_DATA_CSV = 'numerai_tournament_data.csv'
TRAINING_DATA_CSV = 'numerai_training_data.csv'
BASE_FOLDER = 'numerai/'
# fix seed
seed=17*19
np.random.seed(seed)
torch.manual_seed(seed)
if use_cuda:
torch.cuda.manual_seed(seed)
"""
Explanation: Global params
End of explanation
"""
# %%timeit
df_train = pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV)
df_train.head(5)
"""
Explanation: Load a CSV file for Binary classification (numpy)
As mentioned, NumerAI provided numerai_training_data.csv and numerai_tournament_data.csv.
Training_data.csv is labeled
Numerai_tournament_data.csv has lebles for the validation set and no labels for the test set. See belo how I seperate them.
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
from collections import defaultdict
# def genBasicFeatures(inDF):
# print('Generating basic features ...')
# df_copy=inDF.copy(deep=True)
# magicNumber=21
# feature_cols = list(inDF.columns)
# inDF['x_mean'] = np.mean(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_median'] = np.median(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_std'] = np.std(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_skew'] = scipy.stats.skew(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_kurt'] = scipy.stats.kurtosis(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_var'] = np.var(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_max'] = np.max(df_copy.ix[:, 0:magicNumber], axis=1)
# inDF['x_min'] = np.min(df_copy.ix[:, 0:magicNumber], axis=1)
# return inDF
def addPolyFeatures(inDF, deg=2):
print('Generating poly features ...')
df_copy=inDF.copy(deep=True)
poly=PolynomialFeatures(degree=deg)
p_testX = poly.fit(df_copy)
# AttributeError: 'PolynomialFeatures' object has no attribute 'get_feature_names'
target_feature_names = ['x'.join(['{}^{}'.format(pair[0],pair[1]) for pair in tuple if pair[1]!=0]) for tuple in [zip(df_copy.columns,p) for p in poly.powers_]]
df_copy = pd.DataFrame(p_testX.transform(df_copy),columns=target_feature_names)
return df_copy
def oneHOT(inDF):
d = defaultdict(LabelEncoder)
X_df=inDF.copy(deep=True)
# Encoding the variable
X_df = X_df.apply(lambda x: d['era'].fit_transform(x))
return X_df
"""
Explanation: Feature enrichment
This would usually not be required when using NNs; it is here for demonstration purposes.
End of explanation
"""
from sklearn import preprocessing
# Train, Validation, Test Split
def loadDataSplit():
df_train = pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV)
# TOURNAMENT_DATA_CSV has both validation and test data provided by NumerAI
df_test_valid = pd.read_csv(BASE_FOLDER + TOURNAMENT_DATA_CSV)
answers_1_SINGLE = df_train[TARGET_VAR]
df_train.drop(TARGET_VAR, axis=1,inplace=True)
df_train.drop('id', axis=1,inplace=True)
df_train.drop('era', axis=1,inplace=True)
df_train.drop('data_type', axis=1,inplace=True)
# df_train=oneHOT(df_train)
df_train.to_csv(BASE_FOLDER + TRAINING_DATA_CSV + 'clean.csv', header=False, index = False)
df_train= pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV + 'clean.csv', header=None, dtype=np.float32)
df_train = pd.concat([df_train, answers_1_SINGLE], axis=1)
feature_cols = list(df_train.columns[:-1])
# print (feature_cols)
target_col = df_train.columns[-1]
trainX, trainY = df_train[feature_cols], df_train[target_col]
# TOURNAMENT_DATA_CSV has both validation and test data provided by NumerAI
# Validation set
df_validation_set=df_test_valid.loc[df_test_valid['data_type'] == 'validation']
df_validation_set=df_validation_set.copy(deep=True)
answers_1_SINGLE_validation = df_validation_set[TARGET_VAR]
df_validation_set.drop(TARGET_VAR, axis=1,inplace=True)
df_validation_set.drop('id', axis=1,inplace=True)
df_validation_set.drop('era', axis=1,inplace=True)
df_validation_set.drop('data_type', axis=1,inplace=True)
# df_validation_set=oneHOT(df_validation_set)
df_validation_set.to_csv(BASE_FOLDER + TRAINING_DATA_CSV + '-validation-clean.csv', header=False, index = False)
df_validation_set= pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV + '-validation-clean.csv', header=None, dtype=np.float32)
df_validation_set = pd.concat([df_validation_set, answers_1_SINGLE_validation], axis=1)
feature_cols = list(df_validation_set.columns[:-1])
target_col = df_validation_set.columns[-1]
valX, valY = df_validation_set[feature_cols], df_validation_set[target_col]
# Test set for submission (not labeled)
df_test_set = pd.read_csv(BASE_FOLDER + TOURNAMENT_DATA_CSV)
# df_test_set=df_test_set.loc[df_test_valid['data_type'] == 'live']
df_test_set=df_test_set.copy(deep=True)
df_test_set.drop(TARGET_VAR, axis=1,inplace=True)
tid_1_SINGLE = df_test_set['id']
df_test_set.drop('id', axis=1,inplace=True)
df_test_set.drop('era', axis=1,inplace=True)
df_test_set.drop('data_type', axis=1,inplace=True)
# df_test_set=oneHOT(df_validation_set)
feature_cols = list(df_test_set.columns) # must be run here, we dont want the ID
# print (feature_cols)
df_test_set = pd.concat([tid_1_SINGLE, df_test_set], axis=1)
testX = df_test_set[feature_cols].values
return trainX, trainY, valX, valY, testX, df_test_set
# %%timeit
trainX, trainY, valX, valY, testX, df_test_set = loadDataSplit()
min_max_scaler = preprocessing.MinMaxScaler()
# # Number of features for the input layer
N_FEATURES=trainX.shape[1]
print (trainX.shape)
print (trainY.shape)
print (valX.shape)
print (valY.shape)
print (testX.shape)
print (df_test_set.shape)
# print (trainX)
"""
Explanation: Train / Validation / Test Split
Numerai provides a data set that is allready split into train, validation and test sets.
End of explanation
"""
# seperate out the Categorical and Numerical features
import seaborn as sns
numerical_feature=trainX.dtypes[trainX.dtypes!= 'object'].index
categorical_feature=trainX.dtypes[trainX.dtypes== 'object'].index
print ("There are {} numeric and {} categorical columns in train data".format(numerical_feature.shape[0],categorical_feature.shape[0]))
corr=trainX[numerical_feature].corr()
sns.heatmap(corr)
from pandas import *
import numpy as np
from scipy.stats.stats import pearsonr
import itertools
# from https://stackoverflow.com/questions/17778394/list-highest-correlation-pairs-from-a-large-correlation-matrix-in-pandas
def get_redundant_pairs(df):
'''Get diagonal and lower triangular pairs of correlation matrix'''
pairs_to_drop = set()
cols = df.columns
for i in range(0, df.shape[1]):
for j in range(0, i+1):
pairs_to_drop.add((cols[i], cols[j]))
return pairs_to_drop
def get_top_abs_correlations(df, n=5):
au_corr = df.corr().abs().unstack()
labels_to_drop = get_redundant_pairs(df)
au_corr = au_corr.drop(labels=labels_to_drop).sort_values(ascending=False)
return au_corr[0:n]
print("Top Absolute Correlations")
print(get_top_abs_correlations(trainX, 5))
"""
Explanation: Correlated columns
Correlation plot
Scatter plots
End of explanation
"""
# Convert the np arrays into the correct dimention and type
# Note that BCEloss requires Float in X as well as in y
def XnumpyToTensor(x_data_np):
x_data_np = np.array(x_data_np, dtype=np.float32)
print(x_data_np.shape)
print(type(x_data_np))
if use_cuda:
lgr.info ("Using the GPU")
X_tensor = Variable(torch.from_numpy(x_data_np).cuda()) # Note the conversion for pytorch
else:
lgr.info ("Using the CPU")
X_tensor = Variable(torch.from_numpy(x_data_np)) # Note the conversion for pytorch
print(type(X_tensor.data)) # should be 'torch.cuda.FloatTensor'
print(x_data_np.shape)
print(type(x_data_np))
return X_tensor
# Convert the np arrays into the correct dimention and type
# Note that BCEloss requires Float in X as well as in y
def YnumpyToTensor(y_data_np):
y_data_np=y_data_np.reshape((y_data_np.shape[0],1)) # Must be reshaped for PyTorch!
print(y_data_np.shape)
print(type(y_data_np))
if use_cuda:
lgr.info ("Using the GPU")
# Y = Variable(torch.from_numpy(y_data_np).type(torch.LongTensor).cuda())
Y_tensor = Variable(torch.from_numpy(y_data_np)).type(torch.FloatTensor).cuda() # BCEloss requires Float
else:
lgr.info ("Using the CPU")
# Y = Variable(torch.squeeze (torch.from_numpy(y_data_np).type(torch.LongTensor))) #
Y_tensor = Variable(torch.from_numpy(y_data_np)).type(torch.FloatTensor) # BCEloss requires Float
print(type(Y_tensor.data)) # should be 'torch.cuda.FloatTensor'
print(y_data_np.shape)
print(type(y_data_np))
return Y_tensor
"""
Explanation: Create PyTorch GPU tensors from numpy arrays
Note how we transfrom the np arrays
End of explanation
"""
# p is the probability of being dropped in PyTorch
# p is the probability of being dropped in PyTorch
DROPOUT_PROB = 0.65
LR = 0.005
MOMENTUM= 0.9
dropout = torch.nn.Dropout(p=1 - (DROPOUT_PROB))
sigmoid = torch.nn.Sigmoid()
tanh=torch.nn.Tanh()
relu=torch.nn.LeakyReLU()
lgr.info(dropout)
hiddenLayer1Size=64
hiddenLayer2Size=int(hiddenLayer1Size/2)
hiddenLayer3Size=int(hiddenLayer1Size/2)
linear1=torch.nn.Linear(N_FEATURES, hiddenLayer1Size, bias=True)
torch.nn.init.xavier_uniform(linear1.weight)
linear2=torch.nn.Linear(hiddenLayer1Size, hiddenLayer2Size)
torch.nn.init.xavier_uniform(linear2.weight)
linear3=torch.nn.Linear(hiddenLayer2Size,hiddenLayer3Size)
torch.nn.init.xavier_uniform(linear3.weight)
linear4=torch.nn.Linear(hiddenLayer3Size, 1)
torch.nn.init.xavier_uniform(linear4.weight)
net = torch.nn.Sequential(linear1,dropout,nn.BatchNorm1d(hiddenLayer1Size),relu,
linear2,dropout,relu,
linear3,dropout,relu,
linear4,dropout,sigmoid
)
lgr.info(net)
# optimizer = torch.optim.SGD(net.parameters(), lr=0.02)
# optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# optimizer = optim.SGD(net.parameters(), lr=LR, momentum=MOMENTUM, weight_decay=5e-3)
#L2 regularization can easily be added to the entire model via the optimizer
optimizer = torch.optim.Adam(net.parameters(), lr=LR,weight_decay=5e-4) # L2 regularization
loss_func=torch.nn.BCELoss() # Binary cross entropy: http://pytorch.org/docs/nn.html#bceloss
# http://andersonjo.github.io/artificial-intelligence/2017/01/07/Cost-Functions/
if use_cuda:
lgr.info ("Using the GPU")
net.cuda()
loss_func.cuda()
# cudnn.benchmark = True
lgr.info (optimizer)
lgr.info (loss_func)
"""
Explanation: The NN model
MLP model
A multilayer perceptron is a logistic regressor where, instead of feeding the input directly to the logistic regression, you insert an intermediate layer, called the hidden layer, that has a nonlinear activation function (usually tanh or sigmoid). One can use many such hidden layers, making the architecture deep.
Here we define a simple MLP structure. We map the input feature vector to a higher-dimensional space, then gradually decrease the dimension, ending in a 1-dimensional output. Because we are predicting the probability of a single binary target, after the final layer we need to use a sigmoid layer.
Initial weights selection
There are many ways to select the initial weights for a neural network architecture. A common initialization scheme is random initialization, which sets the biases and weights of all the nodes in each hidden layer randomly.
Before starting the training process, an initial value is assigned to each variable. This is done by pure randomness, using for example a uniform or Gaussian distribution. But if we start with weights that are too small, the signal could decrease so much that it is too small to be useful. On the other side, when the parameters are initialized with high values, the signal can end up exploding while propagating through the network.
In consequence, a good initialization can have a radical effect on how fast the network will learn useful patterns. For this purpose, some best practices have been developed. One famous example is Xavier initialization. Its formulation is based on the number of input and output neurons and uses sampling from a uniform distribution with zero mean, with all biases set to zero.
In effect (according to theory), this initializes the weights of the network to values that are closer to the optimal, and therefore requires fewer epochs to train.
References:
nninit.xavier_uniform(tensor, gain=1) - Fills tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. and Bengio, Y., using a uniform distribution.
nninit.xavier_normal(tensor, gain=1) - Fills tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. and Bengio, Y., using a normal distribution.
nninit.kaiming_uniform(tensor, gain=1) - Fills tensor with values according to the method described in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. using a uniform distribution.
nninit.kaiming_normal(tensor, gain=1) - Fills tensor with values according to the method described in ["Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al.]
End of explanation
"""
import time
start_time = time.time()
epochs=100 # change to 400 for better results
all_losses = []
X_tensor_train= XnumpyToTensor(trainX)
Y_tensor_train= YnumpyToTensor(trainY)
print(type(X_tensor_train.data), type(Y_tensor_train.data)) # should be 'torch.cuda.FloatTensor'
# From here onwards, we must only use PyTorch Tensors
for step in range(epochs):
out = net(X_tensor_train) # input x and predict based on x
cost = loss_func(out, Y_tensor_train) # must be (1. nn output, 2. target), the target label is NOT one-hotted
optimizer.zero_grad() # clear gradients for next train
cost.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
if step % 20 == 0:
loss = cost.data[0]
all_losses.append(loss)
print(step, cost.data.cpu().numpy())
# RuntimeError: can't convert CUDA tensor to numpy (it doesn't support GPU arrays).
# Use .cpu() to move the tensor to host memory first.
prediction = (net(X_tensor_train).data).float() # probabilities
# prediction = (net(X_tensor).data > 0.5).float() # zero or one
# print ("Pred:" + str (prediction)) # Pred:Variable containing: 0 or 1
# pred_y = prediction.data.numpy().squeeze()
pred_y = prediction.cpu().numpy().squeeze()
target_y = Y_tensor_train.cpu().data.numpy()
tu = (log_loss(target_y, pred_y),roc_auc_score(target_y,pred_y ))
print ('LOG_LOSS={}, ROC_AUC={} '.format(*tu))
end_time = time.time()
print ('{} {:6.3f} seconds'.format('GPU:', end_time-start_time))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(all_losses)
plt.show()
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_y,pred_y)
roc_auc = auc(false_positive_rate, true_positive_rate)
plt.title('LOG_LOSS=' + str(log_loss(target_y, pred_y)))
plt.plot(false_positive_rate, true_positive_rate, 'b', label='AUC = %0.6f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([-0.1, 1.2])
plt.ylim([-0.1, 1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
"""
Explanation: Training in batches + Measuring the performance of the deep learning model
End of explanation
"""
net.eval()
# Validation data
print (valX.shape)
print (valY.shape)
X_tensor_val= XnumpyToTensor(valX)
Y_tensor_val= YnumpyToTensor(valY)
print(type(X_tensor_val.data), type(Y_tensor_val.data)) # should be 'torch.cuda.FloatTensor'
predicted_val = (net(X_tensor_val).data).float() # probabilities
# predicted_val = (net(X_tensor_val).data > 0.5).float() # zero or one
pred_y = predicted_val.cpu().numpy()
target_y = Y_tensor_val.cpu().data.numpy()
print (type(pred_y))
print (type(target_y))
tu = (log_loss(target_y, pred_y),roc_auc_score(target_y,pred_y ))
print ('\n')
print ('log_loss={} roc_auc={} '.format(*tu))
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_y,pred_y)
roc_auc = auc(false_positive_rate, true_positive_rate)
plt.title('LOG_LOSS=' + str(log_loss(target_y, pred_y)))
plt.plot(false_positive_rate, true_positive_rate, 'b', label='AUC = %0.6f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([-0.1, 1.2])
plt.ylim([-0.1, 1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
# print (pred_y)
"""
Explanation: Performance of the deep learning model on the Validation set
End of explanation
"""
# testX, df_test_set
# df[df.columns.difference(['b'])]
# trainX, trainY, valX, valY, testX, df_test_set = loadDataSplit()
print (df_test_set.shape)
columns = ['id', 'probability']
df_pred=pd.DataFrame(data=np.zeros((0,len(columns))), columns=columns)
# df_pred.id.astype(int)
for index, row in df_test_set.iterrows():
rwo_no_id=row.drop('id')
# print (rwo_no_id.values)
x_data_np = np.array(rwo_no_id.values, dtype=np.float32)
if use_cuda:
X_tensor_test = Variable(torch.from_numpy(x_data_np).cuda()) # Note the conversion for pytorch
else:
X_tensor_test = Variable(torch.from_numpy(x_data_np)) # Note the conversion for pytorch
X_tensor_test=X_tensor_test.view(1, trainX.shape[1]) # does not work with 1d tensors
predicted_val = (net(X_tensor_test).data).float() # probabilities
p_test = predicted_val.cpu().numpy().item() # otherwise we get an array, we need a single float
df_pred = df_pred.append({'id':row['id'], 'probability':p_test},ignore_index=True)
# df_pred = df_pred.append({'id':row['id'].astype(int), 'probability':p_test},ignore_index=True)
df_pred.head(5)
"""
Explanation: Submission on Test set
End of explanation
"""
# df_pred.id=df_pred.id.astype(int)
def savePred(df_pred, loss):
# csv_path = 'pred/p_{}_{}_{}.csv'.format(loss, name, (str(time.time())))
csv_path = 'pred/pred_{}_{}.csv'.format(loss, (str(time.time())))
df_pred.to_csv(csv_path, columns=('id', 'probability'), index=None)
print (csv_path)
savePred (df_pred, log_loss(target_y, pred_y))
"""
Explanation: Create a CSV with the IDs and the corresponding probabilities.
End of explanation
"""
|
MTG/essentia | src/examples/python/tutorial_pitch_melody.ipynb | agpl-3.0 | # For embedding audio player
import IPython
# Plots
import matplotlib.pyplot as plt
from pylab import plot, show, figure, imshow
plt.rcParams['figure.figsize'] = (15, 6)
import numpy
import essentia.standard as es
audiofile = '../../../test/audio/recorded/flamenco.mp3'
# Load audio file.
# It is recommended to apply equal-loudness filter for PredominantPitchMelodia.
loader = es.EqloudLoader(filename=audiofile, sampleRate=44100)
audio = loader()
print("Duration of the audio sample [sec]:")
print(len(audio)/44100.0)
# Extract the pitch curve
# PitchMelodia takes the entire audio signal as input (no frame-wise processing is required).
pitch_extractor = es.PredominantPitchMelodia(frameSize=2048, hopSize=128)
pitch_values, pitch_confidence = pitch_extractor(audio)
# Pitch is estimated on frames. Compute frame time positions.
pitch_times = numpy.linspace(0.0,len(audio)/44100.0,len(pitch_values) )
# Plot the estimated pitch contour and confidence over time.
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(pitch_times, pitch_values)
axarr[0].set_title('estimated pitch [Hz]')
axarr[1].plot(pitch_times, pitch_confidence)
axarr[1].set_title('pitch confidence')
plt.show()
"""
Explanation: Melody detection
In this tutorial we will analyse the pitch contour of the predominant melody in an audio recording using the PredominantPitchMelodia algorithm. This algorithm outputs a time series (sequence of values) with the instantaneous pitch value (in Hertz) of the perceived melody. It can be used with both monophonic and polyphonic signals.
End of explanation
"""
IPython.display.Audio(audiofile)
from mir_eval.sonify import pitch_contour
from tempfile import TemporaryDirectory
temp_dir = TemporaryDirectory()
# Essentia operates with float32 ndarrays instead of float64, so let's cast it.
synthesized_melody = pitch_contour(pitch_times, pitch_values, 44100).astype(numpy.float32)[:len(audio)]
es.AudioWriter(filename=temp_dir.name + '/flamenco_melody.mp3', format='mp3')(es.StereoMuxer()(audio, synthesized_melody))
IPython.display.Audio(temp_dir.name + '/flamenco_melody.mp3')
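# Optional sketch: PredominantPitchMelodia also exposes a guessUnvoiced parameter
# (mentioned in the notes on unvoiced segments) that forces pitch estimates on
# low-confidence frames. Other settings are kept the same as in the earlier cell.
pitch_extractor_voiced = es.PredominantPitchMelodia(frameSize=2048, hopSize=128, guessUnvoiced=True)
pitch_values_voiced, _ = pitch_extractor_voiced(audio)
print("Frames with a non-zero pitch estimate:", (pitch_values_voiced > 0).sum(), "of", len(pitch_values_voiced))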
"""
Explanation: Zero pitch values correspond to unvoiced audio segments with a very low pitch confidence according to the algorithm's estimation. You can force estimations on those segments as well by setting the guessUnvoiced parameter.
Let's listen to the estimated pitch and compare it to the original audio. To this end we will generate a sine wave signal following the estimated pitch, using the mir_eval Python package (make sure to install it with pip install mir_eval to be able to run this code).
End of explanation
"""
onsets, durations, notes = es.PitchContourSegmentation(hopSize=128)(pitch_values, audio)
print("MIDI notes:", notes) # Midi pitch number
print("MIDI note onsets:", onsets)
print("MIDI note durations:", durations)
"""
Explanation: Note segmentation and converting to MIDI
The PredominantPitchMelodia algorithm outputs pitch values in Hz, but we can also convert it to MIDI notes using the PitchContourSegmentation algorithm. Here is the default output it provides (tune the parameters for better note estimation).
End of explanation
"""
import mido
PPQ = 96 # Pulses per quarter note.
BPM = 120 # Assuming a default tempo in Ableton to build a MIDI clip.
tempo = mido.bpm2tempo(BPM) # Microseconds per beat.
# Compute onsets and offsets for all MIDI notes in ticks.
# Relative tick positions start from time 0.
offsets = onsets + durations
silence_durations = list(onsets[1:] - offsets[:-1]) + [0]
mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
for note, onset, duration, silence_duration in zip(list(notes), list(onsets), list(durations), silence_durations):
track.append(mido.Message('note_on', note=int(note), velocity=64,
time=int(mido.second2tick(duration, PPQ, tempo))))
track.append(mido.Message('note_off', note=int(note),
time=int(mido.second2tick(silence_duration, PPQ, tempo))))
midi_file = temp_dir.name + '/extracted_melody.mid'
mid.save(midi_file)
print("MIDI file location:", midi_file)
"""
Explanation: We can now export results to a MIDI file. We will use mido Python package (which you can install with pip install mido) to do generate the .mid file. You can test the result using the generated .mid file in a DAW.
End of explanation
"""
|
ga7g08/ga7g08.github.io | _notebooks/2015-05-12-GDP-by-country.ipynb | mit | # %load ../data_sets/GDP_by_Country_WorldBank/Makefile
DOWNLOAD = data.zip
OUT = ny.gdp.mktp.cd_Indicator_en_csv_v2.csv
.PHONY: download clean
download:
rm -f ${DOWNLOAD}
wget http://api.worldbank.org/v2/en/indicator/ny.gdp.mktp.cd?downloadformat=csv -O data.zip
unzip $(DOWNLOAD)
rm -f ${DOWNLOAD} Metadata*csv *xml
"""
Explanation: GDP by country
This notebook deals with downloading and processing data on the GDP by country.
Downloading the data
We will download the data from the World Bank. A Makefile
to automate this process is located in a local directory of this repo, ../data_sets/GDP_by_Country_WorldBank/Makefile. For convenience, here is the file:
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("../data_sets/GDP_by_Country_WorldBank/ny.gdp.mktp.cd_Indicator_en_csv_v2.csv",
quotechar='"', skiprows=2)
df.head()
"""
Explanation: Once this Makefile has been run, we can load the data and see what we have
Initial look
End of explanation
"""
colnames_to_drop = df.columns[np.array([2, 3, -2, -1])]
for c in colnames_to_drop:
df.drop(c, 1, inplace=True)
df.head()
"""
Explanation: Cleaning
Remove columns
End of explanation
"""
years = [int(i) for i in df.columns.values[2:]]
fig, ax = plt.subplots()
top_5 = df.sort_values(['2013'], na_position='last', ascending=False).head(5)
for country in df['Country Name']:
if country in top_5['Country Name'].values:
label=country
else:
label=None
ax.plot(years, df[df['Country Name'] == country].values[0, 2:],
label=label)
ax.legend(loc='best')
ax.set_xlabel("Year")
ax.set_ylabel("US dollars")
plt.show()
"""
Explanation: First plot
End of explanation
"""
|
eds-uga/csci1360e-su17 | lectures/L5.ipynb | mit | for i in range(10):
print(i, end = " ")
"""
Explanation: Lecture 5: Advanced Data Structures
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
We've covered lists, tuples, sets, and dictionaries. These are the foundational data structures in Python. In this lecture, we'll go over some more advanced topics that are related to these data structures. By the end of this lecture, you should be able to
Define and use different iterators in loops, such as range(), zip(), and enumerate()
Use variable unpacking to quickly and elegantly pull data out of lists
Compare and contrast generators and comprehensions, and how to construct them
Explain the benefits of generators, especially in the case of huge datasets
Part 1: Iterators
The unifying theme with all these collections we've been discussing (lists, tuples, sets) in the context of looping, is that they're all examples of iterators.
Apart from directly iterating over these collections as in the last lecture, the most common iterator you'll use is the range function.
range()
Here's an example:
End of explanation
"""
for i in range(5, 10):
print(i, end = " ")
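# range() also accepts an optional third argument, a step size.
# Here we count from 0 up to (but not including) 20 in steps of 5.
for i in range(0, 20, 5):
    print(i, end=" ")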
"""
Explanation: Note that the range of numbers goes from 0 (inclusive) to the specified end (exclusive)! The critical point is that the argument to range specifies the length of the returned iterator.
In short, range() generates a list of numbers for you to loop over.
If you only supply one argument, range() will generate a list of numbers starting at 0 (inclusive) and going up to the number you provided (exclusive).
You can also supply two arguments: a starting number (again, inclusive) and an ending number.
End of explanation
"""
squares = []
for element in range(10):
squares.append(element ** 2)
print(squares)
"""
Explanation: Part 2: List Comprehensions
Here's some good news: if we get right down to it, having done loops and lists already, there's nothing new here.
Here's the bad news: it's a different, and possibly less-easy-to-understand, but much more concise way of creating lists. We'll go over it bit by bit.
Let's look at an example: creating a list of squares.
End of explanation
"""
squares = [element ** 2 for element in range(10)]
print(squares)
"""
Explanation: Let's break it down.
for element in range(10):
It's a standard "for" loop header.
The thing we're iterating over is at the end: range(10), or an iterator that contains numbers [0, 10) by 1s.
In each loop, the current element from the range(10) iterator is stored in element.
squares.append(element ** 2)
Inside the loop, we append a new item to our list squares
The item is computed by taking the current element and computing its square
In a list comprehension, we'll see these same pieces show up again, just in a slightly different order.
End of explanation
"""
new_counts = [item + 10 for item in squares]
print(new_counts)
"""
Explanation: There it is: a list comprehension. Let's break it down.
Notice, first, that the entire expression is surrounded by the square brackets [ ] of a list. This is for the exact reason you'd think: we're building a list!
The "for" loop is completely intact, too; the entire header appears just as before (albeit at the end of the line).
The biggest wrinkle is the loop body. It appears right after the opening bracket, before the loop header. The rationale for this is that it's easy to see from the start of the line that
We're building a list (revealed by the opening square bracket), and
The list is built by successively squaring a variable element
Here's another example: adding 10 every element in the squares list from before.
End of explanation
"""
x = range(10)
"""
Explanation: Lists are iterators by default, so the header (for item in squares) goes through the list squares one element at a time, storing each one in the variable item
The loop body takes the current element in the list (item) and adds 10 to it
Hopefully nothing out of the ordinary; just a strange way of organizing the code.
Part 3: Generators
Generators are cool twists on lists (see what I did there). They've been around since Python 2 but took on a whole new life in Python 3.
That said, if you ever get confused about generators, just think of them as lists. This can potentially get you in trouble with weird errors, but 90% of the time it'll work every time.
Let's start with an example using range():
End of explanation
"""
x = [i for i in range(10)] # Brackets -> list
print(x)
x = (i for i in range(10)) # Parentheses -> generator
print(x)
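# A quick illustration of the "generators only work once" point:
# the generator is exhausted after the first pass, while the list can be reused.
gen = (i ** 2 for i in range(5))
print(sum(gen))   # 30: this consumes the generator
print(sum(gen))   # 0: nothing left to iterate over
lst = [i ** 2 for i in range(5)]
print(sum(lst))   # 30
print(sum(lst))   # 30: a list can be traversed as many times as you like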
"""
Explanation: As we know, this will create an iterator with the numbers 0 through 9, inclusive, and assign it to the variable x.
But it's not just a plain iterator; it's a special type of iterator called a generator that warrants mention.
So range() gives us a generator! Great! ...what does that mean, exactly?
For most practical purposes, generators and lists are indistinguishable. However, there are some key differences to be aware of:
Generators are "lazy". This means when you call range(10), not all 10 numbers are immediately computed; in fact, none of them are. They're computed on-the-fly in the loop itself! This really comes in handy if, say, you wanted to loop through 1 trillion numbers, or call range(1000000000000). With vanilla lists, this would immediately create 1 trillion numbers in memory and store them, taking up a whole lot of space. With generators, only 1 number is ever computed at a given loop iteration. Huge memory savings!
This "laziness" means you cannot directly index a generator, as you would a list, since the numbers are generated on-the-fly during the loop.
The other point of interest with generators:
Generators only work once. This is where you can get into trouble. Let's say you're trying to identify the two largest numbers in a generator of numbers. You'd loop through once and identify the largest number, then use that as a point of comparison to loop through again to find the second-largest number (you could do it with just one loop, but for the sake of discussion let's assume you did it this way). With a list, this would work just fine. Not with a generator, though. You'd need to explicitly recreate the generator.
How do we build generators? Aside from range(), that is.
Remember list comprehensions? Just replace the brackets of a list comprehension [ ] with parentheses ( ).
End of explanation
"""
t = ("shannon", "quinn", "python")
"""
Explanation: In sum, use lists if:
you're working with a relatively small amount of elements
you want to add to / edit / remove from the elements
you need direct access to arbitrary elements, e.g. some_list[431]
On the other hand, use generators if:
you're working with a giant collection of elements
you'll only loop through the elements once or twice
when looping through elements, you're fine going in sequential order
Part 4: Other looping mechanisms
There are a few other advanced looping mechanisms in Python that are a little complex, but can make your life a lot easier when used correctly (especially if you're a convert from something like C++ or Java).
Variable unpacking
This isn't a looping mechanism per se, but it is incredibly useful and is often used in the context of looping.
Imagine you have a tuple of a handful of items; for the sake of example, we'll say this tuple stores three elements: first name, last name, and favorite programming language.
The tuple might look like this:
End of explanation
"""
first_name = t[0]
last_name = t[1]
lang = t[2]
print(lang)
"""
Explanation: Now I want to pull the elements out of the tuple and work with them independently, one at a time. You already know how to do this:
End of explanation
"""
fname, lname, language = t
print(fname)
"""
Explanation: ...but you have to admit, using three lines of code, one per variable, to extract all the elements from the tuple into their own variables, is kind of clunky.
Luckily, there's a method called "variable unpacking" that allows us to compress those three lines down to one:
End of explanation
"""
first_names = ['Shannon', 'Jen', 'Natasha', 'Benjamin']
last_names = ['Quinn', 'Benoit', 'Romanov', 'Button']
fave_langs = ['Python', 'Java', 'Assembly', 'Go']
"""
Explanation: This does exactly the same thing as before. By presenting three variables on the left hand side, we're telling Python to pull out elements of the tuple at positions 0, 1, and 2.
(variable unpacking is always assumed to start at position 0 of the structure on the right hand side)
We'll see more examples of this in practice using the looping tools later in the lecture.
zip()
zip() is a small method that packs a big punch. It "zips" multiple lists together into something of one big mega-list for the sole purpose of being able to iterate through them all simultaneously.
Here's an example: first names, last names, and favorite programming languages.
End of explanation
"""
for fname, lname, lang in zip(first_names, last_names, fave_langs):
print(fname, lname, lang)
"""
Explanation: I want to loop through these three lists simultaneously, so I can print out the person's first name, last name, and their favorite language on the same line. Since I know they're the same length, I can zip them together and, combined with a neat use of variable unpacking, do all of this in two lines:
End of explanation
"""
for fname, lname, lang in zip(first_names, last_names, fave_langs):
print(fname, lname, lang)
"""
Explanation: There's a lot happening here, so take it in chunks:
zip(first_names, last_names, fave_langs): This zips together the three lists, so that the elements at position 0 all line up, then the elements at position 1, then position 2, and so on.
Each iteration of the loop handles one of those zipped positions.
Since we know one of those zipped positions contains one element from each of the three lists (and therefore three total elements), we can use variable unpacking to extract each one of the individual elements into individual variables.
enumerate()
Of course, there are always those situations where it's really, really nice to have an index variable in the loop. Let's take a look at that previous example:
End of explanation
"""
x = ['a', 'list', 'of', 'strings']
for index, element in enumerate(x):
print(element, index)
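# enumerate() also takes an optional start argument, handy when you want the
# numbering to begin somewhere other than 0 (e.g. human-friendly line numbers).
for line_number, element in enumerate(x, start=1):
    print(line_number, element)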
"""
Explanation: This is great if all I want to do is loop through the lists simultaneously. But what if the ordering of the elements matters? For example, I want to prefix each sentence with the line number. How can I track what index I'm on in a loop if I don't use range()?
enumerate() handles this. By wrapping the object we loop over inside enumerate(), on each loop iteration we not only get the next object of interest, but also the index of that object. To wit:
End of explanation
"""
while True:
    # "True" can't ever be "False", so this is quite literally an infinite loop!
    pass  # don't actually run this cell; it will never finish on its own
"""
Explanation: This comes in handy anytime you need to loop through a list or generator, but also need to know what index you're on.
And note again: we're using variable unpacking in the loop header. enumerate essentially performs an "invisible" zip() on the iterator you supply, and zips it up with numbers, one per element of the iterator.
break and continue
These are two commands that give you much greater control over loop behavior, beyond just what you specify in the header.
With for loops, you specify how many times to run the loop.
With while loops, you iterate until some condition is met.
For the vast majority of cases, this works well. But sometimes you need just a little more control for extenuating circumstances.
There are some instances where, barring some external intervention, you really do want to just loop forever:
End of explanation
"""
while True:
print("In the loop!")
break
print("Out of the loop!")
"""
Explanation: How do you get out of this infinite loop? With a break statement.
End of explanation
"""
for i in range(100000): # Loop 100,000 times!
if i == 5:
break
print(i)
"""
Explanation: Just break. That will snap whatever loop you're currently in and immediately dump you out just after it.
Same thing with for loops:
End of explanation
"""
for i in range(100):
continue
print("This will never be printed, because 'continue' skips it.")
print(i)
"""
Explanation: Similar to break is continue, though you use this when you essentially want to "skip" certain iterations.
continue will also halt the current iteration, but instead of ending the loop entirely, it basically skips you on to the next iteration of the loop without executing any code that may be below it.
End of explanation
"""
|
dtsmith2001/p-data-challenge | Report.ipynb | mit | def convert_list(query_string):
"""Parse the query string of the url into a dictionary.
Handle special cases:
- There is a single query "error=True" which is rewritten to 1 if True, else 0.
- Parsing the query returns a dictionary of key-value pairs. The value is a list.
We must get the list value as an int.
Note: This function may be a bottleneck when processing larger data files.
"""
def handle_error(z, col):
"""
Called in the dictionary comprehension below to handle the "error" key.
"""
if "error" in col:
return 1 if "True" in z else 0
return z
dd = parse_qs(query_string)
return {k: int(handle_error(dd[k][0], k)) for k in dd}
"""
Explanation: Data Challenge
Files are stored in an S3 bucket. The purpose here is to fully analyze the data and make some predictions.
This workbook was exported to a Python script and the resulting code was checked for PEP8 problems. The problems found were with formatting and order of imports. We can't fix the latter due to the way the script is exported from the notebook.
Any known or potential performance problems are flagged for further work.
Some of the information here is based on posts from Stack Overflow. I haven't kept track of the specific posts, so I'll just indicate them by flagging with SO.
Prepare an AWS Instance
We're using a tiny instance here since the data and processing requirements are minimal.
Install Anaconda for Python 3.6.
Install the boto3 and seaborn packages.
Install and configure the AWS command-line tools.
Use jupyter notebook --generate-config to generate a config file in ~/.jupyter. An example is enclosed in the GitHub repository.
Run conda install -c conda-forge jupyter_contrib_nbextensions.
Configure Jupyter on the remote side to run without a browser and to require a password. Make sure you put the config file in the proper place.
On the client side, run ssh -N -L 8888:localhost:8888 ubuntu@ec2-34-230-78-129.compute-1.amazonaws.com (the hostname may vary). This sets up the ssh tunnel and maps to port 8888 locally.
ssh into the instance using ssh ubuntu@ec2-34-230-78-129.compute-1.amazonaws.com.
Start jupyter notebook.
Note: You can start the tmux terminal multiplexer in order to use notebooks when you are logged out.
Note: This doesn't start X windows tunnelling, which is a separate configuration and is not presented here.
Get the Data from S3
Instead of processing the data with the boto3.s3 class, we chose to
bash
aws s3 cp --recursive s3://my_bucket_name local_folder
This is not scalable, so we can use code such as
```python
s3 = boto3.resource('s3')
bucket_name = "postie-testing-assets"
test = s3.Bucket(bucket_name)
s3.meta.client.head_bucket(Bucket=bucket_name)
{'ResponseMetadata': {'HTTPHeaders': {'content-type': 'application/xml',
'date': 'Mon, 16 Oct 2017 18:06:53 GMT',
'server': 'AmazonS3',
'transfer-encoding': 'chunked',
'x-amz-bucket-region': 'us-east-1',
'x-amz-id-2': 'YhUEo61GDGSwz1qOpFGJl+C9Sxal34XKRYzOI0TF49PsSSGsbGg2Y6xwbf07z+KHIKusPIYkjxE=',
'x-amz-request-id': 'DDD0C4B61BDF320E'},
'HTTPStatusCode': 200,
'HostId': 'YhUEo61GDGSwz1qOpFGJl+C9Sxal34XKRYzOI0TF49PsSSGsbGg2Y6xwbf07z+KHIKusPIYkjxE=',
'RequestId': 'DDD0C4B61BDF320E',
'RetryAttempts': 0}}
for key in test.objects.all():
print(key.key)
2017-07-01.csv
2017-07-02.csv
2017-07-03.csv
```
We did not bother to use boto3.
Import and Process the Data
The most difficult part was processing the url query string. We are using convert_list as a helper function during import. It allows us to parse the url query string out properly into a Pandas Series whose elements are dictionaries. We're rewriting the dictionary before returning it to transform everything to int. Note that we are handling the error key in the url. There is only one of these, but it looks like a completed transaction. We don't know why it's there, so we will keep it until we learn more.
Also, once we have a pd.Series object, we can apply the Series constructor to map the key-value pairs from the parsed query string into a separate DataFrame. This frame has the same number of rows, and is in the same order, as the original data. So we can just use the join method to put together the two DataFrames.
We then make a large DataFrame to keep all the data, and keep a list of DataFrames around, one for each file imported (just in case).
Prediction
To build a predictive model, we need more data. Ideally, we should enrich with customer and item information. An interesting idea is to use the item images from the website to generate features.
Data Issues
Column names have spaces, so we need to remove them. A more sophisticated method would do this on import. However, processing the first row in this way may slow down the import process, particularly if the files are much larger. There are ways to read chunks via read_csv, which can be used in a class to get the first line of the file, process it as a header, then continue reading the rest of the file in chunks. This is probably the best way to read many large files (a minimal sketch of this chunked approach follows the data-loading cell below).
Placeholder is blank (NaN) for two files. But is this needed?
The file labeled "2017-07-01" has transactions for 7/1/2017 and 7/2/2017.
The file labeled "2017-07-02" has transactions for 7/2/2017 and 7/3/2017.
The file labeled "2017-07-03" has transactions only for 7/3/2017.
There are two website id's, but one website id has two separate domain names: store.example.com, and www.example.com. This affects counts and also reporting if using the domain name. Handling the domain names is very dependent on this dataset - no effort was made to write a more general solution.
Some of the checkout values are negative. Do the websites allow online returns? What does a negative checkout amount mean? We will assume that negative values are recorded with the wrong sign. So we take the absolute value of the checkout_amount.
End of explanation
"""
# Imports needed by this and the following cells (also used when convert_list above is called).
import os
from urllib.parse import urlparse, parse_qs
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import linregress

dfr = pd.DataFrame()
col_names = ["timestamp", "website_id", "customer_id", "app_version", "placeholder", "checkout_amount", "url"]
data_report = []
individual_days = []
item_lists = []
for fname in os.listdir("data"):
ffr = pd.read_csv(os.path.join("data", fname),
header=0, names=col_names,
infer_datetime_format=True, parse_dates=[0])
file_date = fname.split(".")[0]
ffr["file_date"] = file_date
transaction_date = ffr.timestamp.apply(lambda x: x.strftime('%Y-%m-%d')) # reformat transaction timestamp
ffr["transaction_date"] = transaction_date
url_items = ffr.url.apply(lambda x: urlparse(x))
domain_name = url_items.apply(lambda x: x[1])
# handle store.example.com and www.example.com as the same website
ffr["domain_name"] = domain_name.apply(lambda x: x if not "example.com" in x else ".".join(x.split(".")[1:]))
item_query = url_items.apply(lambda x: x[4])
qq = item_query.apply(lambda x: convert_list(x)).apply(pd.Series).fillna(value=0)
item_lists += qq.columns.tolist()
final_fr = ffr.join(qq)
print("date {} has {} sales for rows {} and unique dates {}".format(fname, ffr.checkout_amount.sum(),
ffr.shape[0],
transaction_date.unique().shape[0]))
data_report.append({"file_date": file_date, "sales": ffr.checkout_amount.sum(),
"n_placeholder_nan": sum(ffr.placeholder.isnull()),
"n_rows": ffr.shape[0],
"n_websites": ffr.website_id.unique().shape[0],
"n_customers": ffr.customer_id.unique().shape[0],
"n_app_versions": ffr.app_version.unique().shape[0],
"n_dates": transaction_date.unique().shape[0]})
dfr = dfr.append(final_fr)
individual_days.append(final_fr)
### Note: This is an assumption
dfr["checkout_amount"] = dfr["checkout_amount"].abs()
dfr.reset_index(drop=True, inplace=True)
item_lists = list(set([item for item in item_lists if not "error" in item]))
dfr.shape
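# Minimal sketch of the chunked-reading idea mentioned in the "Data Issues" list above.
# It is illustrative only: the files in this challenge are small, and the chunksize and
# per-chunk processing step are placeholders rather than part of the original pipeline.
import pandas as pd

def read_in_chunks(path, chunksize=10000):
    pieces = []
    for chunk in pd.read_csv(path, chunksize=chunksize):
        # any per-chunk processing (e.g. cleaning column names) would go here
        pieces.append(chunk)
    return pd.concat(pieces, ignore_index=True)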
"""
Explanation: Explanation of Import Code
We are using dfr instead of fr. The latter is the name of a function in R.
We want to use the Pandas csv import and parse the timestamp into a Timestamp field.
The code assumes there are only csv files to process in the data directory. This can be fixed but it makes the code more complicated and will not be addressed here.
See above for a few issues discovered with the data.
We parse the url field to obtain the item counts purchased. This allows us to infer prices.
urlparse returns a structure. The first element is the hostname, and the fourth is the query string (if available). All our url strings have a query string, so we don't need any special processing here.
Apply the function convert_list to the query Series. The result is a Series. Why is this important?
python
qq = item_query.apply(lambda x: convert_list(x)).apply(pd.Series).fillna(value=0)
We need to apply the Series constructor to each row of the results of convert_list. The constructor parses the key-value pairs into columns and creates a DataFrame. We then fill the NaN values introduced. Since the resulting DataFrame has the same rows, in the same order, as the source frame (ffr, see below), we can just use the join method.
We keep a list of DataFrames for the separate files.
End of explanation
"""
pd.pivot_table(dfr, values="website_id", index="transaction_date", columns="domain_name", aggfunc=[np.max])
"""
Explanation: Just to make sure there are no inconsistencies in the data, let's check the website_id against the domain name we've extracted.
End of explanation
"""
dfr.drop(["error"], axis=1, inplace=True)
"""
Explanation: Finally, let's drop the error column. It affects a single row, and we don't have enough information to determine if this is a legitimate error or not.
End of explanation
"""
pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=True)
"""
Explanation: Potential Performance Issues
Import code processing by file: handle url and domain name processing outside the loop.
Wrap import code into a function and read in the data using a list comprehension.
Keeping a list of DataFrames for each file imported is not necessary and should be eliminated in pipeline code.
Extract the individual item price by domain name upon import.
Summaries of the Data
First, let's check out the total sales per transaction_date. This is the date from the timestamp field. A separate field file_date will also be used to reveal problems with the data.
Note: These pivot tables can obviously be reformatted but we will not bother here. That's something to do when presenting externally.
End of explanation
"""
pd.pivot_table(dfr, values="checkout_amount", index="file_date", columns="domain_name",
aggfunc=[np.sum], margins=True)
"""
Explanation: Before we comment further, here are total sales per file_date. This is the date we got from the file name, assuming that each file contains a single day of data.
End of explanation
"""
sns.boxplot(x="transaction_date", y="checkout_amount", data=dfr);
"""
Explanation: So now we can see what's happening. Since the files do not separate dates properly, the average sales are off. We need to use the actual transaction date to calculate sales. Additionally, some of the checkout values are negative. This distorts the sales.
The analyst's question is thus answered - there is a problem with just importing one file to calculate the average sales. Also, average sales, while meaningful, is not a good metric. It is sensitive to outliers, so a very large or small transaction has a large effect on the average. A better measure would be the median or quartiles. But let's look at a boxplot of the data first.
End of explanation
"""
cols = [item for item in item_lists if not "error" in item]
pricing_temp = dfr[dfr[cols].astype(bool).sum(axis=1) == 1].copy() # to avoid setting values on a view
pricing_temp.drop_duplicates(subset=cols + ["domain_name", "transaction_date"], inplace=True)
price_cols = []
for col in cols:
price_cols.append(np.abs(pricing_temp["checkout_amount"]/pricing_temp[col]))
pricing = pd.concat(price_cols, axis=1)
pricing.columns = cols
price_cols = [col + "_price" for col in cols]
px = pricing_temp.join(pricing, rsuffix="_price")[price_cols + ["transaction_date", "domain_name"]]
px = px.replace([np.inf, -np.inf], np.nan).fillna(value=0)
pd.pivot_table(px, values=price_cols, index="transaction_date", columns="domain_name", aggfunc=np.max).transpose()
"""
Explanation: This is a boxplot, which is a good way to display outliers, the median, quartiles, and range. We can see immediately that we have a problem with the wide range of checkout amounts. This is why the interquartile range is so compressed. There are two options.
The large values are not legitimate transactions. In this case, they are true outliers and should be ignored.
The large values are definitely legitimate transactions. This will complicate any predictive model.
We need more information about sales. We need to know if the negative values are recorded wrong.
Extracting Prices from the URL
Let's now use the pricing information extracted from the url. We can query items in the DataFrame to find this information. At this point, we assume that prices are not changed on items during the day. This seems to be the case with this data, but we can't make that assumption about future data.
Note: We are doing the item prices separately since the method used to produce the DataFrame above isn't adaptable. A different method of processing the data which avoids using the convienence of the pd.Series constructor can be used, at the expense of more difficult processing of the parsed url. Also, it's possible that an item only appears for a given domain name on a particular day, meaning we must process all the data first.
First, to get the rows where only one item was purchased, we can use (SO)
python
dfr[dfr[cols].astype(bool).sum(axis=1) == 1]
We then calculate the pricing and reformulate the data into a table we can view. This code is annoying to write, so it's preferable to redo the above loop to get the pricing. However, that may slow down processing.
End of explanation
"""
frame = dfr[item_lists + ["transaction_date", "domain_name", "checkout_amount"]]
gr = frame.groupby(["transaction_date", "domain_name"])
"""
Explanation: Based on this table, it seems true that prices do not change from day to day, or during the day. However, what if a price was changed during the day, and then changed back? Our analysis will not pick this up.
Notes for the Analyst
Make sure you understand what is in the files before you run an analysis.
Research with the customer the negative checkout amounts.
There is a single transaction with an error in the url query, but there is a seemingly valid item. Ask about this.
What does placeholder mean? Why is it blank in two files?
One of the websites has checkouts from two different subdomains. Make sure you understand this. Is one set of checkout amounts mobile versus desktop?
The code I have produced is inefficient in many aspects. If you can modify the code to make it more efficient, do so. Otherwise, get back to me.
Note that I can confirm your sales figure on the 3rd if I don't take the absolute value. The above analysis is with the absolute value of the checkout amount.
Are the checkout amounts consistent with the prices we calculated? Check this.
Check the data issues above to find anything I missed.
Analysis of Purchases
This is not hard, but the format of the results is a bit difficult to see. Let's reduce our dataset down a bit to examine counts of purchases.
End of explanation
"""
gb_frames = []
for name, group in gr:
gb_frames.append({"date": name[0], "domain_name": name[1], "frame": group})
gb_frames[0]["frame"].nlargest(5, "Bignay")
"""
Explanation: Now we can look at the top five purchases. Let's concentrate first on Bignay.
At the risk of overcomplicating the code, let's make a data structure which may not be optimal. We shall see.
End of explanation
"""
gb_frames[1]["frame"].nlargest(5, "Bignay")
"""
Explanation: We can see that a lot of items are bought from example.com in bulk, and together. This means shipping costs for these orders are higher.
End of explanation
"""
corr = gb_frames[0]["frame"][item_lists].corr()
ax = sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values,
linewidths=.5, cmap="YlGnBu")
plt.title("{} Item Correlation for {}".format(gb_frames[0]["domain_name"], gb_frames[0]["date"]));
"""
Explanation: And for xyz.com, we see a different pattern. the checkout amounts are lower, and the number of items bought together is much lower. So shipping costs are lower.
We can get the data for 7/2/2017 using gb_frames[2], etc, and for 7/3/2017 using gb_frames[4], etc. They will not be displayed here.
It's a bit difficult to generalize this to other columns without further analysis, and it's tedious to do nlargest for each column. Let's look at the correlation between order amounts for each website. First, an example. Then we will look at each date individually.
Note: The checkout amount is collinear with the item counts.
End of explanation
"""
corr = gb_frames[1]["frame"][item_lists].corr()
ax = sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values,
linewidths=.5) #, cmap="YlGnBu")
plt.title("{} Item Correlation for {}".format(gb_frames[1]["domain_name"], gb_frames[1]["date"]));
"""
Explanation: This type of plot gives us information about which items are bought together. Let's see one for xyz.com.
End of explanation
"""
pt = pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=False)
pt.columns = ["example.com", "xyz.com"] # get rid of multiindex
pt
"""
Explanation: It's very interesting to see that the item correlations are much lower. Originally, we used the same colormap as for the first heatmap, but the lower correlations didn't look as well (the colors wash out). It's difficult to distinguish the colors here. More research is needed to produce meaningful colors in these plots.
At this point, we could write a few loops and use subplots to display more information.
Sales Prediction
Let's take a look at the daily sales again.
End of explanation
"""
for col in pt.columns.tolist():
    print("Tomorrow's (7/4/2017) sales for {} is {}".format(col, pt[col].mean()))
"""
Explanation: Here are a few modeling considerations.
We only have three days to work with. That's not enough.
There are questions to clear up about the data.
How does the app version and placeholder affect the data?
However, we can make some predictions at this point. First, we can make the following prediction by averaging.
End of explanation
"""
pt = pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=False)
pt.columns = ['example.com', 'xyz.com']
pt.index = pd.DatetimeIndex(pt.index)
idx = pd.date_range(pt.index.min(), pt.index.max())
pt = pt.reindex(index=idx)
pt.insert(pt.shape[1],
'row_count',
pt.index.value_counts().sort_index().cumsum())
slope, intercept, r_value, p_value, std_err = linregress(x=pt["row_count"].values,
y=pt["example.com"].values)
pt["example.com regression line"] = intercept + slope * pt["row_count"]
ax = pt[["example.com", "example.com regression line"]].plot()
"""
Explanation: We can also draw a regression plot and make predictions using the regression line.
First, there are a few details with the index. Placing the labels is also annoying. The alternatives are to use the Pandas plotting capabilities. In fact, there is no really good solution to plotting a time series and associated regression line without creating the regression line values in the DataFrame. This latter idea is what I usually do, however, the below charts were produced in a different way.
Note that both show a clear linear trend. However, there are problems - we just don't have enough data. regplot is a good tool for linear regression plots, but it does have its deficiencies, such as not providing the slope and intercept of the regression line. Additionally, the error region is not meaningful with this small amount of data.
End of explanation
"""
print("Predicted sales for example.com on 7/4/2017 is {} with significance {}".format(intercept + slope * 4, r_value*r_value))
pt = pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=False)
pt.columns = ["example.com", "xyz.com"]
pt.index = pd.DatetimeIndex(pt.index)
idx = pd.date_range(pt.index.min(), pt.index.max())
pt = pt.reindex(index=idx)
pt.insert(pt.shape[1],
'row_count',
pt.index.value_counts().sort_index().cumsum())
slope, intercept, r_value, p_value, std_err = linregress(x=pt["row_count"].values,
y=pt["xyz.com"].values)
pt["xyz.com regression line"] = intercept + slope * pt["row_count"]
ax = pt[["xyz.com", "xyz.com regression line"]].plot()
"""
Explanation: And the sales prediction is
End of explanation
"""
print("Predicted sales for xyz.com on 7/4/2017 is {} with significance {}".format(intercept + slope * 4, r_value*r_value))
"""
Explanation: Again, the sales prediction is
End of explanation
"""
|
cliburn/sta-663-2017 | notebook/03_Classes.ipynb | mit | class A:
"""Base class."""
def __init__(self, x):
self.x = x
def __repr__(self):
return '%s(%a)' % (self.__class__.__name__, self.x)
def report(self):
"""Report type of contained value."""
return 'My value is of type %s' % type(self.x)
"""
Explanation: Classes
As you probably know, Python is an object-oriented language, and so it has very strong support for objects. In fact, everything in Python is an object. We will mostly use an imperative or functional rather than object-oriented programming style in this course.
Here is the bare minimum about Python objects.
Defining a new class
We define a class A with two 'special' double underscore methods and one normal method. This class will have an attribute x that is specified at the time of creating new instances of the class.
The __init__ method initializes properties of any new instance of A
The __repr__ method provides an accurate string representation of A. For example, if we print an instance of A, the __repr__ method will be used. If you don't specify a __repr__ (or __str__) special method, the default representation when printing only gives the class name and memory address.
There are many more special methods, as described in the official documentation. We will not go there.
End of explanation
"""
A.__doc__
help(A)
A.report.__doc__
"""
Explanation: Docstrings
End of explanation
"""
class X:
"""Empty class."""
x = X()
print(x)
"""
Explanation: Creating an instance of a class
Example of a class without repr.
End of explanation
"""
a0 = A('a')
print(a0)
a1 = A(x = 3.14)
print(a1)
"""
Explanation: Create new instances of the class A
End of explanation
"""
a0.x, a1.x
"""
Explanation: Attribute access
End of explanation
"""
a0.report(), a1.report()
"""
Explanation: Method access
End of explanation
"""
class B(A):
"""Derived class inherits from A."""
def report(self):
"""Overwrite report() method of A."""
return self.x
B.__doc__
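# A derived class can also extend, rather than completely replace, a parent method
# by calling super(); compare this with B, which overwrites report() outright.
class C(A):
    """Derived class that extends A's report()."""
    def report(self):
        return super().report() + ' (reported via C)'

C(42).report()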
"""
Explanation: Class inheritance
End of explanation
"""
b0 = B(3 + 4j)
b1 = B(x = a1)
"""
Explanation: Create new instances of class B
End of explanation
"""
b0.x
b1.x
"""
Explanation: Attribute access
End of explanation
"""
b1.report()
"""
Explanation: Method access
End of explanation
"""
b1.x.report()
"""
Explanation: Nested attribute access
End of explanation
"""
|
Almaz-KG/MachineLearning | ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/1-NumPy-Arrays.ipynb | apache-2.0 | import numpy as np
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
NumPy
NumPy (or Numpy) is a Linear Algebra Library for Python. The reason it is so important for Finance with Python is that almost all of the libraries in the PyData Ecosystem rely on NumPy as one of their main building blocks. Plus we will use it to generate data for our analysis examples later on!
Numpy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use Arrays instead of lists, check out this great StackOverflow post.
We will only learn the basics of NumPy, to get started we need to install it!
Installation Instructions
NumPy is already included in your environment! You are good to go if you are using pyfinance env!
For those not using the provided environment:
It is highly recommended you install Python using the Anaconda distribution to make sure all underlying dependencies (such as Linear Algebra libraries) all sync up with the use of a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:
conda install numpy
If you do not have Anaconda and can not install it, please refer to Numpy's official documentation on various installation instructions.
Using NumPy
Once you've installed NumPy you can import it as a library:
End of explanation
"""
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
"""
Explanation: Numpy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of Numpy: vectors,arrays,matrices, and number generation. Let's start by discussing arrays.
Numpy Arrays
NumPy arrays are the main way we will use Numpy throughout the course. Numpy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-d arrays and matrices are 2-d (but you should note a matrix can still have only one row or one column).
Let's begin our introduction by exploring how to create NumPy arrays.
Creating NumPy Arrays
From a Python List
We can create an array by directly converting a list or list of lists:
End of explanation
"""
np.arange(0,10)
np.arange(0,11,2)
"""
Explanation: Built-in Methods
There are lots of built-in ways to generate Arrays
arange
Return evenly spaced values within a given interval.
End of explanation
"""
np.zeros(3)
np.zeros((5,5,5))
np.ones(3)
np.ones((3,3))
"""
Explanation: zeros and ones
Generate arrays of zeros or ones
End of explanation
"""
np.linspace(0,10,10)
np.linspace(0,10,50)
"""
Explanation: linspace
Return evenly spaced numbers over a specified interval.
End of explanation
"""
np.eye(8)
"""
Explanation: eye
Creates an identity matrix
End of explanation
"""
np.random.rand(2)
np.random.rand(5,5)
"""
Explanation: Random
Numpy also has lots of ways to create random number arrays:
rand
Create an array of the given shape and populate it with
random samples from a uniform distribution
over [0, 1).
End of explanation
"""
np.random.randn(2)
np.random.randn(5,5)
"""
Explanation: randn
Return a sample (or samples) from the "standard normal" distribution. Unlike rand which is uniform:
End of explanation
"""
np.random.randint(1,100)
np.random.randint(1,100,3)
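# If you need reproducible "random" numbers (useful for debugging or sharing results),
# seed the generator first; the same seed always produces the same sequence.
np.random.seed(101)
np.random.randint(1, 100, 3)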
"""
Explanation: randint
Return random integers from low (inclusive) to high (exclusive).
End of explanation
"""
arr = np.arange(30)
ranarr = np.random.randint(0,50,10)
arr
ranarr
"""
Explanation: Array Attributes and Methods
Let's discuss some useful attributes and methods or an array:
End of explanation
"""
arr.reshape(3,10)
"""
Explanation: Reshape
Returns an array containing the same data with a new shape.
End of explanation
"""
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
"""
Explanation: max,min,argmax,argmin
These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
End of explanation
"""
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,30)
arr.reshape(1,30).shape
arr.reshape(30,1)
arr.reshape(30,1).shape
"""
Explanation: Shape
Shape is an attribute that arrays have (not a method):
End of explanation
"""
arr.dtype
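# You can also set the dtype explicitly when creating an array, e.g. to force floats.
np.array([1, 2, 3], dtype=np.float64).dtype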
"""
Explanation: dtype
You can also grab the data type of the object in the array:
End of explanation
"""
|
BenLangmead/comp-genomics-class | projects/UnpairedAsmChallenge.ipynb | gpl-2.0 | # Download the file containing the reads to "reads.fa" in current directory
! wget http://www.cs.jhu.edu/~langmea/resources/f2020_hw4_reads.fa
# Following line is so we can see the first few lines of the reads file
# from within IPython -- don't paste this into your Python code
! head f2020_hw4_reads.fa
"""
Explanation: Unpaired assembly challenge
You will implement software to assemble a genome from synthetic reads. We supply Python code snippets that you might use or adapt in your solutions, but you don't have to.
Part 1: Get and parse the reads
10% of the points for this question
Download the reads:
http://www.cs.jhu.edu/~langmea/resources/f2020_hw4_reads.fa
All the reads come from the same synthetic genome and each is 100 nt long. For simplicity, these reads don't have any quality values.
The following code will download the data to a file called f2020_hw4_reads.fa in the current directory. (If you would rather download it with Python's urllib instead of wget, see http://python-future.org/compatible_idioms.html#urllib-module for the Python 3 idioms.)
End of explanation
"""
def make_kmer_table(seqs, k):
''' Given dictionary (e.g. output of parse_fasta) and integer k,
return a dictionary that maps each k-mer to the set of names
of reads containing the k-mer. '''
table = {} # maps k-mer to set of names of reads containing k-mer
for name, seq in seqs.items():
for i in range(0, len(seq) - k + 1):
kmer = seq[i:i+k]
if kmer not in table:
table[kmer] = set()
table[kmer].add(name)
return table
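# Tiny illustration of the k-mer table on two made-up reads (not the real data):
# reads sharing a k-mer end up in the same set, so only those pairs need comparing.
toy_reads = {'r1': 'ACGTAC', 'r2': 'GTACGG'}
make_kmer_table(toy_reads, 4)  # 'GTAC' maps to {'r1', 'r2'}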
"""
Explanation: Or you can download them manually from your browser.
Part 2: Build an overlap graph
40% of the points for this question
Goal: Write a file containing each read's best buddy to the right. Let's define that.
For each read $A$, find the other read $B$ that has the longest suffix/prefix match with $A$, i.e. a suffix of $A$ matches a prefix of $B$. $B$ is $A$'s best buddy to the right. However, if there is a tie, or if the longest suffix/prefix match is less than 40 nucleotides long, then $A$ has no best buddy to the right. For each read, your program should output either (a) nothing, if there is no best buddy to the right, or (b) a single, space-separated line with the IDs of $A$ and $B$ and the length of the overlap, like this:
0255/2 2065/1 88
This indicates an 88 bp suffix of the read with ID 0255/2 is a prefix of the read with ID 2065/1. Because of how we defined best buddy, it also means no other read besides 2065/1 has a prefix of 88+ bp that is also a suffix of read 0255/2. A corollary of this is that a particular read ID should appear in the first column of your program's output at most once. Also, since we require the overlap to be at least 40 bases long, no number less than 40 should ever appear in the last column.
Notes:
* You can assume all reads are error-free and from the forward strand. You do not need to consider sequencing errors or reverse complements.
* Below is a hint that can make things speedier.
* The order of the output lines is not important.
Hint 1: the following function groups reads such that you can avoid comparing every read to every other read when looking for suffix/prefix matches. It builds a dictionary called table where the keys are k-mers and the values are sets containing the names of all reads containing that k-mer. Since you are looking for overlaps of length at least 40, you only need to compare reads if they have at least 1 40-mer in common.
End of explanation
"""
def suffixPrefixMatch(str1, str2, min_overlap):
''' Returns length of longest suffix of str1 that is prefix of
str2, as long as that suffix is at least as long as min_overlap. '''
if len(str2) < min_overlap: return 0
str2_prefix = str2[:min_overlap]
str1_pos = -1
while True:
str1_pos = str1.find(str2_prefix, str1_pos + 1)
if str1_pos == -1: return 0
str1_suffix = str1[str1_pos:]
if str2.startswith(str1_suffix): return len(str1_suffix)
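# Quick sanity check of suffixPrefixMatch: the 5-character suffix "CATGG" of the first
# string is a prefix of the second, and 5 >= the minimum overlap of 3, so 5 is returned.
suffixPrefixMatch('TTACATGG', 'CATGGCAA', 3)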
"""
Explanation: Hint 2: here's a function for finding suffix/prefix matches; we saw this in class:
End of explanation
"""
import sys
def write_solution(genome, per_line=60, out=sys.stdout):
offset = 0
out.write('>solution\n')
while offset < len(genome):
nchars = min(len(genome) - offset, per_line)
line = genome[offset:offset+nchars]
offset += nchars
out.write(line + '\n')
"""
Explanation: Part 3: Build unitigs
50% of the points for this question
Goal: Write a program that takes the output of the overlap program from part 1 and creates uniquely assemblable contigs (unitigs), using the best buddy algorithm described below.
We already determined each read's best buddy to the right. I'll abbreviate this as bbr. We did not attempt to compute each read's best buddy to the left (bbl), but we can infer it from the bbrs. Consider the following output:
A B 60
E A 40
C B 70
D C 40
$A$'s bbr is $B$. But $B$'s bbl is $C$, not $A$! Your program should form unitigs by joining together two reads $X$ and $Y$ if they are mutual best buddies. $X$ and $Y$ are mutual best buddies if $X$'s bbr is $Y$ and $Y$'s bbl is $X$, or vice versa. In this example, we would join $D$, $C$, and $B$ into a single unitig (and in that order), and would join reads $E$ and $A$ into a single unitig (also in that order).
Your program's output should consist of several entries like the following, with one entry per unitig:
START UNITIG 1 D
C 40
B 70
END UNITIG 1
START UNITIG 2 E
A 40
END UNITIG 2
The first entry represents a unitig with ID 1 consisting of 3 reads. The first (leftmost) read is D. The second read, C, has a 40 nt prefix that is a suffix of the previous read (D). The third (rightmost) read in the contig (B) has a 70 bp prefix that is a suffix of the previous read (C).
Each read should be contained in exactly one unitig. The order of unitigs in the file is not important, but the unitig IDs should be integers and assigned in ascending order.
Note: we will never provide an input that can result in a circular unitig (i.e. one where a chain of mutual best buddies loops back on itself.)
Hint: the correct solution consists of exactly 4 unitigs.
OPTIONAL Part 4: Finish the assembly
This part is optional. You can submit your solution if you like. No extra credit will be awarded.
Goal: Assemble the genome! Report the sequence of the original genome as a FASTA file.
This requires that you compare the unitigs to each other, think about what order they must go in, and then put them together accordingly. Submit your solution as a single FASTA file containing a single sequence named "solution". The FASTA file should be "wrapped" so that no line has more than 60 characters. You can use the following Python code to write out your answer.
End of explanation
"""
import random
random.seed(5234)
write_solution(''.join([random.choice('ACGT') for _ in range(500)]))
"""
Explanation: Here is an example of what your output should look like. Note how the sequence is spread over many lines.
End of explanation
"""
|
DS-100/sp17-materials | sp17/labs/lab06/lab06_master.ipynb | gpl-3.0 | !pip install ipython-sql
%load_ext sql
%sql sqlite:///./lab06.sqlite
import sqlalchemy
engine = sqlalchemy.create_engine("sqlite:///lab06.sqlite")
connection = engine.connect()
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab06.ok')
"""
Explanation: Lab 6: SQL
End of explanation
"""
%%sql
DROP TABLE IF EXISTS users;
DROP TABLE IF EXISTS follows;
CREATE TABLE users (
USERID INT NOT NULL,
NAME VARCHAR (256) NOT NULL,
YEAR FLOAT NOT NULL,
PRIMARY KEY (USERID)
);
CREATE TABLE follows (
USERID INT NOT NULL,
FOLLOWID INT NOT NULL,
PRIMARY KEY (USERID, FOLLOWID)
);
%%capture
count = 0
users = ["Ian", "Daniel", "Sarah", "Kelly", "Sam", "Alison", "Henry", "Joey", "Mark", "Joyce", "Natalie", "John"]
years = [1, 3, 4, 3, 4, 2, 5, 2, 1, 3, 4, 2]
for username, year in zip(users, years):
count += 1
%sql INSERT INTO users VALUES ($count, '$username', $year);
%%capture
follows = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1,
0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1,
0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1,
1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1,
0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1,
1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0,
1, 1, 0, 1]
for i in range(12):
for j in range(12):
if i != j and follows[i + j*12]:
%sql INSERT INTO follows VALUES ($i+1, $j+1);
"""
Explanation: Rapidgram
The date: March, 2017. All of the students at Berkeley are obsessed with the hot new social networking app, Rapidgram, where users can share text and image posts. You've been hired as Rapidgram's very first Data Scientist, in charge of analyzing their petabyte-scale user data, in order to sell it to credit card companies (I mean, they had to monetize somehow). But before you get into that, you need to learn more about their database schema.
First, run the next few cells to generate a snapshot of their data. It will be saved locally as the file lab06.sqlite.
End of explanation
"""
q1 = """
...
"""
%sql $q1
q1_answer = connection.execute(q1).fetchall()
_ = ok.grade('q1')
_ = ok.backup()
"""
Explanation: Question 1: Joey's Followers
How many people follow Joey?
End of explanation
"""
q2 = """
...
"""
%sql $q2
q2_answer = connection.execute(q2).fetchall()
_ = ok.grade('q2')
_ = ok.backup()
q2_answer
"""
Explanation: Question 2: I Ain't no Followback Girl
How many people does Joey follow?
End of explanation
"""
q3 = """
...
"""
%sql $q3
q3_answer = connection.execute(q3).fetchall()
_ = ok.grade('q3')
_ = ok.backup()
"""
Explanation: Question 3: Know your Audience
What are the names of Joey's followers?
End of explanation
"""
q4 = """
...
"""
%sql $q4
q4_answer = connection.execute(q4).fetchall()
_ = ok.grade('q4')
_ = ok.backup()
"""
Explanation: Question 4: Popularity Contest
How many followers does each user have? You'll need to use GROUP BY to solve this. List only the top 5 users by number of followers.
End of explanation
"""
q5a = """
SELECT u1.name as follower, u2.name as followee
FROM follows, users as u1, users as u2
WHERE follows.userid=u1.userid
AND follows.followid=u2.userid
AND RANDOM() < 0.33
"""
"""
Explanation: Question 5: Randomness
Rapidgram wants to get a random sample of their userbase. Specifically, they want to look at exactly one-third of the follow-relations in their data. A Rapidgram engineer suggests the following SQL query:
End of explanation
"""
q5b = """
...
"""
%sql $q5b
q5_answers = [connection.execute(q5b).fetchall() for _ in range(100)]
_ = ok.grade('q5')
_ = ok.backup()
"""
Explanation: Do you think this query will work as intended? Why or why not? Try designing a better query below:
End of explanation
"""
q6a = """
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=10
)
SELECT value
FROM generate_series
"""
%sql $q6a
"""
Explanation: Question 6: More Randomness
Rapidgram leadership wants to give more priority to more experienced users, so they decide to weight a survey of users towards students who have spend a greater number of years at berkeley. They want to take a sample of 10 students, weighted such that a student's chance of being in the sample is proportional to their number of years spent at berkeley - for instance, a student with 6 years has three times the chance of a student with 2 years, who has twice the chance of a student with only one year.
To take this sample, they've provided you with a helpful temporary view. You can run the cell below to see its functionality.
End of explanation
"""
q6b = """
WITH RECURSIVE generate_series(value) AS (
SELECT 0
UNION ALL
SELECT value+1 FROM generate_series
WHERE value+1<=12
)
SELECT name
FROM ...
WHERE ...
ORDER BY ...
LIMIT 10
"""
%sql $q6b
q6_answers = [connection.execute(q6b).fetchall() for _ in range(100)]
_ = ok.grade('q6')
_ = ok.backup()
"""
Explanation: Using the generate_series view, get a sample of ten students, weighted in this manner.
End of explanation
"""
q7 = """
SELECT name FROM (
SELECT ...
)
WHERE year > avg_follower_years
"""
%sql $q7
q7_answer = connection.execute(q7).fetchall()
_ = ok.grade('q7')
_ = ok.backup()
_ = ok.grade_all()
_ = ok.submit()
"""
Explanation: Question 7: Older and Wiser (challenge)
List every person who has been at Berkeley longer - that is, their year is greater - than their average follower.
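One possible shape for the inner query, sketched using the join pattern from Question 5 (the u1 rows are the followers of u2):
```sql
SELECT name FROM (
    SELECT u2.name AS name,
           u2.year AS year,
           AVG(u1.year) AS avg_follower_years
    FROM follows, users AS u1, users AS u2
    WHERE follows.userid = u1.userid
      AND follows.followid = u2.userid
    GROUP BY u2.userid
)
WHERE year > avg_follower_years
```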
End of explanation
"""
|
madHatter106/DataScienceCorner | posts/a-bayesian-tutorial-in-python-part-I.ipynb | mit | import pickle
import warnings
import sys
import pandas as pd
import numpy as np
from scipy.stats import norm, uniform
import seaborn as sb
import matplotlib.pyplot as pl
from matplotlib import rcParams
from matplotlib import ticker as mtick
print('Versions:')
print('---------')
print(f'python: {sys.version.split("|")[0]}')
print(f'numpy: {np.__version__}')
print(f'pandas: {pd.__version__}')
print(f'seaborn: {sb.__version__}')
%matplotlib inline
warnings.filterwarnings('ignore', category=FutureWarning)
"""
Explanation: <center><u><u>Bayesian Modeling for the Busy and the Confused - Part I</u></u></center>
<center><i>Basic Principles of Bayesian Computation and the Grid Approximation</i></center>
Currently, the capacity to gather data is far ahead of the ability to generate meaningful insight using conventional approaches. Hopes of alleviating this bottleneck have come through the application of machine learning tools. Among these tools one that is increasingly garnering traction is probabilistic programming, particularly Bayesian modeling. In this paradigm, variables that are used to define models carry a probabilistic distribution rather than a scalar value. "Fitting" a model to data can then, simplistically, be construed as finding the appropriate parameterization for these distributions, given the model structure and the data. This offers a number of advantages over other methods, not the least of which is the estimation of uncertainty around model results. This in turn can better inform subsequent processes, such as decision-making, and/or scientific discovery.
<br><br>
<u>Part-I overview</u>:
The present is the first of a two-notebook series, the subject of which is a brief, basic, but hands-on programmatic introduction to Bayesian modeling. This first notebook begins with an overview of a few key probability principles relevant to Bayesian inference. An illustration of how to put these in practice follows. In particular, I will demonstrate one of the more intuitive approaches to Bayesian computation: Grid Approximation (GA). With this framework I will show how to create simple models that can be used to interpret and predict real world data. <br>
<u>Part-II overview</u>:
GA is computationally intensive and runs into problems quickly when the data set is large and/or the model increases in complexity. One of the more popular solutions to this problem is the Markov Chain Monte-Carlo (MCMC) algorithm. The implementation of MCMC in Bayesian models will be the subject of the second notebook of this series.
<br>
<u>Hands-on approach with Python</u>:
Bayesian modeling cannot be understood without practice. To that end, this notebook uses code snippets that should be iteratively modified and run for better insight.
As of this writing the most popular programming language in machine learning is Python. Python is an easy language to pick up. Python is free, open source, and a large number of very useful libraries have been written over the years that have propelled it to its current place of prominence in a number of fields, in addition to machine learning.
<br><br>
I use Python (3.6+) code to illustrate the mechanics of Bayesian inference in lieu of lengthy explanations. I also use a number of dedicated Python libraries that shorten the code considerably. A solid understanding of Bayesian modeling cannot be spoon-fed and can only come from getting one's hands dirty. Emphasis is therefore on readable, reproducible code. This should ease the work the interested reader has to do to get some practice re-running the notebook and experimenting with some of the coding and Bayesian modeling patterns presented. Some know-how is required regarding installing and running a Python distribution, the required libraries, and jupyter notebooks; this is easily gleaned from the internet. A popular option in the machine learning community is Anaconda.
<a id='TOP'></a>
Notebook Contents
Basics: Joint probability, Inverse probability and Bayes' Theorem
Example: Inferring the Statistical Distribution of Chlorophyll from Data
Grid Approximation
Impact of priors
Impact of data set size
MCMC
PyMC3
Regression
Data Preparation
Regression in PyMC3
Checking Priors
Model Fitting
Flavors of Uncertainty
[Final Comments](#Conclusion)
End of explanation
"""
μ = np.linspace(-2, 2, num=200) # μ-axis
σ = np.linspace(0, 2, num=200) # σ-axis
"""
Explanation: <a id='BASIC'></a>
Back to Contents
1. <u>Basics</u>:
$\Rightarrow$Joint probability, Inverse probability and Bayes' rule
<br>
Here's a short list of basic concepts that will help in understanding what is going on:
Joint probability of two events $A$, $B$:
$$P(A, B)=P(A|B)\times P(B)=P(B|A)\times P(A)$$
If A and B are independent: $$P(A|B) = P(A)\ \leftrightarrow P(A,B) = P(A)\times P(B)$$
Inverse probability:$$\boxed{P(A|B) = \frac{P(B|A) \times P(A)}{P(B)}}$$
$\rightarrow$Inverse probability is handy when $P(A|B)$ is desired but hard to compute, while its counterpart, $P(B|A)$, is easy to compute. The result above, which is derived directly from the joint probability formulation above, is referred to as Bayes' theorem/rule. One might ask next how this is used to build a "Bayesian model."
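To make the inverse-probability formula concrete, here is a tiny numerical illustration (the numbers are made up and unrelated to the chlorophyll example used later):
```python
# Made-up numbers, purely for illustration
p_B_given_A = 0.9   # P(B|A)
p_A = 0.01          # P(A)
p_B = 0.05          # P(B)

# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(p_A_given_B)  # 0.18
```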
$\Rightarrow$Extending Bayes' theorem to model building
<br>
Given a model:
Hypotheses (\(H\)): values that model parameters can take
\( P(H) \): probability of each value in H
Data (\( D \))
\( P(D) \): probability of the data, commonly referred to as "Evidence."
Approach
* formulate initial opinion on what $H$ might include and with what probability, $P(H)$
* collect data ($D$)
* update $P(H)$ using $D$ and Bayes' theorem
$$\frac{P(H)\times P(D|H)}{P(D)} = P(H|D)$$
Computing the "Evidence", P(D), can yield intractable integrals to solve. Fortunately, it turns out that we can approximate the posterior, and give those integrals a wide berth. Hereafter, P(D) will be considered a normalization constant and will therefore be dropped; without prejudice, as it turns out.<br><br>
$$\boxed{P(H) \times P(D|H) \propto P(H|D)}$$
Note that what we care about is updating H, model parameters, after evaluating some observations.
Let's go over each of the elements of this proportionality statement.
The prior
$$\underline{P(H)}\times P(D|H) \propto P(H|D)$$
$H$: set of values that model parameters might take with corresponding probability $P(H)$.
Priors should encompass justifiable assumptions/context information and nothing more.
We can use probability distributions to express $P(H)$ as shown below.
The likelihood
$$P(H)\times \underline{P(D|H)} \propto P(H|D)$$
probability of the data, \(D\), given \(H\).
in the frequentist framework, this quantity is maximized to find the "best" fit \(\rightarrow\) Likelihood Maximization.
maximizing the likelihood means finding a particular value for H, \(\hat{H}\).
for simple models and uninformative priors, \(\hat{H}\) often corresponds to the mode of the Bayesian posterior (see below).
likelihood maximization discards a lot of potentially valuable information (the posterior).
The posterior:
$$P(H)\times P(D|H) \propto \underline{P(H|D)}$$
it's what Bayesians are after!!!
updated probability of \(H\) after exposing the model to \(D\).
used as prior for next iteration \(P(H|D)\rightarrow P(H)\), when new data become available.
$P(H|D)$ naturally yields uncertainty around the estimate via propagation.
In the next section I will attempt to illustrate the mechanics of Bayesian inference on real-world data.
Back to Contents
<a id='JustCHL'></a>
2. <u>Bayesian "Hello World": Inferring the Statistical Distribution of Chlorophyll</u>
<p>
The goal of Bayesian modeling is to approximate the process that generated a set of outcomes observed. Often, a set of input observations can be used to modify the expected outcome via a deterministic model expression. In a first instance, neither input observations nor deterministic expression are included. Only the set of outcomes is of concern here and the model is reduced to a probability assignment, using a simple statistical distribution. <br>
For the present example the outcomes of interest are some chlorophyll measurements. The assumption is that the process generating these observations can be approximated, <u>after log-transformation of the data</u>, by a Gaussian distribution whose scalar parameters are not expected to vary. The goal is to infer the range of values these parameters - a constant central tendency, \\(\mu\\), and a constant spread \\(\sigma\\) - could take. Note that this example, while not realistic, is intended to help build intuition. Further down the road, the use of inputs and deterministic models will be introduced with linear regression as an example.</p>
<p>I will contrast two major approaches: <u>Grid computation</u> and <u>Markov Chain Monte-Carlo</u>. Note that in both methods, as mentioned earlier, the evidence \(P(D)\) is ignored. In both cases, relative probabilities are computed and subsequently normalized so as to add to 1.</p>
A. Grid Computation
In grid-based inference, all the possible parameter combinations to infer upon are fixed beforehand, through the building of a grid. This grid has as many dimensions as there are parameters in the model of interest. The user needs to define a range and a resolution for each dimension. This choice depends on the computing power available and the requirements of the problem at hand. I will illustrate that as the model complexity increases, along with the number of parameters featured, the curse of dimensionality can quickly take hold and limit the usefulness of this approach.
Given a set of ranges and resolutions for the grid's dimensions, each grid point "stores" the joint probability of the corresponding parameter values. Initially the grid is populated by the stipulation of prior probabilities that should encode what is deemed "reasonable" by the practitioner. These priors can diverge between individual users. This is not a problem, however, as it makes assumptions - and therefore grounds for disagreement - explicit and specific. As these priors are confronted with an amount of data that is large relative to the model complexity, initially diverging priors tend to converge.
Given that our model is a Gaussian distribution, our set of hypotheses (\(H\) in the previous section) includes 2 vectors: a mean \(\mu\) and a standard deviation \(\sigma\). The next couple of lines of code defines the corresponding two axes of a \(200 \times 200\) grid, which fix the range of the axes and, by extension, their resolution.
End of explanation
"""
df_grid = pd.DataFrame([[μ_i, σ_i]
for σ_i in σ for μ_i in μ], columns=['μ', 'σ'])
"""
Explanation: For ease of manipulation I will use a pandas DataFrame, which at first sight looks deceivingly like a 'lame' spreadsheet, to store the grid coordinates. I use this dataframe to subsequently store the prior definitions, and the results of the likelihood and posterior computation at each grid point. Here's the code that defines the DataFrame, named df_grid, and populates its first two columns, \(\mu\) and \(\sigma\).
End of explanation
"""
μ_log_prior = norm.logpdf(df_grid.μ, 1, 1)
σ_log_prior = uniform.logpdf(df_grid.σ, 0, 2)
"""
Explanation: Accessing say the column \(\mu\) is as simple as typing: df_grid.\(\mu\)
Priors
The next step is to define priors for both \(\mu\) and \(\sigma\) that encode the user's knowledge, or more commonly her or his lack thereof. Principles guiding the choice of priors are beyond the scope of this post; the priors below are chosen simply because they seem reasonable. In this case, chlorophyll is expected to be log-transformed, so \(\mu\) should range within a few digits north and south of 0, and \(\sigma\) should be positive and not expected to range beyond a few orders of magnitude. Thus a normal distribution for \(\mu\) and a uniform distribution for \(\sigma\), parameterized as below, seem to make sense: <br>
\(\rightarrow \mu \sim \mathcal{N}(mean=1, st.dev.=1)\); a gaussian (normal) distribution centered at 1, with an standard deviation of 1<br>
\(\rightarrow \sigma \sim \mathcal{U}(lo=0, high=2)\); a uniform distribution bounded at 0 and 2<br>
Note that these are specified independently because \(\mu\) and \(\sigma\) are assumed independent.
The code below computes the probability for each \(\mu\) and \(\sigma\) values;
The lines below show how to pass the grid defined above to the scipy.stats distribution functions to compute the prior at each grid point.
End of explanation
"""
# log prior probability
df_grid['log_prior_prob'] = μ_log_prior + σ_log_prior
# straight prior probability from exponentiation of log_prior_prob
df_grid['prior_prob'] = np.exp(df_grid.log_prior_prob
- df_grid.log_prior_prob.max())
"""
Explanation: Note that the code above computes the log (prior) probability of each parameter at each grid point. Because the parameters \(\mu\) and \(\sigma\) are assumed independent, the joint prior probability at each grid point is just the product of the individual prior probabilities. Products of probabilities can result in underflow errors, so instead the joint probability over the entire grid is computed by summing log-probabilities and then taking the exponent of the result. I store both the joint log-probability and the joint probability at each grid point in the pandas dataframe with the code snippet below:
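As a quick aside, a toy demonstration of why log-probabilities are preferred (made-up numbers, independent of the grid built above):
```python
import numpy as np

probs = np.full(1000, 1e-5)      # a thousand small probabilities
print(np.prod(probs))            # 0.0 -> the straight product underflows
print(np.sum(np.log(probs)))     # about -11512.9 -> finite and usable
```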
End of explanation
"""
f, ax = pl.subplots(figsize=(6, 6))
df_grid.plot.hexbin(x='μ', y='σ', C='prior_prob', figsize=(7,6),
cmap='plasma', sharex=False, ax=ax);
ax.set_title('Prior')
f.savefig('./resources/f1_grid_prior.svg')
"""
Explanation: Since there are only two parameters, visualizing the joint prior probability is straightforward:
End of explanation
"""
df_data = pd.read_pickle('./pickleJar/df_logMxBlues.pkl')
df_data[['MxBl-Gr', 'chl_l']].info()
"""
Explanation: In the figure above looking across the \(\sigma\)-axis reveals the 'wall' of uniform probability where none of the positive values, bounded here between 0 and 2.0, is expected to be more likely. Looking down the \(\mu\)-axis, on the other hand, reveals the gaussian peak around 1, within a grid of floats extending from -2.0 to 2.0.
Once priors have been defined, the model is ready to be fed some data. The chl_l data loaded earlier had several thousand observations. Because grid approximation is computationally intensive, I'll only pick a handful of data. For reasons discussed further below, this will enable the comparison of the effects different priors can have on the final result.
I'll start by selecting 10 observations.
<a id='GRID'></a>
Building the Grid
For this example I simply want to approximate the distribution of chl_l following these steps:
Define a model to approximate the process that generates the observations
Theory: data generation is well approximated by a Gaussian.
Hypotheses (\(H\)) therefore include 2 vectors; mean \(\mu\) and standard deviation \(\sigma\).
Both parameters are expected to vary within a certain range.
Build the grid of model parameters
2D grid of \((\mu, \sigma)\) pair
Propose priors
define priors for both \(\mu\) and \(\sigma\)
Compute likelihood
Compute posterior
First, I load data stored in a pandas dataframe that contains, among other things, log-transformed phytoplankton chlorophyll (chl_l) values measured during oceanographic cruises around the world.
End of explanation
"""
f, ax = pl.subplots(figsize=(4,4))
sb.kdeplot(df_data.chl_l, ax=ax, legend=False);
ax.set_xlabel('chl_l');
f.tight_layout()
f.savefig('./figJar/Presentation/fig1_chl.svg', dpi=300, format='svg')
"""
Explanation: There are two columns. MxBl-Gr is a blue-to-green ratio that will serve as a predictor of chlorophyll when I address regression. For now, MxBl-Gr is ignored; only chl_l is of interest. Here is what the distribution of chl_l, smoothed by kernel density estimation, looks like:
End of explanation
"""
print(df_grid.shape)
df_grid.head(7)
"""
Explanation: ... and here is what it looks like.
End of explanation
"""
sample_N = 10
df_data_s = df_data.dropna().sample(n=sample_N)
g = sb.PairGrid(df_data_s.loc[:,['MxBl-Gr', 'chl_l']],
diag_sharey=False)
g.map_diag(sb.kdeplot, )
g.map_offdiag(sb.scatterplot, alpha=0.75, edgecolor='k');
make_lower_triangle(g)
g.axes[1,0].set_ylabel(r'$log_{10}(chl)$');
g.axes[1,1].set_xlabel(r'$log_{10}(chl)$');
"""
Explanation: In the figure above, looking down the \(\sigma\)-axis shows the 'wall' of uniform probability where none of the positive values, capped here at 2.0, is expected to be more likely. Looking down the \(\mu\)-axis, on the other hand, reveals the gaussian peak around 1, within a grid of floats extending from -2.0 to 2.0.
Once priors have been defined, the model is ready to be fed some data. The chl_ loaded earlier had several thousand observations. Because grid approximation is computationally intensive, I'll only pick a handful of data. For reasons discussed further below, this will enable the comparison of the effects different priors can have on the final result.
I'll start by selecting 10 observations.
End of explanation
"""
df_grid['LL'] = np.sum(norm.logpdf(df_data_s.chl_l.values.reshape(1, -1),
loc=df_grid.μ.values.reshape(-1, 1),
scale=df_grid.σ.values.reshape(-1, 1)
), axis=1)
"""
Explanation: Compute Log-Likelihood of the data given every pair \( ( \mu ,\sigma)\). This is done by summing the log-probability of each datapoint, given each grid point; i.e. each \((\mu, \sigma)\) pair.
End of explanation
"""
# compute log-probability
df_grid['log_post_prob'] = df_grid.LL + df_grid.log_prior_prob
# convert to straight prob.
df_grid['post_prob'] = np.exp(df_grid.log_post_prob
- df_grid.log_post_prob.max())
# Plot Multi-Dimensional Prior and Posterior
f, ax = pl.subplots(ncols=2, figsize=(12, 5), sharey=True)
df_grid.plot.hexbin(x='μ', y='σ', C='prior_prob',
cmap='plasma', sharex=False, ax=ax[0])
df_grid.plot.hexbin(x='μ', y='σ', C='post_prob',
cmap='plasma', sharex=False, ax=ax[1]);
ax[0].set_title('Prior Probability Distribution')
ax[1].set_title('Posterior Probability Distribution')
f.tight_layout()
f.savefig('./figJar/Presentation/grid1.svg')
"""
Explanation: Compute Posterior $P(\mu,\sigma\ | data) \propto P(data | \mu, \sigma) \times P(\mu, \sigma)$
End of explanation
"""
# Compute Marginal Priors and Posteriors for each Parameter
df_μ = df_grid.groupby(['μ']).sum().drop('σ', axis=1)[['prior_prob',
'post_prob']
].reset_index()
df_σ = df_grid.groupby(['σ']).sum().drop('μ', axis=1)[['prior_prob',
'post_prob']
].reset_index()
# Normalize Probability Distributions
df_μ.prior_prob /= df_μ.prior_prob.max()
df_μ.post_prob /= df_μ.post_prob.max()
df_σ.prior_prob /= df_σ.prior_prob.max()
df_σ.post_prob /= df_σ.post_prob.max()
#Plot Marginal Priors and Posteriors
f, ax = pl.subplots(ncols=2, figsize=(12, 4))
df_μ.plot(x='μ', y='prior_prob', ax=ax[0], label='prior');
df_μ.plot(x='μ', y='post_prob', ax=ax[0], label='posterior')
df_σ.plot(x='σ', y='prior_prob', ax=ax[1], label='prior')
df_σ.plot(x='σ', y='post_prob', ax=ax[1], label='posterior');
f.suptitle('Marginal Probability Distributions', fontsize=16);
f.tight_layout(pad=2)
f.savefig('./figJar/Presentation/grid2.svg')
"""
Explanation: <img src='./resources/grid1.svg'/>
End of explanation
"""
def compute_bayes_framework(data, priors_dict):
# build grid:
μ = np.linspace(-2, 2, num=200)
σ = np.linspace(0, 2, num=200)
df_b = pd.DataFrame([[μ_i, σ_i] for σ_i in σ for μ_i in μ],
columns=['μ', 'σ'])
# compute/store distributions
μ_prior = norm.logpdf(df_b.μ, priors_dict['μ_mean'],
priors_dict['μ_sd'])
σ_prior = uniform.logpdf(df_b.σ, priors_dict['σ_lo'],
priors_dict['σ_hi'])
# compute joint prior
df_b['log_prior_prob'] = μ_prior + σ_prior
df_b['prior_prob'] = np.exp(df_b.log_prior_prob
- df_b.log_prior_prob.max())
# compute log likelihood
df_b['LL'] = np.sum(norm.logpdf(data.chl_l.values.reshape(1, -1),
loc=df_b.μ.values.reshape(-1, 1),
scale=df_b.σ.values.reshape(-1, 1)
), axis=1)
# compute joint posterior
df_b['log_post_prob'] = df_b.LL + df_b.log_prior_prob
df_b['post_prob'] = np.exp(df_b.log_post_prob
- df_b.log_post_prob.max())
return df_b
def plot_posterior(df_, ax1, ax2):
df_.plot.hexbin(x='μ', y='σ', C='prior_prob',
cmap='plasma', sharex=False, ax=ax1)
df_.plot.hexbin(x='μ', y='σ', C='post_prob',
cmap='plasma', sharex=False, ax=ax2);
ax1.set_title('Prior Probability Distribution')
ax2.set_title('Posterior Probability Distribution')
def plot_marginals(df_, ax1, ax2, plot_prior=True):
"""Compute marginal posterior distributions."""
df_μ = df_.groupby(['μ']).sum().drop('σ',
axis=1)[['prior_prob',
'post_prob']
].reset_index()
df_σ = df_.groupby(['σ']).sum().drop('μ',
axis=1)[['prior_prob',
'post_prob']
].reset_index()
# Normalize Probability Distributions
df_μ.prior_prob /= df_μ.prior_prob.max()
df_μ.post_prob /= df_μ.post_prob.max()
df_σ.prior_prob /= df_σ.prior_prob.max()
df_σ.post_prob /= df_σ.post_prob.max()
#Plot Marginal Priors and Posteriors
if plot_prior:
df_μ.plot(x='μ', y='prior_prob', ax=ax1, label='prior');
df_σ.plot(x='σ', y='prior_prob', ax=ax2, label='prior')
df_μ.plot(x='μ', y='post_prob', ax=ax1, label='posterior')
df_σ.plot(x='σ', y='post_prob', ax=ax2, label='posterior');
"""
Explanation: Back to Contents
<a id='PriorImpact'></a>
Impact of Priors
End of explanation
"""
weak_prior=dict(μ_mean=1, μ_sd=1, σ_lo=0, σ_hi=2)
df_grid_1 = compute_bayes_framework(df_data_s, priors_dict=weak_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_1, axp[0], axp[1])
plot_marginals(df_grid_1, axp[2], axp[3])
axp[2].legend(['weak prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid3.svg')
"""
Explanation: Try two priors:
1. $\mu \sim \mathcal{N}(1, 1)$, $\sigma \sim \mathcal{U}(0, 2)$ - a weakly informative set of priors
End of explanation
"""
strong_prior=dict(μ_mean=-1.5, μ_sd=.1, σ_lo=0, σ_hi=2)
df_grid_2 = compute_bayes_framework(df_data_s, priors_dict=strong_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_2, axp[0], axp[1])
plot_marginals(df_grid_2, axp[2], axp[3])
axp[2].legend(['strong prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid4.svg')
"""
Explanation: <img src="./resources/grid3.svg?modified=3"/>
2. $\mu \sim \mathcal{N}(-1.5, 0.1)$, $\sigma \sim \mathcal{U}(0, 2)$ - a strongly informative prior
End of explanation
"""
sample_N = 500
# compute the inference dataframe
df_data_s = df_data.dropna().sample(n=sample_N)
# display the new sub-sample
g = sb.PairGrid(df_data_s.loc[:,['MxBl-Gr', 'chl_l']],
diag_sharey=False)
g.map_diag(sb.kdeplot, )
g.map_offdiag(sb.scatterplot, alpha=0.75, edgecolor='k');
make_lower_triangle(g)
g.axes[1,0].set_ylabel(r'$log_{10}(chl)$');
g.axes[1,1].set_xlabel(r'$log_{10}(chl)$');
%%time
df_grid_3 = compute_bayes_framework(df_data_s, priors_dict=weak_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_3, axp[0], axp[1])
plot_marginals(df_grid_3, axp[2], axp[3])
axp[2].legend(['weak prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid5.svg')
"""
Explanation: Back to Contents
<a id='DataImpact'></a>
Impact of data set size
sub-sample size is now 500 samples,
same two priors used
End of explanation
"""
df_grid_4 = compute_bayes_framework(df_data_s, priors_dict=strong_prior)
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 9))
axp = axp.ravel()
plot_posterior(df_grid_4, axp[0], axp[1])
plot_marginals(df_grid_4, axp[2], axp[3])
axp[2].legend(['strong prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid6.svg')
"""
Explanation: <img src="./resources/grid5.svg"/>
End of explanation
"""
f , axp = pl.subplots(ncols=2, nrows=2, figsize=(12, 8), sharey=True)
axp = axp.ravel()
plot_marginals(df_grid_3, axp[0], axp[1])
plot_marginals(df_grid_4, axp[2], axp[3])
axp[0].legend(['weak prior', 'posterior'])
axp[1].legend(['flat prior', 'posterior'])
axp[2].legend(['strong prior', 'posterior'])
axp[3].legend(['flat prior', 'posterior'])
f.tight_layout()
f.savefig('./figJar/Presentation/grid7.svg')
"""
Explanation: <img src="./resources/grid6.svg"/>
End of explanation
"""
%%time
priors=dict(μ_mean=-1.5, μ_sd=.1, σ_lo=0, σ_hi=2)
try:
df_grid_all_data= compute_bayes_framework(df_data, priors_dict=priors)
except MemoryError:
print("OUT OF MEMORY!")
print("--------------")
"""
Explanation: And using all the data?
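The failure is easy to anticipate: the likelihood step broadcasts all 200 x 200 grid points against every observation at once. A rough estimate of that single intermediate array, assuming the full dataset holds on the order of 5,000 rows (an assumption; the text above only says several thousand):
```python
grid_points = 200 * 200                       # (mu, sigma) pairs
n_obs = 5000                                  # assumed order of magnitude
print(grid_points * n_obs * 8 / 1e9, 'GB')    # ~1.6 GB of float64 for one array
```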
End of explanation
"""
|
hunterherrin/phys202-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Optimization Exercise 1
Imports
End of explanation
"""
def hat(x,a,b):
return b*x**4-a*x**2
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
"""
Explanation: Hat potential: $V(x) = b x^4 - a x^2$, as implemented by the hat function above.
End of explanation
"""
a = 5.0
b = 1.0
x = np.linspace(-3,3,60)
plt.plot(x,hat(x,a,b))
assert True # leave this to grade the plot
"""
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
"""
# Minimize from starting points on either side of x=0 to find both local minima
min_1 = opt.minimize(hat, 1.5, args=(a, b))
min_2 = opt.minimize(hat, -1.5, args=(a, b))
print('Minima at x =', min_2.x[0], 'and', min_1.x[0])
minsx = [min_2.x[0], min_1.x[0]]
minsy = [hat(min_2.x[0], a, b), hat(min_1.x[0], a, b)]
plt.figure(figsize=(10,5))
plt.plot(x,hat(x,a,b))
plt.scatter(minsx,minsy,100,c='r',marker='o')
plt.tight_layout()
plt.title('Hat Potential With Local Minimums')
plt.xlabel('x')
plt.ylabel('Hat Potential')
plt.grid()
assert True # leave this for grading the plot
"""
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
End of explanation
"""
|
cranium/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
words = set(text)
vocab_to_int = {}
int_to_vocab = {}
for i, word in enumerate(words):
vocab_to_int[word] = i
int_to_vocab[i] = word
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
return {
        '.': '%%period%%',
',': '%%comma%%',
'"': '%%quote%%',
';': '%%semicolon%%',
'!': '%%exclamation%%',
'?': '%%questionmark%%',
'(': '%%leftparen%%',
')': '%%rightparen%%',
'--': '%%dash%%',
'\n': '%%newline%%'
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
return tf.placeholder(tf.int32, shape=[None, None], name='input'), tf.placeholder(tf.int32, shape=[None, None], name='target'), tf.placeholder(tf.float32, name='learning_rate')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
    embed = get_embed(input_data, vocab_size, embed_dim)
rnn, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
ydata[-1] = int_text[0]
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 1
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
return loaded_graph.get_tensor_by_name('input:0'), loaded_graph.get_tensor_by_name('initial_state:0'), loaded_graph.get_tensor_by_name('final_state:0'), loaded_graph.get_tensor_by_name('probs:0')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
    # Sample from the predicted distribution instead of always taking the argmax;
    # this keeps the generated text from looping on the same few words.
    p = probabilities / probabilities.sum()
    return int_to_vocab[np.random.choice(len(p), p=p)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'homer_simpson'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
strandbygaard/deep-learning | weight-initialization/weight_initialization.ipynb | mit | %matplotlib inline
import tensorflow as tf
import helper
from tensorflow.examples.tutorials.mnist import input_data
print('Getting MNIST Dataset...')
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print('Data Extracted.')
"""
Explanation: Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
Testing Weights
Dataset
To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.
We'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.
End of explanation
"""
# Save the shapes of weights for each layer
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1])
"""
Explanation: Neural Network
<img style="float: left" src="images/neural_network.png"/>
For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.
End of explanation
"""
all_zero_weights = [
tf.Variable(tf.zeros(layer_1_weight_shape)),
tf.Variable(tf.zeros(layer_2_weight_shape)),
tf.Variable(tf.zeros(layer_3_weight_shape))
]
all_one_weights = [
tf.Variable(tf.ones(layer_1_weight_shape)),
tf.Variable(tf.ones(layer_2_weight_shape)),
tf.Variable(tf.ones(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'All Zeros vs All Ones',
[
(all_zero_weights, 'All Zeros'),
(all_one_weights, 'All Ones')])
"""
Explanation: Initialize Weights
Let's start looking at some initial weights.
All Zeros or Ones
If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.
Run the cell below to see the difference between weights of all zeros against all ones.
End of explanation
"""
helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))
"""
Explanation: As you can see the accuracy is close to guessing for both zeros and ones, around 10%.
The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
A good solution for getting these random weights is to sample from a uniform distribution.
Uniform Distribution
A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29) has an equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
Outputs random values from a uniform distribution.
The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
dtype: The type of the output: float32, float64, int32, or int64.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
We can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.
End of explanation
"""
# Default for tf.random_uniform is minval=0 and maxval=1
basline_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape)),
tf.Variable(tf.random_uniform(layer_2_weight_shape)),
tf.Variable(tf.random_uniform(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'Baseline',
[(basline_weights, 'tf.random_uniform [0, 1)')])
"""
Explanation: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Now that you understand the tf.random_uniform function, let's apply it to some initial weights.
Baseline
Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.
End of explanation
"""
uniform_neg1to1_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))
]
helper.compare_init_weights(
mnist,
'[0, 1) vs [-1, 1)',
[
(basline_weights, 'tf.random_uniform [0, 1)'),
(uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])
"""
Explanation: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
General rule for setting weights
The general rule for setting the weights in a neural network is to be close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where
$y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).
Let's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).
End of explanation
"""
uniform_neg01to01_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]
uniform_neg001to001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]
uniform_neg0001to0001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]
helper.compare_init_weights(
mnist,
'[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg1to1_weights, '[-1, 1)'),
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
"""
Explanation: We're going in the right direction; the accuracy and loss are better with [-1, 1). We still want smaller weights. How far can we go before it's too small?
Too small
Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.
End of explanation
"""
import numpy as np
general_rule_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs General Rule',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(general_rule_weights, 'General Rule')],
plot_n_batches=None)
"""
Explanation: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
End of explanation
"""
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))
"""
Explanation: The range we found and $y=1/\sqrt{n}$ are really close.
Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.
Normal Distribution
Unlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a normal distribution.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
normal_01_weights = [
tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
[
(uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
(normal_01_weights, 'Normal stddev 0.1')])
"""
Explanation: Let's compare the normal distribution against the previous uniform distribution.
End of explanation
"""
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
"""
Explanation: The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution.
Truncated Normal Distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
trunc_normal_01_weights = [
tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Normal vs Truncated Normal',
[
(normal_01_weights, 'Normal'),
(trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: Again, let's compare the results from the truncated normal distribution with those from the plain normal distribution we used before.
End of explanation
"""
helper.compare_init_weights(
mnist,
'Baseline vs Truncated Normal',
[
(basline_weights, 'Baseline'),
(trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will draw more samples from the normal distribution, increasing the likelihood that some of its choices fall more than 2 standard deviations from the mean. A quick sketch added after this cell puts a number on how often that happens.
We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
End of explanation
"""
|
Reddone/CarIncidentJupyter | RandomForest.ipynb | mit | # Load dataset
load_path = r"0_CarIncident_2014"
dataset = pd.read_pickle(load_path)
dataset.drop('IDProtocollo', inplace=True, axis=1)
dataset.drop('Progressivo', inplace=True, axis=1)
dataset.describe()
"""
Explanation: Starting from the dataset we saved earlier, we try to build a predictive algorithm that can understand which factors do or do not lead to an incident whose injury outcome ("TipoLesione") can be "Illeso", "Rimandato", "Ricoverato" or "Deceduto". Since we chose to work with categorical data, the best choice is to use trees, which are very well suited to this type of data. In particular, we will use bootstrap aggregation (bagging) to obtain a robust predictive model with good generalization ability.
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
# Create labels for classification
number = LabelEncoder()
dataset['TipoLesione'] = number.fit_transform(dataset['TipoLesione'].astype('str'))
print(list(number.classes_))
print(number.transform(number.classes_))
dataset.head()
# NaturaIncidente dummies
NaturaIncidente_dummies = pd.get_dummies(dataset['NaturaIncidente'], prefix='NaturaIncidente')
dataset = pd.concat([dataset, NaturaIncidente_dummies], axis=1)
dataset.drop('NaturaIncidente', inplace=True, axis=1)
# ParticolaritaStrade dummies
ParticolaritaStrade_dummies = pd.get_dummies(dataset['ParticolaritaStrade'], prefix='ParticolaritaStrade')
dataset = pd.concat([dataset, ParticolaritaStrade_dummies], axis=1)
dataset.drop('ParticolaritaStrade', inplace=True, axis=1)
# TipoStrada dummies
TipoStrada_dummies = pd.get_dummies(dataset['TipoStrada'], prefix='TipoStrada')
dataset = pd.concat([dataset, TipoStrada_dummies], axis=1)
dataset.drop('TipoStrada', inplace=True, axis=1)
# FondoStradale dummies
FondoStradale_dummies = pd.get_dummies(dataset['FondoStradale'], prefix='FondoStradale')
dataset = pd.concat([dataset, FondoStradale_dummies], axis=1)
dataset.drop('FondoStradale', inplace=True, axis=1)
# Pavimentazione dummies
Pavimentazione_dummies = pd.get_dummies(dataset['Pavimentazione'], prefix='Pavimentazione')
dataset = pd.concat([dataset, Pavimentazione_dummies], axis=1)
dataset.drop('Pavimentazione', inplace=True, axis=1)
# Segnaletica dummies
Segnaletica_dummies = pd.get_dummies(dataset['Segnaletica'], prefix='Segnaletica')
dataset = pd.concat([dataset, Segnaletica_dummies], axis=1)
dataset.drop('Segnaletica', inplace=True, axis=1)
# CondizioneAtmosferica dummies
CondizioneAtmosferica_dummies = pd.get_dummies(dataset['CondizioneAtmosferica'], prefix='CondizioneAtmosferica')
dataset = pd.concat([dataset, CondizioneAtmosferica_dummies], axis=1)
dataset.drop('CondizioneAtmosferica', inplace=True, axis=1)
# Traffico dummies
Traffico_dummies = pd.get_dummies(dataset['Traffico'], prefix='Traffico')
dataset = pd.concat([dataset, Traffico_dummies], axis=1)
dataset.drop('Traffico', inplace=True, axis=1)
# Visibilità dummies
Visibilità_dummies = pd.get_dummies(dataset['Visibilità'], prefix='Visibilità')
dataset = pd.concat([dataset, Visibilità_dummies], axis=1)
dataset.drop('Visibilità', inplace=True, axis=1)
# Illuminazione dummies
Illuminazione_dummies = pd.get_dummies(dataset['Illuminazione'], prefix='Illuminazione')
dataset = pd.concat([dataset, Illuminazione_dummies], axis=1)
dataset.drop('Illuminazione', inplace=True, axis=1)
# TipoVeicolo dummies
TipoVeicolo_dummies = pd.get_dummies(dataset['TipoVeicolo'], prefix='TipoVeicolo')
dataset = pd.concat([dataset, TipoVeicolo_dummies], axis=1)
dataset.drop('TipoVeicolo', inplace=True, axis=1)
# TipoPersona dummies
TipoPersona_dummies = pd.get_dummies(dataset['TipoPersona'], prefix='TipoPersona')
dataset = pd.concat([dataset, TipoPersona_dummies], axis=1)
dataset.drop('TipoPersona', inplace=True, axis=1)
# Sesso dummies
Sesso_dummies = pd.get_dummies(dataset['Sesso'], prefix='Sesso')
dataset = pd.concat([dataset, Sesso_dummies], axis=1)
dataset.drop('Sesso', inplace=True, axis=1)
# FaseGiorno dummies
FaseGiorno_dummies = pd.get_dummies(dataset['FaseGiorno'], prefix='FaseGiorno')
dataset = pd.concat([dataset, FaseGiorno_dummies], axis=1)
dataset.drop('FaseGiorno', inplace=True, axis=1)
# DimensioneIncidente dummies
DimensioneIncidente_dummies = pd.get_dummies(dataset['DimensioneIncidente'], prefix='DimensioneIncidente')
dataset = pd.concat([dataset, DimensioneIncidente_dummies], axis=1)
dataset.drop('DimensioneIncidente', inplace=True, axis=1)
# FasciaEta dummies
FasciaEta_dummies = pd.get_dummies(dataset['FasciaEta'], prefix='FasciaEta')
dataset = pd.concat([dataset, FasciaEta_dummies], axis=1)
dataset.drop('FasciaEta', inplace=True, axis=1)
dataset.head()
dataset.shape
"""
Explanation: First, we create the label column. This step is necessary because most libraries expect labels in the format [0, 1] for binary classification and [0, 1, ..., N] for N-class classification. To handle the categorical data we use one-hot encoding, also known as the dummy variables technique. For example, the "TipoPersona" column, which takes the values "Conducente", "Passeggero" and "Pedone", is transformed into 3 columns, namely "TipoPersona_Conducente", "TipoPersona_Passeggero" and "TipoPersona_Pedone"; each column holds a 1 when the record belongs to the corresponding category. A more compact, loop-based version of this encoding is sketched after this cell.
End of explanation
"""
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.cross_validation import StratifiedKFold
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble.gradient_boosting import GradientBoostingClassifier
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import train_test_split
# Generate training set and test set
targets = dataset['TipoLesione']
dataset.drop('TipoLesione', inplace=True, axis=1)
X_train, X_test, y_train, y_test = train_test_split(dataset, targets, test_size=0.25, random_state=42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
"""
Explanation: We now split the dataset into a training set and a test set, keeping a 0.75 / 0.25 ratio between the two. The training set will be used for cross validation, while the test set will be used to measure the model's performance.
End of explanation
"""
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
# Feature selection
clf = ExtraTreesClassifier(n_estimators=200, class_weight='balanced')
clf = clf.fit(X_train, y_train)
# Show feature importance
features = pd.DataFrame()
features['feature'] = X_train.columns
features['importance'] = clf.feature_importances_
features.sort_values('importance', ascending=False)
model = SelectFromModel(clf, prefit=True)
X_train_new = model.transform(X_train)
X_test_new = model.transform(X_test)
print(X_train_new.shape)
print(X_test_new.shape)
"""
Explanation: One-hot encoding considerably increases the number of features the algorithm has to deal with, making it heavier and, in some cases, introducing too much variance. In fact, at this point we have 79 features, and it would be preferable to work with far fewer. We therefore use the extra-trees technique to select the most important features within the training set and then remap the training and test sets into two new, easier-to-handle sets.
End of explanation
"""
# Random Forest with grid search Cross Validation
forest = RandomForestClassifier(max_features='sqrt', class_weight='balanced')
parameter_grid = {
'max_depth' : [6, 8, 10, 12, 14],
'n_estimators': [200, 220, 240, 260],
'criterion': ['gini', 'entropy'],
}
cross_validation = StratifiedKFold(y_train, n_folds=5)
grid_search = GridSearchCV(forest,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(X_train_new, y_train)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
"""
Explanation: Everything is ready to launch the algorithm! We set the parameters for cross validation, run the grid search, and wait for the random forest to do its job.
End of explanation
"""
from sklearn.metrics import accuracy_score
# Compute score
y_pred = grid_search.predict(X_test_new).astype(int)
accuracy_score(y_test, y_pred)
"""
Explanation: As we can see, the model's generalization ability is very good, since the score obtained is roughly the same on the training set and on the test set. The accuracy is also quite good, considering that we are working with a dataset from the Open Data world. We could certainly improve further by choosing better feature mappings during dataset preparation and refinement, and by trying different models, such as logistic regression, gradient boosted trees or Gaussian processes. A minimal gradient boosting sketch is added after this cell.
End of explanation
"""
|
ayushmaskey/ayushmaskey.github.io | jupyter/pandas_resampling.ipynb | mit | rng = pd.date_range('1/1/2011', periods=72, freq='H')
rng[1:4]
ts = pd.Series(list(range(len(rng))), index=rng)
ts.head()
"""
Explanation: Resampling
Resampling is useful when a series does not have a frequency and we want one,
or when it has a frequency but not the one we want.
End of explanation
"""
converted = ts.asfreq('45Min', method='ffill')
converted.head(10)
ts.shape
converted.shape
converted2 = ts.asfreq('3H')
converted2.head()
"""
Explanation: Convert the hourly series to a 45-minute frequency and fill in the newly created slots.
ffill --> forward fill --> reuse the previous observation's value
bfill --> backward fill --> reuse the next observation's value (a short sketch is added after this cell)
End of explanation
"""
#mean of 0 and 1, 2 and 3 etc
ts.resample('2H').mean()[0:10]
#resampling events in irregular time series
irreq_ts = ts[ list( np.random.choice( a = list( range( len(ts))), size=10, replace=False ))]
irreq_ts
irreq_ts = irreq_ts.sort_index()
irreq_ts
irreq_ts.resample('H').fillna( method='ffill', limit=5)
irreq_ts.resample('H').count()
"""
Explanation: Resampling is the better option when we don't want to lose data: unlike asfreq, which only keeps the values that fall on the new timestamps, resample aggregates or fills everything in between (a short comparison is added after this cell).
End of explanation
"""
|
postBG/DL_project | sentiment-network/Sentiment_Classification_Projects.ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
"""
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
Let's look through the data and form a hypothesis.
End of explanation
"""
from collections import Counter
import numpy as np
"""
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
"""
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
"""
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
"""
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
words = reviews[i].split(' ')
if (labels[i] == 'POSITIVE'):
for word in words:
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in words:
negative_counts[word] += 1
total_counts[word] += 1
"""
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
"""
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
"""
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
"""
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word, cnt in total_counts.most_common():
if(cnt >= 100):
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)
"""
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
"""
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.items():
if(ratio >= 1.):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1 / (ratio + 0.01))
"""
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so the absolute distance from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
"""
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
"""
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
"""
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
"""
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
"""
vocab_size = len(vocab)
print(vocab_size)
"""
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network_2.png')
"""
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
"""
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
"""
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
"""
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
"""
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
"""
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
"""
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
"""
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
layer_0[0][word2index[word]] += 1
"""
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
"""
update_input_layer(reviews[0])
layer_0
"""
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
"""
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here
if(label == 'POSITIVE'):
return 1
else:
return 0
"""
Explanation: TODO: Complete the implementation of get_target_for_label. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
"""
labels[0]
get_target_for_label(labels[0])
"""
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
"""
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
"""
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(layer_1.T, layer_2_delta)
self.weights_0_1 += self.learning_rate * np.dot(self.layer_0.T, layer_1_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
"""
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
"""
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
"""
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
"""
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
The model from Project 3 learned far too slowly. There are various fixes we could try in this situation, but the fundamental one is to take another look at the data.
End of explanation
"""
layer_0
"""
Explanation: Looking at the result below, the first element has a count as high as 18. With the structure shown above, a single large input element like this dominates all the other input elements and feeds into every hidden layer unit.
End of explanation
"""
list(vocab)[0]
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
"""
Explanation: Worse still, the 18 in the vector above belongs to a meaningless token such as ''. Most other reviews will also be full of meaningless values such as whitespace and filler words.
End of explanation
"""
review_counter.most_common()
"""
Explanation: The dominant values mostly come from tokens such as ' ', '' and 'the'. In other words, simple counts do not highlight the signal in the data.
This means that simple counts carry a lot of noise.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Projet 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
self.layer_0 *= 0
# JUST SET CORRESPONDENT ELEMENT
for word in review.split(' '):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(layer_1.T, layer_2_delta)
self.weights_0_1 += self.learning_rate * np.dot(self.layer_0.T, layer_1_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
"""
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
If we had tried to improve this the way we did in Project 3, by tuning things like the learning rate, we would have struggled a lot with little to show for it.
But by looking at the data and the structure of the network, we can improve the model very quickly.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse.png')
"""
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
"""
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
"""
Explanation: Even the network we improved in Project 4 still trains quite slowly, and the reason is that, as the figure above shows, most of the input values are sparse (zero).
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
"""
Explanation: What the steps above show is that the matrix multiplication we have been using is effectively just summing the weight rows at a few indices. In other words, for a sparse network we can reduce a full multiply-and-add over the whole matrix to the sum of just a few numbers.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1, self.hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(' '):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(self.layer_1.T, layer_2_delta)
for index in review:
self.weights_0_1[index] += self.learning_rate * layer_1_delta[0]
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function.
layer_0 = set()
for word in review.lower().split(' '):
if(word in self.word2index.keys()):
layer_0.add(self.word2index[word])
self.layer_1 *= 0
for index in layer_0:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
"""
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
"""
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, min_count = 10, polarity_cutoff = 0.1, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative ratio must be at least this big for the word to be considered
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set()
for review in reviews:
for word in review.split(' '):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if(pos_neg_ratios[word] >= polarity_cutoff or pos_neg_ratios[word] <= -polarity_cutoff):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1, self.hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(' '):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(self.layer_1.T, layer_2_delta)
for index in review:
self.weights_0_1[index] += self.learning_rate * layer_1_delta[0]
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function.
layer_0 = set()
for word in review.lower().split(' '):
if(word in self.word2index.keys()):
layer_0.add(self.word2index[word])
self.layer_1 *= 0
for index in layer_0:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
"""
Explanation: The statistics above show a large number of neutral words that carry little signal. Cutting them out lets the network focus on the more informative data and reduces the number of unnecessary computations. Removing rarely used words also trims outliers that contribute little to the overall patterns.
Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the reviews more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
In this case the network trains roughly 7x faster while accuracy drops by only about 3%, which is not a bad trade-off when solving real problems.
For real-world problems with very large amounts of training data, improving training speed is often what matters most.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
"""
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation
"""
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
"""
Explanation: The two results above show that the network detects words that are similar to one another, which indicates that it has learned the task properly.
End of explanation
"""
|
YuriyGuts/kaggle-quora-question-pairs | notebooks/preproc-embeddings-fasttext.ipynb | mit | from pygoose import *
import os
import subprocess
"""
Explanation: Preprocessing: Create a FastText Vector Database
Based on the vocabulary extracted from question texts, use a pretrained FastText model to query and save word vectors.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
"""
project = kg.Project.discover()
"""
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
"""
EMBEDDING_DIM = 300
"""
Explanation: Number of word embedding dimensions.
End of explanation
"""
FASTTEXT_EXECUTABLE = 'fasttext'
"""
Explanation: Path to FastText executable.
End of explanation
"""
PRETRAINED_MODEL_FILE = os.path.join(project.aux_dir, 'fasttext', 'wiki.en.bin')
"""
Explanation: Path to the FastText binary model pre-trained on Wikipedia.
End of explanation
"""
VOCAB_FILE = project.preprocessed_data_dir + 'tokens_lowercase_spellcheck.vocab'
"""
Explanation: Input vocab file (one word per line).
End of explanation
"""
OUTPUT_FILE = project.aux_dir + 'fasttext_vocab.vec'
"""
Explanation: Vector output file (one vector per line).
End of explanation
"""
vocab = kg.io.load_lines(VOCAB_FILE)
with open(OUTPUT_FILE, 'w') as f:
print(f'{len(vocab)} {EMBEDDING_DIM}', file=f)
"""
Explanation: Save FastText metadata
Add a header containing the number of words and embedding size to be readable by gensim.
End of explanation
"""
with open(VOCAB_FILE) as f_vocab:
with open(OUTPUT_FILE, 'a') as f_output:
subprocess.run(
[FASTTEXT_EXECUTABLE, 'print-word-vectors', PRETRAINED_MODEL_FILE],
stdin=f_vocab,
stdout=f_output,
)
"""
Explanation: Query and save FastText vectors
Replicate the command fasttext print-word-vectors model.bin < words.txt >> vectors.vec.
End of explanation
"""
|
d00d/quantNotebooks | Notebooks/quantopian_research_public/notebooks/lectures/Long-Short_Equity/notebook.ipynb | unlicense | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# We'll generate a random factor
current_factor_values = np.random.normal(0, 1, 10000)
equity_names = ['Equity ' + str(x) for x in range(10000)]
# Put it into a dataframe
factor_data = pd.Series(current_factor_values, index = equity_names)
factor_data = pd.DataFrame(factor_data, columns=['Factor Value'])
# Take a look at the dataframe
factor_data.head(10)
# Now let's say our future returns are dependent on our factor values
future_returns = current_factor_values + np.random.normal(0, 1, 10000)
returns_data = pd.Series(future_returns, index=equity_names)
returns_data = pd.DataFrame(returns_data, columns=['Returns'])
# Put both the factor values and returns into one dataframe
data = returns_data.join(factor_data)
# Take a look
data.head(10)
"""
Explanation: Long-Short Equity Strategies
By Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
https://github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.
Long-short equity refers to the fact that the strategy is both long and short in the equity market. This is a rather general statement, but has over time grown to mean a specific family of strategies. These strategies rank all stocks in the market using some model. The strategy then goes long (buys) the top $n$ equities of the ranking, and goes short on (sells) the bottom $n$ while maintaining equal dollar volume between the long and short positions. This has the advantage of being statistically robust, as by ranking stocks and entering hundreds or thousands of positions, you are making many bets on your ranking model rather than just a few risky bets. You are also betting purely on the quality of your ranking scheme, as the equal dollar volume long and short positions ensure that the strategy will remain market neutral (immune to market movements).
Ranking Scheme
A ranking scheme is any model that can assign each stocks a number, where higher is better or worse. Examples could be value factors, technical indicators, pricing models, or a combination of all of the above. The Ranking Universes by Factors lecture will cover ranking schemes in more detail. Ranking schemes are the secret sauce of any long-short equity strategy, so developing them is nontrivial.
Making a Bet on the Ranking Scheme
Once we have determined a ranking scheme, we would like to be able to profit from it. We do this by investing an equal amount of money long into the top of the ranking, and short into the bottom. This ensures that the strategy will make money proportionally to the quality of the ranking only, and will be market neutral.
Long and Short Baskets
If you are ranking $m$ equities, have $d$ dollars to invest, and your total target number of positions to hold is $2n$, then the long and short baskets are created as follows. For each equity in spots $1, \dots, n$ in the ranking, sell $\frac{1}{2n} * d$ dollars of that equity. For each equity in spots $m - n, \dots, m$ in the ranking, buy $\frac{1}{2n} * d$ dollars of that equity.
Friction Because of Prices
Because equity prices will not always divide $\frac{1}{2n} * d$ evenly, and equities must be bought in integer amounts, there will be some imprecision and the algorithm should get as close as it can to this number. Most algorithms will have access to some leverage during execution, so it is fine to buy slightly more than $\frac{1}{2n} * d$ dollars per equity. This does, however, cause some friction at low capital amounts. For a strategy running $d = 100000$, and $n = 500$, we see that
$$\frac{1}{2n} * d = \frac{1}{1000} * 100000 = 100$$
This will cause big problems for expensive equities, and cause the algorithm to be over-leveraged. This is alleviated by trading fewer equities or increasing the capital, $d$. Luckily, long-short equity strategies tend to be very high capacity, so for most purposes there is no ceiling on the amount of money one can invest. For more information on algorithm capacities, refer to the algorithm capacity lecture when it is released. A minimal position-sizing sketch illustrating this friction appears at the end of this notebook.
Returns Come From The Ranking Spread
The returns of a long-short equity strategy are dependent on how well the ranking spreads out the high and low returns. To see how this works, consider this hypothetical example.
End of explanation
"""
# Rank the equities
ranked_data = data.sort_values('Factor Value')
# Compute the returns of each basket
# Baskets of size 500, so we create an empty array of shape (10000/500)
number_of_baskets = 10000 // 500
basket_returns = np.zeros(number_of_baskets)
for i in range(number_of_baskets):
start = i * 500
end = i * 500 + 500
basket_returns[i] = ranked_data[start:end]['Returns'].mean()
# Plot the returns of each basket
plt.bar(range(number_of_baskets), basket_returns)
plt.ylabel('Returns')
plt.xlabel('Basket')
plt.legend(['Returns of Each Basket']);
"""
Explanation: Now that we have factor values and returns, we can see what would happen if we ranked our equities based on factor values, and then entered the long and short positions.
End of explanation
"""
basket_returns[number_of_baskets-1] - basket_returns[0]
"""
Explanation: Let's compute the returns if we go long the top basket and short the bottom basket.
End of explanation
"""
# We'll generate a random factor
current_factor_values = np.random.normal(0, 1, 10000)
equity_names = ['Equity ' + str(x) for x in range(10000)]
# Put it into a dataframe
factor_data = pd.Series(current_factor_values, index = equity_names)
factor_data = pd.DataFrame(factor_data, columns=['Factor Value'])
# Now let's say our future returns are dependent on our factor values
future_returns = -10 + current_factor_values + np.random.normal(0, 1, 10000)
returns_data = pd.Series(future_returns, index=equity_names)
returns_data = pd.DataFrame(returns_data, columns=['Returns'])
# Put both the factor values and returns into one dataframe
data = returns_data.join(factor_data)
# Rank the equities
ranked_data = data.sort_values('Factor Value')
# Compute the returns of each basket
# Baskets of size 500, so we create an empty array of shape (10000/500)
number_of_baskets = 10000 // 500
basket_returns = np.zeros(number_of_baskets)
for i in range(number_of_baskets):
start = i * 500
end = i * 500 + 500
basket_returns[i] = ranked_data[start:end]['Returns'].mean()
basket_returns[number_of_baskets-1] - basket_returns[0]
"""
Explanation: Market Neutrality is Built-In
The nice thing about making money based on the spread of the ranking is that it is unaffected by what the market does.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/awi/cmip6/models/sandbox-2/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-2', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
Kaggle/learntools | notebooks/deep_learning_intro/raw/ex5.ipynb | apache-2.0 | # Setup plotting
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('animation', html='html5')
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning_intro.ex5 import *
"""
Explanation: Introduction
In this exercise, you'll add dropout to the Spotify model from Exercise 4 and see how batch normalization can let you successfully train models on difficult datasets.
Run the next cell to get started!
End of explanation
"""
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GroupShuffleSplit
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import callbacks
spotify = pd.read_csv('../input/dl-course-data/spotify.csv')
X = spotify.copy().dropna()
y = X.pop('track_popularity')
artists = X['track_artist']
features_num = ['danceability', 'energy', 'key', 'loudness', 'mode',
'speechiness', 'acousticness', 'instrumentalness',
'liveness', 'valence', 'tempo', 'duration_ms']
features_cat = ['playlist_genre']
preprocessor = make_column_transformer(
(StandardScaler(), features_num),
(OneHotEncoder(), features_cat),
)
def group_split(X, y, group, train_size=0.75):
splitter = GroupShuffleSplit(train_size=train_size)
train, test = next(splitter.split(X, y, groups=group))
return (X.iloc[train], X.iloc[test], y.iloc[train], y.iloc[test])
X_train, X_valid, y_train, y_valid = group_split(X, y, artists)
X_train = preprocessor.fit_transform(X_train)
X_valid = preprocessor.transform(X_valid)
y_train = y_train / 100
y_valid = y_valid / 100
input_shape = [X_train.shape[1]]
print("Input shape: {}".format(input_shape))
"""
Explanation: First load the Spotify dataset.
End of explanation
"""
# YOUR CODE HERE: Add two 30% dropout layers, one after 128 and one after 64
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
# Wrong dropout layers
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.3),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
# Wrong dropout rate
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.7),
layers.Dense(64, activation='relu'),
layers.Dropout(0.7),
layers.Dense(1)
])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.3),
layers.Dense(64, activation='relu'),
layers.Dropout(0.3),
layers.Dense(1)
])
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
"""
Explanation: 1) Add Dropout to Spotify Model
Here is the last model from Exercise 4. Add two dropout layers, one after the Dense layer with 128 units, and one after the Dense layer with 64 units. Set the dropout rate on both to 0.3.
End of explanation
"""
model.compile(
optimizer='adam',
loss='mae',
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=512,
epochs=50,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot()
print("Minimum Validation Loss: {:0.4f}".format(history_df['val_loss'].min()))
"""
Explanation: Now run this next cell to train the model and see the effect of adding dropout.
End of explanation
"""
# View the solution (Run this cell to receive credit!)
q_2.check()
"""
Explanation: 2) Evaluate Dropout
Recall from Exercise 4 that this model tended to overfit the data around epoch 5. Did adding dropout seem to help prevent overfitting this time?
End of explanation
"""
import pandas as pd
concrete = pd.read_csv('../input/dl-course-data/concrete.csv')
df = concrete.copy()
df_train = df.sample(frac=0.7, random_state=0)
df_valid = df.drop(df_train.index)
X_train = df_train.drop('CompressiveStrength', axis=1)
X_valid = df_valid.drop('CompressiveStrength', axis=1)
y_train = df_train['CompressiveStrength']
y_valid = df_valid['CompressiveStrength']
input_shape = [X_train.shape[1]]
"""
Explanation: Now, we'll switch topics to explore how batch normalization can fix problems in training.
Load the Concrete dataset. We won't do any standardization this time. This will make the effect of batch normalization much more apparent.
End of explanation
"""
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
model.compile(
optimizer='sgd', # SGD is more sensitive to differences of scale
loss='mae',
metrics=['mae'],
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=64,
epochs=100,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[0:, ['loss', 'val_loss']].plot()
print(("Minimum Validation Loss: {:0.4f}").format(history_df['val_loss'].min()))
"""
Explanation: Run the following cell to train the network on the unstandardized Concrete data.
End of explanation
"""
# YOUR CODE HERE: Add a BatchNormalization layer before each Dense layer
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
# Wrong layers
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(1),
])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
model = keras.Sequential([
layers.BatchNormalization(input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(1),
])
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
"""
Explanation: Did you end up with a blank graph? Trying to train this network on this dataset will usually fail. Even when it does converge (due to a lucky weight initialization), it tends to converge to a very large number.
3) Add Batch Normalization Layers
Batch normalization can help correct problems like this.
Add four BatchNormalization layers, one before each of the dense layers. (Remember to move the input_shape argument to the new first layer.)
End of explanation
"""
model.compile(
optimizer='sgd',
loss='mae',
metrics=['mae'],
)
EPOCHS = 100
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=64,
epochs=EPOCHS,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[0:, ['loss', 'val_loss']].plot()
print(("Minimum Validation Loss: {:0.4f}").format(history_df['val_loss'].min()))
"""
Explanation: Run the next cell to see if batch normalization will let us train the model.
End of explanation
"""
# View the solution (Run this cell to receive credit!)
q_4.check()
"""
Explanation: 4) Evaluate Batch Normalization
Did adding batch normalization help?
End of explanation
"""
|
rasbt/pattern_classification | data_collecting/reading_mnist.ipynb | gpl-3.0 | import os
import struct
import numpy as np
def load_mnist(path, which='train'):
if which == 'train':
labels_path = os.path.join(path, 'train-labels-idx1-ubyte')
images_path = os.path.join(path, 'train-images-idx3-ubyte')
elif which == 'test':
labels_path = os.path.join(path, 't10k-labels-idx1-ubyte')
images_path = os.path.join(path, 't10k-images-idx3-ubyte')
else:
raise AttributeError('`which` must be "train" or "test"')
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II', lbpath.read(8))
labels = np.fromfile(lbpath, dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, n, rows, cols = struct.unpack('>IIII', imgpath.read(16))
images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)
return images, labels
"""
Explanation: Reading MNIST into NumPy arrays
Here, I provide some instructions for reading in the MNIST dataset of handwritten digits into NumPy arrays.
The dataset consists of the following files:
Training set images: train-images-idx3-ubyte.gz (9.9 MB, 47 MB unzipped, 60,000 samples)
Training set labels: train-labels-idx1-ubyte.gz (29 KB, 60 KB unzipped, 60,000 labels)
Test set images: t10k-images-idx3-ubyte.gz (1.6 MB, 7.8 MB, 10,000 samples)
Test set labels: t10k-labels-idx1-ubyte.gz (5 KB, 10 KB unzipped, 10,000 labels)
Dataset source: http://yann.lecun.com/exdb/mnist/
After downloading the files, I recommend unzipping them using the Unix/Linux gzip tool from the terminal for efficiency, e.g., using the command
gzip *ubyte.gz -d
in your local MNIST download directory.
Next, we define a simple function to read in the training or test images and corresponding labels.
End of explanation
"""
X, y = load_mnist(path='./', which='train')
print('Labels: %d' % y.shape[0])
print('Rows: %d, columns: %d' % (X.shape[0], X.shape[1]))
"""
Explanation: The returned images NumPy array will have the shape $n \times m$, where $n$ is the number of samples, and $m$ is the number of features. The images in the MNIST dataset consist of $28 \times 28$ pixels, and each pixel is represented by a grayscale intensity value. Here, we unroll the $28 \times 28$ images into 1D row vectors, which represent the rows in our matrix; thus $m=784$.
You may wonder why we read in the labels in such a strange way:
magic, n = struct.unpack('>II', lbpath.read(8))
labels = np.fromfile(lbpath, dtype=np.uint8)
This is to accommodate the way the labels were stored, which is described in the excerpt from the MNIST website:
<pre>[offset] [type] [value] [description]
0000 32 bit integer 0x00000801(2049) magic number (MSB first)
0004 32 bit integer 60000 number of items
0008 unsigned byte ?? label
0009 unsigned byte ?? label
........
xxxx unsigned byte ?? label</pre>
So, we first read in the "magic number" (describes a file format or protocol) and the "number of items" from the file buffer before we read the following bytes into a NumPy array using the fromfile method.
The fmt parameter value '>II' that we passed as an argument to struct.unpack can be composed into:
'>': big-endian (defines the order in which a sequence of bytes is stored)
'I': unsigned int
If everything executed correctly, we should now have a label vector of $60,000$ instances, and a $60,000 \times 784$ image feature matrix.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
def plot_digit(X, y, idx):
img = X[idx].reshape(28,28)
plt.imshow(img, cmap='Greys', interpolation='nearest')
plt.title('true label: %d' % y[idx])
plt.show()
for i in range(4):
plot_digit(X, y, i)
"""
Explanation: To check if the pixels were retrieved correctly, let us print a few images:
End of explanation
"""
np.savetxt('train_img.csv', X[:3000, :], delimiter=',', fmt='%i')
np.savetxt('train_labels.csv', y[:3000], delimiter=',', fmt='%i')
X = np.genfromtxt('train_img.csv', delimiter=',', dtype=int)
y = np.genfromtxt('train_labels.csv', delimiter=',', dtype=int)
"""
Explanation: Lastly, we can save the NumPy arrays as CSV files for more convenient retrieval; however, I wouldn't recommend storing the 60,000 samples as CSV files due to the enormous file size.
End of explanation
"""
|
borja876/Thinkful-DataScience-Borja | The%2BBrandy%2BBunch%2BShow.ipynb | mit | import numpy as np
import pandas as pd
df2=pd.DataFrame()
df2['BB_age']=[14, 12, 11, 10, 8, 7, 8]
#Calculate Mean & Median
mean = np.mean(df2['BB_age'])
median = np.median(df2['BB_age'])
print(mean)
print(median)
#Calculate Mode
(values, counts) = np.unique(df2['BB_age'], return_counts=True)
ind = np.argmax(counts)
print(ind)
values[ind]
#Calculate variance, Standard deviation & standard error
a= np.var(df2['BB_age'])
b= np.std(df2['BB_age'],ddof=1)
c= b/np.sqrt(len(df2['BB_age']))
print(a)
print(b)
print(c)
"""
Explanation: Using these estimates, if you had to choose only one estimate of central tendency and one estimate of variance to describe the data, which would you pick and why?
Estimate of Central Tendency:
I would choose the median among the mean, median & mode to estimate the central tendency, as it is more representative of the sample. The mode is too low in this case and will not tell us the average age, and the mean, although close to the median, is altered due to the extreme low value (7).
Estimate of Variance:
To describe the variance I would use the standard deviation. I pick the standard deviation because the standard error shows the meaningfulness of the estimated mean, and in this case my choice is the median as the most representative central tendency estimate of the sample.
Note: Having said that, the difference between the mean and the median is very small. Moreover, age can only be an integer, hence it would be 10 in both cases to be meaningful. If the mean is used as the central tendency estimate, then I would choose the standard error.
End of explanation
"""
df3=pd.DataFrame()
df3['BB_age']=[14, 12, 11, 10, 8, 7, 1]
#Calculate Mean & Median
mean = np.mean(df3['BB_age'])
median = np.median(df3['BB_age'])
print(mean)
print(median)
#Calculate Mode
(values, counts) = np.unique(df3['BB_age'], return_counts=True)
ind = np.argmax(counts)
print(ind)
values[ind]
#Calculate variance, Standard deviation & standard error
a= np.var(df3['BB_age'])
b= np.std(df3['BB_age'],ddof=1)
c= b/np.sqrt(len(df3['BB_age']))
print(a)
print(b)
print(c)
"""
Explanation: Next, Cindy has a birthday. Update your estimates- what changed, and what didn't?
Estimates that have not changed:
Median & mode
Estimates that have changed:
Mean, variance, standard deviation & standard error (of the variance estimates, the variance is the one that experiences the largest change due to its characteristics)
End of explanation
"""
df4=pd.DataFrame()
df4['Adult_population']=[0.20, 0.23, 0.17, 0.05]
#Calculate Mean & Median
mean = np.mean(df4['Adult_population'])
median = np.median(df4['Adult_population'])
print(mean)
print(median)
#Calculate Mode
(values, counts) = np.unique(df4['Adult_population'], return_counts=True)
ind = np.argmax(counts)
print(ind)
values[ind]
#Calculate variance, Standard deviation & standard error
a= np.var(df4['Adult_population'])
b= np.std(df4['Adult_population'],ddof=1)
c= b/np.sqrt(len(df4['Adult_population']))
print(a)
print(b)
print(c)
"""
Explanation: Nobody likes Cousin Oliver. Maybe the network should have used an even younger actor. Replace Cousin Oliver with 1-year-old Jessica, then recalculate again. Does this change your choice of central tendency or variance estimation methods?
I would maintain the central tendency method (the median), as it doesn't change, and we can see how the mean now deviates from the original value due to the lower value that we have introduced.
I would keep the standard deviation, as both the standard deviation and the standard error have experienced a 68% change.
End of explanation
"""
df4=pd.DataFrame()
df4['Adult_population']=[0.20, 0.23, 0.17]
#Calculate Mean & Median
mean = np.mean(df4['Adult_population'])
median = np.median(df4['Adult_population'])
print(mean)
print(median)
#Calculate Mode
(values, counts) = np.unique(df4['Adult_population'], return_counts=True)
ind = np.argmax(counts)
print(ind)
values[ind]
#Calculate variance, Standard deviation & standard error
a= np.var(df4['Adult_population'])
b= np.std(df4['Adult_population'],ddof=1)
c= b/np.sqrt(len(df4['Adult_population']))
print(a)
print(b)
print(c)
"""
Explanation: Based on these numbers, what percentage of adult Americans would you estimate were Brady Bunch fans on the 50th anniversary of the show?
I would say that the percentage of the adult population that were fans of the Brady Bunch show was 18.5% (the median). I think we cannot use the mean, as we have a 5% value that is low compared to the rest of the values. Hence the mean would be distorted by this value.
Additionally, the 5% SciPhi fans are not representative of the American adult population but of a subset of it. Hence they can be excluded from the sample and the central tendency estimates will be more accurate. (Case 2)
Case 2: Excluding SciPhi fans (5%)
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_artifacts_correction_rejection.ipynb | bsd-3-clause | import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference()
"""
Explanation: Rejecting bad data (channels and segments)
End of explanation
"""
raw.info['bads'] = ['MEG 2443']
"""
Explanation: Marking bad channels
Sometimes some MEG or EEG channels are not functioning properly
for various reasons. These channels should be excluded from
analysis by marking them as bad. This is done by setting the 'bads'
in the measurement info of a data container object (e.g. Raw, Epochs,
Evoked). The info['bads'] value is a Python list. Here is an
example:
End of explanation
"""
# Reading data with a bad channel marked as bad:
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# restrict the evoked to EEG and MEG channels
evoked.pick_types(meg=True, eeg=True, exclude=[])
# plot with bads
evoked.plot(exclude=[])
print(evoked.info['bads'])
"""
Explanation: Why set a channel as bad?: If a channel does not show
a signal at all (flat) it is important to exclude it from the
analysis. If a channel has a noise level significantly higher than the
other channels it should be marked as bad. Presence of bad channels
can have terrible consequences on downstream analysis. For a flat channel
the noise estimate will be unrealistically low and
thus the current estimate calculations will give a strong weight
to the zero signal on the flat channels, and the resulting estimates will essentially vanish.
Noisy channels can also affect others when signal-space projections
or EEG average electrode reference is employed. Noisy bad channels can
also adversely affect averaging and noise-covariance matrix estimation by
causing unnecessary rejections of epochs.
Recommended ways to identify bad channels are:
Observe the quality of data during data
acquisition and make notes of observed malfunctioning channels to
your measurement protocol sheet.
View the on-line averages and check the condition of the channels.
Compute preliminary off-line averages with artifact rejection,
SSP/ICA, and EEG average electrode reference computation
off and check the condition of the channels.
View raw data with :func:mne.io.Raw.plot without SSP/ICA
enabled and identify bad channels.
<div class="alert alert-info"><h4>Note</h4><p>Setting the bad channels should be done as early as possible in the
analysis pipeline. That's why it's recommended to set bad channels
in the raw objects/files. If present in the raw data
files, the bad channel selections will be automatically transferred
to averaged files, noise-covariance matrices, forward solution
files, and inverse operator decompositions.</p></div>
The actual removal happens using :func:pick_types <mne.pick_types> with
exclude='bads' option (see picking_channels).
Instead of removing the bad channels, you can also try to repair them.
This is done by interpolation of the data from other channels.
To illustrate how to use channel interpolation let us load some data.
End of explanation
"""
evoked.interpolate_bads(reset_bads=False)
"""
Explanation: Let's now interpolate the bad channels (displayed in red above)
End of explanation
"""
evoked.plot(exclude=[])
"""
Explanation: Let's plot the cleaned data
End of explanation
"""
eog_events = mne.preprocessing.find_eog_events(raw)
n_blinks = len(eog_events)
# Center to cover the whole blink with full duration of 0.5s:
onset = eog_events[:, 0] / raw.info['sfreq'] - 0.25
duration = np.repeat(0.5, n_blinks)
raw.annotations = mne.Annotations(onset, duration, ['bad blink'] * n_blinks,
orig_time=raw.info['meas_date'])
raw.plot(events=eog_events) # To see the annotated segments.
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Interpolation is a linear operation that can be performed also on
Raw and Epochs objects.</p></div>
For more details on interpolation see the page channel_interpolation.
Marking bad raw segments with annotations
MNE provides an :class:mne.Annotations class that can be used to mark
segments of raw data and to reject epochs that overlap with bad segments
of data. The annotations are automatically synchronized with raw data as
long as the timestamps of raw data and annotations are in sync.
See sphx_glr_auto_tutorials_plot_brainstorm_auditory.py
for a long example exploiting the annotations for artifact removal.
The instances of annotations are created by providing a list of onsets and
offsets with descriptions for each segment. The onsets and offsets are marked
as seconds. onset refers to time from start of the data. offset is
the duration of the annotation. The instance of :class:mne.Annotations
can be added as an attribute of :class:mne.io.Raw.
End of explanation
"""
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
"""
Explanation: It is also possible to draw bad segments interactively using
:meth:raw.plot <mne.io.Raw.plot> (see tut_viz_raw).
As the data is epoched, all the epochs overlapping with segments whose
description starts with 'bad' are rejected by default. To turn rejection off,
use keyword argument reject_by_annotation=False when constructing
:class:mne.Epochs. When working with neuromag data, the first_samp
offset of raw acquisition is also taken into account the same way as with
event lists. For more see :class:mne.Epochs and :class:mne.Annotations.
Rejecting bad epochs
When working with segmented data (Epochs) MNE offers a quite simple approach
to automatically reject/ignore bad epochs. This is done by defining
thresholds for peak-to-peak amplitude and flat signal detection.
In the following code we build Epochs from the Raw object. One of the provided
parameters is named reject. It is a dictionary where every key is a
channel type as a string and the corresponding values are peak-to-peak
rejection parameters (amplitude ranges as floats). Below we define
the peak-to-peak rejection values for gradiometers,
magnetometers and EOG:
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {"auditory/left": 1}
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
baseline = (None, 0) # means from the first instant to t = 0
picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks_meg, baseline=baseline, reject=reject,
reject_by_annotation=True)
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The rejection values can be highly data dependent. You should be careful
when adjusting these values. Make sure not too many epochs are rejected
and look into the cause of the rejections. Maybe it's just a matter
of marking a single channel as bad and you'll be able to save a lot
of data.</p></div>
We then construct the epochs
End of explanation
"""
epochs.drop_bad()
"""
Explanation: We then drop/reject the bad epochs
End of explanation
"""
print(epochs.drop_log[40:45]) # only a subset
epochs.plot_drop_log()
"""
Explanation: And plot the so-called drop log that details the reason for which some
epochs have been dropped.
End of explanation
"""
|
statkraft/shyft-doc | notebooks/nea-example/simulation-configured.ipynb | lgpl-3.0 | # Pure python modules and jupyter notebook functionality
# first you should import the third-party python modules which you'll use later on
# the first line enables that figures are shown inline, directly in the notebook
%pylab inline
import os
import datetime as dt
from os import path
import sys
from matplotlib import pyplot as plt
"""
Explanation: Configured Shyft simulations
Introduction
Shyft provides a toolbox for running hydrologic simulations. As it was designed to work in an operational environment, we've provided several different workflows for running a model simulation. The main concept to be aware of is that while we demonstrate and build on the use of a 'configuration', nearly all simulation functionality is also accessible with pure python through access to the API. This is the encouraged approach to simulation. The use of configurations is intended to be a mechanism of running repeated operational simulations when one is interested in archiving and storing (potentially to a database) the specifics of the simulation.
Below we start with a high level description using a configuration object, and in Part II of the simulation notebooks we describe the approach using the lower level APIs. It is recommended, if you intend to use Shyft for any kind of hydrologic exploration, to become familiar with the API functionality.
This notebook briefly runs through the simulation process for a pre-configured catchment. The following steps are described:
Loading required python modules and setting path to Shyft installation
Configuration of a Shyft simulation
Running a Shyft simulation
Post-processing: Fetching simulation results from the simulator-object.
1. Loading required python modules and setting path to SHyFT installation
Shyft requires a number of different modules to be loaded as part of the package. Below, we describe the required steps for loading the modules, and note that some steps are only required for the use of the jupyter notebook.
End of explanation
"""
# try to auto-configure the path, -will work in all cases where doc and data
# are checked out at same level
shyft_data_path = path.abspath("../../../shyft-data")
if path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:
os.environ['SHYFT_DATA']=shyft_data_path
# shyft should be available either by it's install in python
# or by PYTHONPATH set by user prior to starting notebook.
# This is equivalent to the two lines below
# shyft_path=path.abspath('../../../shyft')
# sys.path.insert(0,shyft_path)
# once the shyft_path is set correctly, you should be able to import shyft modules
import shyft
# if you have problems here, it may be related to having your LD_LIBRARY_PATH
# pointing to the appropriate libboost_python libraries (.so files)
from shyft import api
from shyft.repository.default_state_repository import DefaultStateRepository
from shyft.orchestration.configuration.yaml_configs import YAMLSimConfig
from shyft.orchestration.simulators.config_simulator import ConfigSimulator
"""
Explanation: The Shyft Environment
This next step is highly specific on how and where you have installed Shyft. If you have followed the guidelines at github, and cloned the three shyft repositories: i) shyft, ii) shyft-data, and iii) shyft-doc, then you may need to tell jupyter notebooks where to find shyft. Uncomment the relevant lines below.
If you have a 'system' shyft, or used conda install -s sigbjorn shyft to install shyft, then you probably will want to make sure you have set the SHYFTDATA directory correctly, as otherwise, Shyft will assume the above structure and fail. This has to be done before import shyft. In that case, uncomment the relevant lines below.
note: it is most likely that you'll need to do one or the other.
End of explanation
"""
# set up configuration using *.yaml configuration files
# here is the *.yaml file that configures the simulation:
config_file_path = os.path.abspath("./nea-config/neanidelva_simulation.yaml")
# and here we pass it to the configurator, together with the name of the region
# stated in the simulation.yaml file (here: "neanidelva") which we would like to run
cfg = YAMLSimConfig(config_file_path, "neanidelva")
print(cfg.datasets_config)
# Once we have all the configuration in place (read in from the .yaml files)
# we can start to do the simulation. Here we use the ConfigSimulator class
# to initialize a simulator-object. Shyft has several ways to achieve this
# but the .yaml configs are the most straight forward
simulator = ConfigSimulator(cfg)
# Now the simulator is ready to run!
"""
Explanation: 2. Configuration of a SHyFT simulation
The following shows how to set up a Shyft simulation using the yaml_configs.YAMLSimConfig class. Note that this is a high level approach, providing a working example for a simple simulation. More advanced users will want to eventually make use of direct API calls, as outlined in Part II.
At this point, you may want to have a look at the configuration file used in this example.
```
neanidelva:
region_config_file: neanidelva_region.yaml
model_config_file: neanidelva_model_calibrated.yaml
datasets_config_file: neanidelva_datasets.yaml
interpolation_config_file: neanidelva_interpolation.yaml
start_datetime: 2013-09-01T00:00:00
run_time_step: 86400 # 1 day time step
number_of_steps: 365 # 1 year
region_model_id: 'neanidelva-ptgsk'
#interpolation_id: 2 # this is optional (default 0)
initial_state:
repository:
class: !!python/name:shyft.repository.generated_state_repository.GeneratedStateRepository
params:
model: !!python/name:shyft.api.pt_gs_k.PTGSKModel
tags: []
...
```
The file is structured as follows:
neanidelva is the name of the simulation. Your configuration file may contain multiple "stanzas" or blocks of simulation configurations. You'll see below that we use the name to instantiate a configuration object.
region_config_file points to another yaml file that contains basic information about the region of the simulation. You can explore that file here
model_config_file contains the model parameters. Note that when you are calibrating the model, this is the file that you would put your optimized parameters into once you have completed a calibrations.
datasets_config_file contains details regarding the input datasets and the repositories they are contained in. You can see this file here
interpolation_config_file provides details regarding how the observational data in your catchment or region will be interpolated to the domain of the simulation. If you are using a repository with distributed data, the interpolation is still used. See this file for more details.
The following:
start_datetime: 2013-09-01T00:00:00
run_time_step: 86400 # 1 day time step
number_of_steps: 365 # 1 year
region_model_id: 'neanidelva-ptgsk'
are considered self-explanatory. Note that region_model_id is simply a string name, but it should be unique. We will explain the details regarding initial_state later on in this tutorial.
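Once created, the cfg object exposes the values read from these files as attributes; two of them are used elsewhere in this notebook and can be inspected directly:

```python
print(cfg.region_model_id)   # 'neanidelva-ptgsk', taken from the yaml above
print(cfg.datasets_config)   # the parsed datasets configuration (also printed in the cell above)
```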
End of explanation
"""
#simulator. #try tab completion
n_cells = simulator.region_model.size()
print(n_cells)
"""
Explanation: The simulator and the region_model
It is important to note that the simulator provides a wrapping of underlying API functionality. It is designed to provide a quick and simple interface for conducting runs based on a configuration saved in a .yaml file, or otherwise. Core functionality is contained in the region_model, which is just an instance of a model stack, or what is referred to as a model in the api intro notebook. This is an important concept in Shyft. To understand the framework, one should be familiar with this class.
Before we begin the simulation, one should explore the simulator object with tab completion. As an example, you can see here how to get the number of cells in the region that was set up. This is used later for extracting the data.
Most importantly, understand the simulator has an attribute called region_model. Most of the underlying functionality of the simulator methods are actually making calls to the region_model which is an instantiation of one of Shyft's "model stack" classes. To conduct more advanced simulations one would use this object directly.
End of explanation
"""
simulator.run()
"""
Explanation: 3. Running a SHyFT simulation
Okay, so thus far we have set up our cfg object, which contains most of the information required to run the simulation. We can simply run the simulation using the run method.
End of explanation
"""
help(simulator.run)
"""
Explanation: But this may be too simple. Let's explore the simulator.run method a bit further:
End of explanation
"""
# Here we are going to extract data from the simulator object.
# let's work directly with the `region_model`
region_model = simulator.region_model
print(region_model.catchment_ids)
"""
Explanation: Note that you can pass two parameters to run. To run a simulation, we need a time_axis (length of the simulation), and an initial state. Initially we got both of these from the cfg object (which takes it from the .yaml files). However, in some cases you will likely want to change these and conduct simulations for different periods, or starting from different states. We explore this further in Part II: advanced simulation
4. Post processing and data extraction
You have now completed a simple simulation. You are probably interested in exploring some of the output from the simulation and visualizing the quality of the results. Let's first explore how to access the underlying data produced by the simulation.
Visualizing the discharge for each [sub-]catchment
Recall that we earlier referred to the importance of understanding the region_model. You'll see now that this is where information from the simulation is actually contained, and that the simulator object is more or less a convenience wrapper.
End of explanation
"""
q_1228_ts = simulator.region_model.statistics.discharge([1228])
q_1228_np = simulator.region_model.statistics.discharge([1228]).values
print(type(q_1228_ts))
print(type(q_1228_np))
"""
Explanation: We see here that each sub-catchment in our simulation is associated with a unique ID. These are user defined IDs. In the case of the nea-nidelva simulation, they are taken from the GIS database used to create the example configuration files.
To get data out of the region_model you need to specify which catchments you are interested in evaluating. In the following example we are going to extract the data for each catchment and make a simple plot.
Note that Shyft uses many specialized C++ types. Many of these have methods to convert to the more familiar numpy objects. An example may be the discharge timeseries for a catchment.
End of explanation
"""
# We can make a quick plot of the data of each sub-catchment
fig, ax = plt.subplots(figsize=(20,15))
# plot each catchment discharge in the catchment_ids
for i,cid in enumerate(region_model.catchment_ids):
# a ts.time_axis can be enumerated to it's UtcPeriod,
# that will have a .start and .end of type utctimestamp
# to use matplotlib support for datetime-axis, we convert it to datetime (as above)
ts_timestamps = [dt.datetime.utcfromtimestamp(p.start) for p in region_model.time_axis]
data = region_model.statistics.discharge([int(cid)]).values
ax.plot(ts_timestamps,data, label = "{}".format(region_model.catchment_ids[i]))
fig.autofmt_xdate()
ax.legend(title="Catch. ID")
ax.set_ylabel("discharge [m3 s-1]")
"""
Explanation: Look at the discharge timeseries
As mentioned above, Shyft has its own Timeseries class. This class is quite powerful, and the api-timeseries notebook shows more of the functionality. For now, let's look at some key aspects, and how to create a quick plot of the individual catchment discharge.
End of explanation
"""
cells = region_model.get_cells()
# Once we have the cells, we can get their coordinate information
# and fetch the x- and y-location of the cells
x = np.array([cell.geo.mid_point().x for cell in cells])
y = np.array([cell.geo.mid_point().y for cell in cells])
"""
Explanation: Visualizing the distributed catchment data
An important, but difficult, concept to remember when working with Shyft is that internally there is no 'grid' to speak of. The simulation is vectorized, and each 'cell' represents a spatial area with its own area and geolocation information. Therefore, we cannot just load a datacube of data, as some may be familiar with.
Visualization of this data is a bit more complex, because each individual cell is in practice an individual polygon. Depending on how the data has been configured for Shyft (see region_model), the cells may, in fact, be simple squares or more complex shapes. For the visualization below, we simply treat them as uniform size, and plot them with the scatter function in matplotlib.
Extract data for individual simulation cells
We'll start by looking at values of individual cells, rather than at the catchment level. Since Shyft does not have an underlying 'raster' model, you need to fetch all cells directly from the underlying region_model.
End of explanation
"""
# let's create the mapping of catchment_id to an integer:
catchment_ids = region_model.catchment_ids
cid_z_map = dict([ (catchment_ids[i],i) for i in range(len(catchment_ids))])
# then create an array the same length as our 'x' and 'y', which holds the
# integer value that we'll use for the 'z' value
catch_ids = np.array([cid_z_map[cell.geo.catchment_id()] for cell in cells])
# and make a quick catchment map...
# using a scatter plot of the cells
fig, ax = plt.subplots(figsize=(15,5))
cm = plt.cm.get_cmap('rainbow')
plot = ax.scatter(x, y, c=catch_ids, marker='.', s=40, lw=0, cmap=cm)
plt.colorbar(plot).set_label('zero-based mapping(proper map tbd)')
"""
Explanation: We also will need to get a 'z' value to make things interesting. Since this is the first time we've visualized our catchment, let's make a map of the sub-catchments. To do this, the first thing we need to do is get the membership of each cell, that is, to which catchment it belongs. We do this by extracting the catchment_id of each cell -- and this is what we'll map. The result will be a map of the sub-catchments.
Recall from above we extracted the catchment_id_map from the region_model:
# mapping of internal catch ID to catchment
catchment_id_map = simulator.region_model.catchment_id_map
We could just use the catchment_id as the 'z' value, but since this could be a string, we'll take a different approach. We'll assign a unique integer to each catchment_id and plot those (it is also easier for the color bar scaling).
End of explanation
"""
#first, set a date: year, month, day, (hour of day if hourly time step)
oslo = api.Calendar('Europe/Oslo') # specifying input calendar in Oslo tz-id
time_x = oslo.time(2014,5,15) # the oslo calendar(incl dst) converts calendar coordinates Y,M,D.. to its utc-time
# we need to get the index of the time_axis for the time
try:
idx = simulator.region_model.time_axis.index_of(time_x) # index of time x on time-axis
except:
print("Date out of range, setting index to 0")
idx = 0
# fetching SCA (the response variable is named "snow_sca")
# You can use tab-completion to explore the `rc`, short for "response collector"
# object of the cell, to see further response variables available.
# specifying empty list [] indicates all catchments, otherwise pass catchment_id
sca = simulator.region_model.gamma_snow_response.sca([],idx)
"""
Explanation: Visualizing the Snow Cover Area of all cells for a certain point in time
Here we'll do some more work to look at a snapshot value of data in each of the cells. This example is collecting the response variable (here the Snow Cover Area (SCA)) for each of the cells for a certain point of time.
The "response collector" is another concept within Shyft that is important keep in mind. We don't collect and store responses for every variable, in order to keep the simulation memory use lean. Therefore, depending on your application, it may be required to explicitly enable this. The relevant code is found in region_model.h in the C++ core source code.
For the ConfigSimulator class, which we used to instantiate the simulator, a standard collector is used that will provide access to the most relevant variables.
For a model run during calibration, we use a collector that just does the required minimum for the calibration. And it is still configurable: we can turn the snow collection on or off, so if we don't calibrate for snow, the snow variables are not collected. More on calibration is shown in the tutorial: Calibration with Shyft
The state collector used for the 'highspeed' calibration models (C++), is a null-collector, so no memory allocated, and no cpu-time used.
End of explanation
"""
# for attr in dir(simulator.region_model):
# if attr[0] is not '_': #ignore privates
# print(attr)
# # and don't forget:
# help(simulator.region_model.gamma_snow_state)
"""
Explanation: Let's take a closer look at this...
simulator.region_model.time_axis.index_of(time_x)
This simply provides an index value that we can use to index the cells for the time we're interested in looking at.
Next we use:
simulator.region_model.gamma_snow_response
What is this? This is a collector from the simulation. In this case, for the gamma_snow routine. It contains a convenient method to access the response variables from the simulation on a per catchment level. Each response variable (outflow, sca, swe) can be called with two arguments. The first a list of the catchments, and the second an index to the time, as shown above. Note, this will return the values for each cell in the sub-catchment. Maybe one is only interested in the total outflow or total swe for the region. In this case you can use: .outflow_value which will return a single value.
There is also a response collector for the state variables: .gamma_snow_state.
Explore both of these further with tab completion or help, as well as the full region_model, to see what other algorithm collectors are available as this example is configured.
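For instance, to get just the region-wide total mentioned above, a small sketch (assuming .outflow_value takes the same catchment-list and time-index arguments as .sca in the cell above):

```python
total_outflow = simulator.region_model.gamma_snow_response.outflow_value([], idx)
print(total_outflow)
```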
End of explanation
"""
# We can make a simple scatter plot again for quick visualization
fig, ax = plt.subplots(figsize=(15,5))
cm = plt.cm.get_cmap('winter')
plot = ax.scatter(x, y, c=sca,
vmin=0, vmax=1,
marker='s', s=40, lw=0,
cmap=cm)
plt.colorbar(plot)
plt.title('Snow Covered area of {0} on {1}'.format(cfg.region_model_id, oslo.to_string(time_x)))
"""
Explanation: We are now ready to explore some of the variables from the simulation. We'll continue on with SCA.
End of explanation
"""
# look at the catchment-wide average:
nea_avg_sca = np.average(sca)
print("Average SCA for Nea Nidelva: {0}".format(nea_avg_sca))
# And let's compute histogram of the snow covered area as well
fig, ax = plt.subplots()
ax.hist(sca, bins=20, range=(0,1), color='y', alpha=0.5)
ax.set_xlabel("SCA of grid cell")
ax.set_ylabel("frequency")
"""
Explanation: A note about the geometry of the region
Again, keep in mind that while we have created a variable that contains the values for sca in each cell, this is only an iterable object. The only reason we know where each value is located is because we have corresponding x and y values for each cell. It is not an array.
We can calculate some statistics directly out of sca:
End of explanation
"""
|
sevo/closure_decorator | Other functional features.ipynb | mit | def add(a, b):
return a + b
def make_adder(a) :
def adder(b) :
return add(a, b)
return adder
add_two = make_adder(20)
add_two(4)
"""
Explanation: 1. Partial function application
2. Pattern matching
Partial application - partially applied functions
http://blog.dhananjaynene.com/tags/functional-programming/
Partial application transforms a function with some number of parameters into another function with fewer parameters.
In other words, it fixes some of the parameters.
f:(X × Y × Z) → N
partial(f):(Y × Z) → N
Yesterday I hinted at how something like this can be done with a closure.
End of explanation
"""
def make_power(exponent):
def power(x):
return x**exponent
return power
square = make_power(2)
print(square(3))
print(square(30))
square(300)
"""
Explanation: Another example
End of explanation
"""
from functools import partial
def power(base, exponent):
return base ** exponent
cube = partial(power, 3)
cube(2)
def power(base, exponent):
return base ** exponent
cube = partial(power, exponent=3)
cube(2)
"""
Explanation: The functools package has a function for this, which makes defining such functions even more convenient.
End of explanation
"""
basetwo = partial(int, base=2)
basetwo('111010101')
"""
Explanation: Another example: a customized int constructor
End of explanation
"""
import sys
from functools import partial
print_stderr = partial(print, file=sys.stderr)
print_stderr("pokus")
"""
Explanation: The problem is that almost all the examples you find on the internet are about building power functions or something similarly trivial.
Let's try something simple but more practical:
for example, a function that prints to a particular file, such as the standard error stream.
End of explanation
"""
# print_stderr = partial(print, file=sys.stderr)
print_stderr = lambda x: print(x, file=sys.stderr)
print_stderr('hahahhaha')
"""
Explanation: I could achieve the same thing with a decorator, a lambda, or a closure, but this is probably the simplest way.
As a recap, let's program some of those alternatives ourselves.
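One of those alternatives, the closure version of print_stderr (my own sketch, not from the original notebook):

```python
def make_stderr_printer():
    def print_to_stderr(*args, **kwargs):
        # the file argument is fixed once; everything else is passed through
        print(*args, file=sys.stderr, **kwargs)
    return print_to_stderr

print_stderr = make_stderr_printer()
print_stderr("pokus")
```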
End of explanation
"""
for text in lines:
if re.search('[a-zA-Z]\=', text):
some_action(text)
elif re.search('[a-zA-Z]\s\=', text):
some_other_action(text)
else:
some_default_action()
"""
Explanation: Let's use partial application to refactor code like this.
End of explanation
"""
def is_grouped_together(text): # try turning this one into a partial
    return re.search("[a-zA-Z]\=", text)
def is_spaced_apart(text):
return re.search("[a-zA-Z]\s\=", text)
def and_so_on(text):
return re.search("pattern_188364625", text)
for text in lines:
if is_grouped_together(text):
some_action(text)
elif is_spaced_apart(text):
some_other_action(text)
else:
some_default_action()
"""
Explanation: The regular expressions can be pulled out into functions.
End of explanation
"""
is_spaced_apart = partial(re.search, '[a-zA-Z]\s\=')
is_grouped_together = partial(re.search, '[a-zA-Z]\=')
for text in lines:
if is_grouped_together(text):
some_action(text)
elif is_spaced_apart(text):
some_other_action(text)
else:
some_default_action()
"""
Explanation: Do you see the code duplication there?
How would the whole thing look after the rewrite?
End of explanation
"""
class Tovar:
def __init__(self, typ, mnozstvo=0):
self.typ=typ
self.mnozstvo=mnozstvo
def write(self):
return '{}: {}'.format(self.typ, self.mnozstvo)
nakup_jablk = Tovar('jablka', 3)
print(nakup_jablk.write())
Jablko = partial(Tovar, 'jablka')
Jablko(4).write()
"""
Explanation: More examples of using partial when refactoring:
http://chriskiehl.com/article/Cleaner-coding-through-partially-applied-functions/
And why not use it to build a specialized constructor?
End of explanation
"""
import pyrsistent as ps
def Tovar(typ, mnozstvo):
def write():
return '{}: {}'.format(typ, mnozstvo)
return ps.freeze({'write': write})
Jablko = partial(Tovar, 'jablka')
Jablko(5).write()
"""
Explanation: The same would also work for an "object" created with a closure.
End of explanation
"""
def query_database(userid, password, query):
    # do query
    # return results
    pass
def bar(userid, password):
    # only here do we finally know the actual query text
    return query_database(userid, password, "SELECT ...")
def foo(userid, password):
    return bar(userid, password)
def main(userid, password):
    # .. lot of code here .. eventually reaching
    foo(userid, password)
"""
Explanation: This way you can create several constructors for the same class.
Nothing stops you from creating a constructor for a special kind of logger, or for an object that reads a particular kind of file.
You don't have to keep repeating the same parameters in every constructor or function call.
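A small illustration of that idea (my own example, not from the original slides): a pre-configured reader for semicolon-separated CSV files, with the delimiter fixed once via partial.

```python
import csv
from functools import partial

read_semicolon_csv = partial(csv.reader, delimiter=';')

# 'data.csv' is a hypothetical file used only for this illustration
with open('data.csv', newline='') as f:
    for row in read_semicolon_csv(f):
        print(row)
```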
You can use it not only for specialization, but also to separate supplying a function's parameters from executing it.
How many times has it happened that you knew early on in a program which function you would eventually have to call, and even part of its arguments, but you had to wait until some later point to get the rest, and so you had to pass the parameters around together with the function (or the object the method lived on)?
If you could build a function with some of its parameters preset, you would only need to pass that one function around, and you wouldn't have to carry all the parameters to the place where the call finally happens.
End of explanation
"""
def get_query_agent(userid, password):
    def do_query(query):
        # do query using the captured userid/password
        # return results
        pass
    return do_query
def bar(querying_func):
    # only here do we finally know the actual query text
    return querying_func("SELECT ...")
def foo(querying_func):
    return bar(querying_func)
def main(userid, password):
    query_agent = get_query_agent(userid, password)
    # .. much further down the line
    foo(query_agent)
"""
Explanation: This is how it could be done using partial application by means of a nested function.
End of explanation
"""
# -- JAVA --
static void print(Fruit f) {
sysout("Hello Fruit");
}
static void print(Banana b) {
sysout("Hello Banana");
}
// the variable's declared type is Fruit, the runtime object is a Banana
Fruit fruit = new Banana();
print(fruit);
"""
Explanation: Now for a cool functional feature that Python does not have:
Pattern matching
Multimethods
Multiple dispatch
Multiple dispatch (and poor men's patter matching) in Java
http://blog.efftinge.de/2010/03/multiple-dispatch-and-poor-mens-patter.html (I am including the link mainly because of the article's title :)
End of explanation
"""
def pokus(a):
print('pokus1')
def pokus():
print('pokus2')
pokus()
"""
Explanation: That was not multiple dispatch. That was overloading, because the decision is made at compile time.
That is why it would print "Hello Fruit", based on the declared type of the variable, and not "Hello Banana", based on the runtime type of the object.
Multiple dispatch decides dynamically, based on the object.
I would get dynamic dispatch if, for example, print were a method of the object.
Unfortunately, Python has neither multiple dispatch nor overloading.
It makes no sense to define two functions with the same name:
End of explanation
"""
def pokus(a:str, b:list):
print('pokus1')
def pokus(b:int):
print('pokus2')
pokus('3', [])
"""
Explanation: And it doesn't matter whether they have the same number of parameters or not. Even declaring the types with annotations in Python 3 doesn't help.
I always just overwrite one function with the other.
Which one should be used is never decided based on the arguments (as it is, for example, in Java).
End of explanation
"""
# it often happens that code ends up looking like this
def foo(a, b):
    if isinstance(a, int) and isinstance(b, int):
        pass  # ...code for two ints...
    elif isinstance(a, float) and isinstance(b, float):
        pass  # ...code for two floats...
    elif isinstance(a, str) and isinstance(b, str):
        pass  # ...code for two strings...
    else:
        raise TypeError("unsupported argument types (%s, %s)" % (type(a), type(b)))
"""
Explanation: The standard library simply does not provide the means to define several functions with the same name and decide, based on the arguments, which one should be called.
This also applies to methods.
For example, we cannot even define a class method and an instance method with the same name :(
Many people have already thought that something like this would be quite cool and have made attempts to bring it into the language:
http://www.grantjenks.com/docs/pypatt-python-pattern-matching/
https://github.com/lihaoyi/macropy - module import
https://github.com/Suor/patterns - decorator with funky syntax - Shared at Python Brazil 2013
https://github.com/mariusae/match - http://monkey.org/~marius/pattern-matching-in-python.html - operator overloading
http://blog.chadselph.com/adding-functional-style-pattern-matching-to-python.html - multi-methods
http://svn.colorstudy.com/home/ianb/recipes/patmatch.py - multi-methods
http://www.artima.com/weblogs/viewpost.jsp?thread=101605 - the original multi-methods
http://speak.codebunk.com/post/77084204957/pattern-matching-in-python - multi-methods supporting callables
http://www.aclevername.com/projects/splarnektity/ - not sure how it works but the syntax leaves a lot to be desired
https://github.com/martinblech/pyfpm - multi-dispatch with string parsing
https://github.com/jldupont/pyfnc - multi-dispatch
http://www.pyret.org/ - It’s own language
None of these libraries is as good as a feature fully built into a functional language, but with this modest example I will at least try to show what could be done with something like it.
Multimethods
Even Guido van Rossum noticed that this could be quite nice:
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
End of explanation
"""
registry = {}
class MultiMethod(object):
def __init__(self, name):
self.name = name
self.typemap = {}
def __call__(self, *args):
types = tuple(arg.__class__ for arg in args) # a generator expression!
function = self.typemap.get(types)
if function is None:
raise TypeError("no match")
return function(*args)
def register(self, types, function):
if types in self.typemap:
raise TypeError("duplicate registration")
self.typemap[types] = function
def multimethod(*types):
def register(function):
name = function.__name__
mm = registry.get(name)
if mm is None:
mm = registry[name] = MultiMethod(name)
mm.register(types, function)
return mm
return register
@multimethod(int, int)
def foo(a, b):
print('int int')
@multimethod(float, float)
def foo(a, b):
print('float float')
@multimethod(str, str)
def foo(a, b):
print('str str')
foo(1,1)
"""
Explanation: Wouldn't it look much better like this?
You don't see this slide in the presentation.
It is here only so that the code on the next slide is executable. It is the code with which I insert the desired functionality into the language.
End of explanation
"""
from patternmatching import ifmatches, Any, OfType, Where
@ifmatches
def greet(gender=OfType(str), name="Joey"):
print("Joey, whats up man?")
@ifmatches
def greet(gender="male", name=Any):
print("Hello Mr. {}".format(name))
@ifmatches
def greet(gender="female", name=Any):
print("Hello Ms. {}".format(name))
@ifmatches
def greet(gender=Any, name=Where(str.isupper)):
print("Hello {}. IMPORTANT".format("Mr" if gender == 'male' else "Ms"))
@ifmatches
def greet(gender=Any, name=Any):
print("Hello, {}".format(name))
greet('male', 'JAKUB')
"""
Explanation: What do we need for this?
A decorator that stores the functions and their parameter types in some structure.
We need a check to determine which function is the right one.
The decorator must return a function that looks into the structure of registered functions, checks one by one whether the types and the number of arguments match, and then calls exactly one of them.
The whole thing is less than 20 lines (whoever is interested can look a few slides up to see how it can be done).
Limitations?
It does not work with keyword arguments.
A variable number of arguments cannot be used.
Arguments are compared only by their types. I can think of a million ways in which I would like to compare arguments in a more sophisticated manner.
Maybe another implementation will give me more freedom:
http://blog.chadselph.com/adding-functional-style-pattern-matching-to-python.html
End of explanation
"""
from patterns import patterns, Mismatch
@patterns
def factorial():
if 0: 1
if n is int: n * factorial(n-1)
if []: []
if [x] + xs: [factorial(x)] + factorial(xs)
if {'n': n, 'f': f}: f(factorial(n))
factorial(0)
factorial(5)
factorial([3,4,2])
factorial({'n': [5, 1], 'f': sum})
factorial('hello')
"""
Explanation: And finally, one last library, with an interesting syntax:
https://github.com/Suor/patterns
End of explanation
"""
|
mikekestemont/ghent1516 | Chapter 8 - Parsing XML.ipynb | mit | from lxml import etree
"""
Explanation: Parsing XML in Python
XML in a nutshell
So far, we have primarily dealt with unstructured data in this course: we have learned to read, for example, the contents of plain text files in the previous chapters. Such raw textual data is often called 'unstructured', because it lacks annotations that make explicit the function or meaning of the words in the documents. If we read the contents of a play as a plain text, for instance, we don't have a clue to which scene or act a particular utterance belongs, or by which character the utterance was made. Nowadays, it is therefore increasingly common to add annotations to a text that give us a better insight into the
semantics and structure of the data. Adding annotations to texts (e.g. scholarly editions of Shakepeare), is typically done using some form of markup. Various markup-languages exist that allow us to provide structured and unambiguous annotations to a (digital) text. XML or the "eXtensible Mark-up Language" is currently one of the dominant standards to encode texts in the Digital Humanities. In this chapter, we'll assume that have at least some notion of XML, although we will have a quick refresher below. XML is a pretty straightforward mark-up language: let's have a look at Shakepeare's well-known sonnet 18 encoded in XML (you can find this poem also as sonnet.txt in your data/TEI folder).
<?xml version="1.0"?>
<sonnet author="William Shakepeare" year="1609">
<line n="1">Shall I compare thee to a summer's <rhyme>day</rhyme>?</line>
<line n="2">Thou art more lovely and more <rhyme>temperate</rhyme>:</line>
<line n="3">Rough winds do shake the darling buds of <rhyme>May</rhyme><break n="3"/>,</line>
<line n="4">And summer's lease hath all too short a <rhyme>date</rhyme>:</line>
<line n="5">Sometime too hot the eye of heaven <rhyme>shines</rhyme>,</line>
<line n="6">And often is his gold complexion <rhyme>dimm'd</rhyme>;</line>
<line n="7">And every fair from fair sometime <rhyme>declines</rhyme>,</line>
<line n="8">By chance, or nature's changing course, <rhyme>untrimm'd</rhyme>;</line>
<volta/>
<line n="9">But thy eternal summer shall not <rhyme>fade</rhyme></line>
<line n="10">Nor lose possession of that fair thou <rhyme>ow'st</rhyme>;</line>
<line n="11">Nor shall Death brag thou wander'st in his <rhyme>shade</rhyme>,</line>
<line n="12">When in eternal lines to time thou <rhyme>grow'st</rhyme>;</line>
<line n="13">So long as men can breathe or eyes can <rhyme>see</rhyme>,</line>
<line n="14">So long lives this, and this gives life to <rhyme>thee</rhyme>.</line>
</sonnet>
The first line in our Shakespearean example (<?xml version="1.0"?>) declares which exact version of XML we are using, in our case version 1. As you can see at a glance, XML typically encodes pieces of text using start tags (e.g. <line>, <rhyme>) and end tags (</line>, </rhyme>). Each start tag must correspond to exactly one end tag, although XML does allow for "solo" elements such the <volta/> tag after line 8 in this example. Interestingly, XML tag are not allowed to overlap. The following line would therefore not constitue valid XML:
<line><sentence>This is a </line><line>sentence.</sentence></line>
The following two lines would be valid alternatives for this example, because here the tags don't overlap:
<sentence><line>This is a </line><line>sentence.</line></sentence>
<sentence>This is a <linebreak/>sentence.</sentence>
This limitation has to do with the fact that XML is a hierarchical markup language: it assumes that we can describe a text document as a tree of branching nodes. In this tree, elements cannot have more than one direct parent element (otherwise the hierarchy would be ambiguous). The one exception is the so-called root element, which as the highest node in the tree does not have a parent element itself. Logically speaking, all this entails that a valid XML document can only have a single root element. Note that all non-root elements can have as many siblings as we wish. All the <line> elements in our sonnet, for example, are siblings, in the sense that they have in common a direct parent element, i.e. the <sonnet> tag. Finally, note that we can add extra information to our elements using so-called attributes. The n attribute, for example, gives us the line number for each line in the sonnet, surrounded by double quotation marks. The <sonnet> element illustrates that we can add as many attributes as we want to a tag.
XML and Python
Researchers in the Digital Humanities nowadays often put a lot of time and effort in creating digital data sets for their research, such as scholarly editions with a rich markup encoded in XML. Nevertheless, once this data has been annotated, it can be tricky to get your texts out again, so to speak, and fully exploit the information which you painstakingly encoded. Therefore, it is crucial to be able to parse XML in an efficient manner. Luckily, Python provides all the tools necessary to do this. We will make use of the lxml library, which is part of the Anaconda Python distribution:
End of explanation
"""
tree = etree.parse("data/TEI/sonnet18.xml")
print(tree)
"""
Explanation: For the record, we should mention that there exist many other libraries in Python to parse XML, such as minidom or BeautifulSoup which is an interesting library, when you intend to scrape data from the web. While these might come with more advanced bells and whistles than lxml, they can also be more complex to use, which is why we stick to lxml in this course. Let us now import our sonnet in Python, which has been saved in the file sonnet18.xml:
End of explanation
"""
print(etree.tostring(tree))
"""
Explanation: Python has now read and parsed our xml-file via the etree.parse() function. We have stored our XML tree structure, which is returned by the parse() function, in the tree variable, so that we can access it later. If we print tree as such, we don't get a lot of useful information. To have a closer look at the XML in a printable text version, we need to call the tostring() method on the tree before printing it.
End of explanation
"""
print(etree.tostring(tree).decode())
"""
Explanation: You'll notice that we actually get a string in a raw format: if we want to display it properly, we have to decode it:
End of explanation
"""
print(etree.tostring(tree, pretty_print=True).decode())
"""
Explanation: If we have more complex data, it might also be useful to set the pretty_print parameter to True, to obtain a more beautifully formatted string, with Python taking care of indentation etc. In our example, it doesn't change much:
End of explanation
"""
for node in tree.iterfind("//rhyme"):
print(node)
"""
Explanation: Now let us start processing the contents of our file. Suppose that we are not really interested in the full hierarchical structure of our file, but just in the rhyme words occurring in it. The high-level function iterfind() allows us to easily select all rhyme elements in our tree, regardless of where exactly they occur. Because this function returns a sequence of nodes, we can simply loop over them:
End of explanation
"""
for node in tree.iterfind("//rhyme"):
print(node.tag)
"""
Explanation: Note that the search expression ("//rhyme") has two forward slashes before our actual search term. This is in fact XPath syntax, and the two slashes indicate that the search term can occur anywhere (e.g. not necessarily among a node's direct children). Unfortunately, printing the nodes themselves again isn't really insightful: in this way, we only get rather prosaic information of the Python objects holding our rhyme nodes. We can use the .tag property to print the tag's name:
End of explanation
"""
for node in tree.iterfind("//rhyme"):
print(node.text)
"""
Explanation: To extract the actual rhyme word contained in the element, we can use the .text property of the nodes:
End of explanation
"""
root_node = tree.getroot()
print(root_node.tag)
"""
Explanation: That looks better!
Just now, we have been iterating over our rhyme elements in simple order of appearance: we haven't really been exploiting the hierarchy of our XML file yet. Let's see now how we can navigate our xml tree. Let's first select our root node: there's a function for that!
End of explanation
"""
print(root_node.attrib["author"])
print(root_node.attrib["year"])
"""
Explanation: We can access the value of the attributes of an element via .attrib, just like we would access the information in a Python dictionary, that is via key-based indexing. We know that our sonnet element, for instance, should have an author and year attribute. We can inspect the value of these as follows:
End of explanation
"""
for key in root_node.attrib.keys():
print(root_node.attrib[key])
"""
Explanation: If we didn't know which attributes were in fact available for a node, we could also retrieve the attribute names by calling keys() on the attributes property of a node, just like we would do with a regular dictionary:
End of explanation
"""
print(len(root_node))
"""
Explanation: So far so good. Now that we have selected our root element, we can start drilling down our tree's structure. Let us first find out how many child nodes our root element has:
End of explanation
"""
for node in root_node:
print(node.tag)
"""
Explanation: Our root node turns out to have 15 child nodes, which makes a lot of sense, since we have 14 line elements and the volta. We can actually loop over these children, just as we would loop over any other list:
End of explanation
"""
for node in root_node:
if node.tag != "volta":
line_text = ""
for text in node.itertext():
line_text = line_text + text
print(line_text)
else:
print("=== Volta found! ===")
"""
Explanation: To extract the actual text in our lines, we need one additional for-loop which will allow us to iterate over the pieces of text under each line:
End of explanation
"""
for node in root_node:
if node.tag == "line":
print(node.attrib["n"])
"""
Explanation: Note that we get an empty line at the volta, since there isn't any actual text associated with this empty tag.
Quiz!
Could you now write your own code, which iterates over the lines in our tree and prints out the line number based on the n attribute of the line element?
End of explanation
"""
root_node = tree.getroot()
root_node.attrib["author"] = "J.K. Rowling"
root_node.attrib["year"] = "2015"
root_node.attrib["new_element"] = "dummy string!"
root_node.attrib["place"] = "maynooth"
print(etree.tostring(root_node).decode())
"""
Explanation: Manipulating XML in Python
So far, we have parsed XML in Python, but we haven't dealt with creating or manipulating XML yet. Luckily, adapting or creating XML is fairly straightforward in Python. Let's first try and change the author's name in the author attribute of the sonnet. Because this boils down to manipulating a Python dictionary, the syntax should already be familiar to you:
End of explanation
"""
root_node.attrib["year"] = "2015"
"""
Explanation: That was easy, wasn't it? Did you see we can just add new attributes to an element? Just take care only to put strings as attribute values: since we are working with XML, Python won't accept e.g. numbers and you will get an error:
End of explanation
"""
break_el = etree.Element("break")
break_el.attrib["author"] = "Mike"
print(etree.tostring(break_el).decode())
"""
Explanation: Adding whole elements is fairly easy too. Let's add a single dummy element (<break/>) to indicate a line break at the end of each line. Importantly, we have to create this element inside our loop, before we can add it:
End of explanation
"""
for node in tree.iterfind("//line"):
break_el = etree.Element("break")
node.append(break_el)
print(etree.tostring(tree).decode())
"""
Explanation: You'll notice that we actually created an empty <break/> tag. Now, let's add it at the end of each line:
End of explanation
"""
break_el = etree.Element("break")
print(etree.tostring(break_el).decode())
break_el.text = "XXX"
print(etree.tostring(break_el).decode())
"""
Explanation: Adding an element with actual content is just as easy by the way:
End of explanation
"""
tree = etree.parse("data/TEI/sonnet18.xml")
root_node = tree.getroot()
for node in root_node:
if node.tag == "line":
v = node.attrib["n"]
break_el = etree.Element("break")
break_el.attrib["n"] = v
node.append(break_el)
print(etree.tostring(tree).decode())
"""
Explanation: Quiz
The <break/> element is still empty: could you add to it an n attribute, to which you assign the line number from the current <line> element?
End of explanation
"""
tree = etree.parse("data/TEI/sonnet17.xml")
print(etree.tostring(tree).decode())
"""
Explanation: Python for TEI
In Digital Humanities, you hear a lot about the TEI nowadays, or the Text Encoding Initiative (tei-c.org). The TEI refers to an initiative which has developed a highly influential "dialect" of XML for encoding texts in the Humanities. The beauty about XML is that tag names aren't predefined and you can invent your own tags and attributes. Our Shakespearean example could just as well have read:
<?xml version="1.0"?>
<poem writer="William Shakepeare" date="1609">
<l nr="1">Shall I compare thee to a summer's <last>day</last>?</l>
<l nr="2">Thou art more lovely and more <last>temperate</last>:</l>
<l nr="3">Rough winds do shake the darling buds of <last>May</last>,</l>
<l nr="4">And summer's lease hath all too short a <last>date</last>:</l>
<l nr="5">Sometime too hot the eye of heaven <last>shines</last>,</l>
<l nr="6">And often is his gold complexion <last>dimm'd</last>;</l>
<l nr="7">And every fair from fair sometime <last>declines</last>,</l>
<l nr="8">By chance, or nature's changing course, <last>untrimm'd</last>;</l>
<break/>
<l nr="9">But thy eternal summer shall not <last>fade</last></l>
<l nr="10">Nor lose possession of that fair thou <last>ow'st</last>;</l>
<l nr="11">Nor shall Death brag thou wander'st in his <last>shade</last>,</l>
<l nr="12">When in eternal lines to time thou <last>grow'st</last>;</l>
<l nr="13">So long as men can breathe or eyes can <last>see</last>,</l>
<l nr="14">So long lives this, and this gives life to <last>thee</last>.</l>
</poem>
As you can see, all the tag and attribute names are different in this version, but the essential structure is still the same. You could therefore say that XML is a markup language which provides a syntax to talk about texts, but does not come with a default semantics. This freedom in choosing tag names etc. can also be a bit daunting: this is why the TEI provides Guidelines as to how tag names etc. can be used to mark up specific phenomena in texts. The TEI therefore also refers to a rather bulky set of guidelines as to which tags could be used to properly encode a text. Below, we read in a fairly advanced example of Shakespeare's 17th sonnet encoded in TEI (note the use of the <TEI> tag as our root node!). Even the metrical structure has been encoded, as you will see, so this can be considered an example of "TEI on steroids".
End of explanation
"""
# add your parsing code here...
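# A possible solution sketch (not the official answer to the quiz below). It assumes
# the verse lines in sonnet17.xml are encoded as <l> elements, as the TEI Guidelines
# suggest; check the XML printed above and adjust the tag name if it differs.
tree = etree.parse("data/TEI/sonnet17.xml")
with open("sonnet17.txt", "w") as outfile:
    for node in tree.iter():
        if not isinstance(node.tag, str):
            continue  # skip comments and processing instructions
        if etree.QName(node).localname == "l":  # localname ignores a possible TEI namespace
            line_text = "".join(node.itertext()).strip()
            outfile.write(line_text + "\n")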
"""
Explanation: Quiz
Processing TEI in Python is really just processing XML in Python, the dark art which you already learned to master above! Let's try and practice the looping techniques we introduced above. Could you provide code which parses the xml and writes the lines in this poem to a plain text file, with one verse line per line in the new file?
End of explanation
"""
import os
dirname = "data/TEI/french_plays/"
for filename in os.listdir(dirname):
if filename.endswith(".xml"):
print(filename)
"""
Explanation: A hands-on case study: French plays
OK, it's time to get your hands even more dirty. For textual analyses, there are a number of great datasets out there which have been encoded in rich XML. One excellent resource which we have recently worked with can be found at theatre-classique.fr: this website holds an extensive collection of French plays from the Classical and Enlightenment era in France. Some of the plays have been authored by some of France's finest authors, such as Molière or Pierre and Thomas Corneille. What is interesting about this resource is that it provides a very rich XML markup: apart from extensive metadata on the play and detailed descriptions of the actors involved, the actual lines have been encoded in such a manner that we know exactly which character uttered a particular line, or to which scene or act a line belongs. This allows us to perform much richer textual analyses than if we only had a raw text version of the plays. We have collected a subset of these plays for you under the data/TEI directory:
End of explanation
"""
for filename in os.listdir(dirname):
if filename.endswith(".xml"):
print("*****")
print("\t-", filename)
tree = etree.parse(dirname+filename)
author_element = tree.find("//author") # find vs iterfind!
print("\t-", author_element.text)
title_element = tree.find("//title")
print("\t-", title_element.text)
"""
Explanation: OK: under this directory, we appear to have a bunch of XML files, but their titles are just numbers, which doesn't tell us a lot. Let's have a look at the title and author tags in these files:
End of explanation
"""
# your code goes here
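# A rough sketch for the first of the exercises described for this cell (average
# number of characters staged per play, for each brother). It assumes every character
# is listed as a <role> element inside the <castList> mentioned in the hint; inspect
# one of the files and adjust the tag names if they differ.
from collections import defaultdict

characters_per_play = defaultdict(list)
for filename in os.listdir(dirname):
    if filename.endswith(".xml"):
        tree = etree.parse(dirname + filename)
        author = tree.find("//author").text
        roles = list(tree.iterfind("//role"))
        characters_per_play[author].append(len(roles))

for author, counts in characters_per_play.items():
    print(author, sum(counts) / float(len(counts)))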
"""
Explanation: As you can see, we have made you a nice subset selection of this data, containing only texts by the famous pair of brothers: Pierre and Thomas Corneille. We have provided a number of exercises in which you can practice your newly developed XML skills. In each of the fun little tasks below, you should compare the dramas of our two famous brothers:
* how many characters does each brother on average stage in a play?
* which brother has the highest vocabulary richness?
* which brother uses the lengthiest speeches per character on average?
* which brother gives most "speech time" to women, expressed in number of words (hint: you can derive a character's gender from the <castList> in most plays!)
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation:
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Tutorials/Updating Plots.ipynb | apache-2.0 | import numpy as np
import bqplot.pyplot as plt
x = np.linspace(-10, 10, 100)
y = np.sin(x)
fig = plt.figure()
line = plt.plot(x=x, y=y)
fig
"""
Explanation: Updating Plots
bqplot is an interactive plotting library. Attributes of plots can be updated in place without recreating the whole figure and marks. Let's look at idiomatic ways of updating plots in bqplot
End of explanation
"""
# update y attribute of the line object
line.y = np.tan(x)
"""
Explanation: To update the attributes of the plot (x, y, color, etc.), the correct way to do it is to update the attributes of the mark objects in place. Recreating figure or mark objects is not recommended.
End of explanation
"""
# update both x and y together
with line.hold_sync():
line.x = np.arange(100)
line.y = x ** 3 - x
"""
Explanation: We can update multiple attributes of the mark object simultaneously by using the hold_sync method like so. (This makes only one round trip from the python kernel to front end)
End of explanation
"""
fig.animation_duration = 1000
line.y = np.cos(x)
"""
Explanation: We can also animate the changes to the x, y and other data attributes by setting the animation_duration property on the figure object. More examples of animations can be found in the Animations notebook.
End of explanation
"""
x, y = np.random.rand(2, 10)
fig = plt.figure(animation_duration=1000)
scat = plt.scatter(x=x, y=y)
fig
# update the x and y attributes in place using hold_sync
with scat.hold_sync():
scat.x, scat.y = np.random.rand(2, 10)
"""
Explanation: Let's look at an example to update a scatter plot
End of explanation
"""
|
thempel/adaptivemd | examples/tutorial/2_example_run.ipynb | lgpl-2.1 | import sys, os
from adaptivemd import Project, Event, FunctionalEvent, Trajectory
"""
Explanation: Example 2 - The Tasks
Imports
End of explanation
"""
project = Project('tutorial')
"""
Explanation: Let's open our test project by its name. If you completed the previous example this should all work out of the box.
End of explanation
"""
print project.tasks
print project.trajectories
print project.models
"""
Explanation: Open all connections to the MongoDB and Session so we can get started.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
"""
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
"""
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
"""
print pdb_file.get_file()[:1000] + ' [...]'
"""
Explanation: Remember that we stored some files in the database and of course you can look at them again, should that be important.
End of explanation
"""
file_name = next(project.traj_name) # get a unique new filename
trajectory = Trajectory(
location=file_name, # this creates a new filename
frame=pdb_file, # initial frame is the PDB
length=100, # length is 100 frames
engine=engine # the engine to be used
)
"""
Explanation: The Trajectory object
Before we talk about adaptivity, let's have a look at possibilities to generate trajectories.
We assume that you successfully ran a first trajectory using a worker. Next, we talk about lots of ways to generate new trajectories.
Trajectories from a pdb
You will do this in the beginning. Remember we already have a PDB stored from setting up the engine. If you want to start from this configuration, do as before:
create the Trajectory object you want
make a task
submit the task to craft the object into existence on the HPC
A trajectory contains all necessary information to make itself. It has
a (hopefully unique) location: This will be the folder where all the files that belong to the trajectory go.
an initial frame: the initial configuration to be used to tell the MD simulation package where to start
a length in frames to run
the Engine: the actual engine I want to use to create the trajectory.
Note, the Engine is technically not required unless you want to use .run() but it makes sense, because the engine contains information about the topology and, more importantly information about which output files are generated. This is the essential information you will need for analysis, e.g. what is the filename of the trajectory file that contains the protein structure and what is its stride?
Let's first build a Trajectory from scratch
End of explanation
"""
trajectory = project.new_trajectory(
frame=pdb_file,
length=100,
engine=engine,
    number=1 # if more than one you get a list of trajectories
)
"""
Explanation: Since this is tedious to write there is a shortcut
End of explanation
"""
task_run = engine.run(trajectory)
"""
Explanation: Like in the first example, now that we have the parameters of the Trajectory we can create the task to do that.
The Task object
First, an example
End of explanation
"""
task_extend = engine.extend(trajectory, 50)
"""
Explanation: This was easy, but we can do some interesting stuff. Since we know the trajectory will exist now, we can also extend it by some frames. Remember, the trajectory does not really exist yet (not until we submit it and a worker executes it), but we can pretend that it does, since its relevant properties are set.
End of explanation
"""
project.queue(task_run, task_extend)
"""
Explanation: The only problem is to make sure the tasks are run in the correct order. This would not be a problem if the worker ran tasks in the order they are placed in the queue, but that defeats the purpose of parallel runs. Therefore an extended task knows that it depends on the existence of the source trajectory. The worker will hence only run a trajectory once the source exists.
A queueing system ?
We might wonder at this point how we manage to construct the dependency graph between all tasks and how this is handled and optimized, etc...
Well, we don't. There is no dependency graph, at least not explicitly. All we do is check, among all tasks that should be run at a given time, which of them can be run. And this is easy to check: all dependent tasks need to be completed and must have succeeded. Then we can rely on their (the dependencies') results to exist, and it makes sense to continue.
A real dependency graph would go even further and know about all future relations, and you could identify bottleneck
tasks which are necessary to allow other tasks to be run. We don't do that (yet). It could improve performance in the sense that you would run at optimal load balance and keep all workers as busy as possible. Consider our attempt a first-order dependency graph.
End of explanation
"""
engine.native_stride
"""
Explanation: A note on simulation length
Remember that we allow an engine to output multiple trajectory types with freely chosen strides? This could lead to trouble. Imagine this (unrealistic) situation:
We have
1. full trajectory with stride=10
2. a reduced protein-only trajectory with stride=7
Now run a trajectory of length=300.
We get
30+1 full (+1 for the initial frame) and
42+1 protein frames
That per se is no problem, but if you want to extend, we only have a restart file for the very last frame, and while this works for the full trajectory, for the protein trajectory you are 6 frames short. Just continuing and concatenating the two leads to a gap of 6+7=13 frames instead of 7. A small but potentially significant source of error.
So, compute the least common multiple of all strides, using the engine's native_stride property shown in the cell above, and stick to lengths that are multiples of it.
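A tiny sketch of that computation by hand, using the two example strides from above; this is plain Python (2.x, like the rest of this notebook), not an adaptivemd API:

```python
from fractions import gcd  # Python 2; in Python 3 use math.gcd

def lcm(a, b):
    return a * b // gcd(a, b)

strides = [10, 7]
print reduce(lcm, strides)  # 70 -> only run/extend in multiples of this
```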
End of explanation
"""
# task = trajectory.run().extend(50)
"""
Explanation: simpler function calls
There is also a shorter way of writing this
End of explanation
"""
# task = trajectory.run().extend([10] * 10)
"""
Explanation: This will create two tasks: the first runs the trajectory and the second extends it by 50 frames (in native engine frames).
If you want to do that several times, you can pass a list of ints which is a shortcut for calling .extend(l1).extend(l2). ...
End of explanation
"""
for t in project.trajectories:
print t.short, t.length
"""
Explanation: This will create 10! tasks that eacht will extend the previous one. Each of the task requires the previous one to finish, this way the dependency is preserved. You can use this to mimick using several restarts in between and it also means that you have no idea which worker will actually start and which worker will continue or finish a trajectory.
Checking the results
For a second, let's see if everything went fine.
End of explanation
"""
for f in project.files:
print f
"""
Explanation: If this works, then you should see one 100 frame trajectory from the setup (first example) and a second 150 frame trajectory that we just generated by running 100 frames and extending it by another 50.
If not, there might be a problem or (more likely) the tasks are not finished yet. Just try the above cell again and see if it changes to the expected output.
project.trajectories will show you only existing trajectories, not ones that are planned or have been extended. If you want to see all the ones already in the project, you can look at project.files, which is a bundle, and bundles can be filtered. But first, all files:
End of explanation
"""
from adaptivemd import DT
for t in project.files.c(Trajectory):
print t.short, t.length,
if t.created:
if t.created > 0:
print 'created @ %s' % DT(t.created)
else:
print 'modified @ %s' % DT(-t.created)
else:
print 'not existent'
"""
Explanation: Now all files filtered by [c]lass Trajectory. DT is a little helper to convert time stamps into something readable.
End of explanation
"""
trajectory = project.new_trajectory(engine['system_file'], 100)
task = engine.run(trajectory)
project.queue(task)
"""
Explanation: You see that the extended trajectory appears twice, once with length 100 and once with length 150. This is correct, because the idea of a 100 frame trajectory was used and hence is saved. But why does this one not appear in the list of trajectories? It was created first and had a timestamp of creation written to .created. This is the time when the worker finished and was successful.
At the same time, all files that are overwritten are marked as modified by setting a negative timestamp. So if
.created is None, the file does not exist and never has
.created > 0, the file exists
.created < 0, the file existed but has been overwritten
Finally, all project.trajectories are files of type Trajectory with positive created index.
Dealing with errors
Let's do something stupid and produce an error by using a wrong initial pdb file.
End of explanation
"""
task.state
"""
Explanation: Well, nothing changed obviously and we expect it to fail. So let's inspect what happened.
End of explanation
"""
print task.stdout
print task.stderr
"""
Explanation: You might need to execute this cell several times. It will first become queued, then running and finally fail and stop there.
It failed, well, we kind of knew that. No surprise here, but why? Let's look at the stdout and stderr.
End of explanation
"""
# project.queue(project.new_trajectory(pdb_file, 100, engine).run()) can be called as
project.queue(project.new_trajectory(pdb_file, 100, engine))
"""
Explanation: We see what we expect: in openmmrun.py the OpenMM executable could not load the pdb file.
NOTE If your worker dies for some reason, it will not set a STDOUT or STDERR. If you think that your task should be able to execute, then you can do task.state = 'created' and reset it to be accessible to workers. This is NOT recommended, just to explain how this works. Of course you need a new worker anyway.
What else
If you have a Trajectory object and want to create the real trajectory file, you can also put the Trajectory directly into the queue. This is equivalent to calling .run on the trajectory and submitting the resulting Task to the queue. The only downside is that you do not see the task object and cannot directly work with it, check its status, etc...
End of explanation
"""
trajectory = project.trajectories.one
"""
Explanation: Trajectories from other trajectories
This will be the most common case. In any remotely adaptive workflow you will not always start from the same position or just extend. You want to pick any existing frame and continue from there. So, let's do that.
First we get a trajectory. Every Bundle in the project (e.g. .trajectories, .models, .files, .tasks) acts like an enhanced set. You can iterate over all entries as we did before, and you can get one element, which usually is the first stored, but not always. If you are interested in Bundles see the documentation. For now it is enough to know that a bundle (as a set) has a .one function which is short for getting the first object if you iterate, as if you would call next(project.trajectories). Note that the iterator does not ensure a fixed order; you might literally get any object, if there is at least one.
End of explanation
"""
frame = trajectory[28]
print frame, frame.exists
frame = trajectory[30]
print frame, frame.exists
"""
Explanation: Good, at least 100 frames. We pick, say, the frame at index 28 (which is the 29th frame, since we start counting at zero) using the way you pick an element from a Python list (which is almost what a Trajectory represents: a list of frames).
End of explanation
"""
frame = trajectory[28]
task = project.new_trajectory(frame, 100, engine).run()
print task
frame = trajectory[30]
task = project.new_trajectory(frame, 100, engine).run()
print task
print task.description
"""
Explanation: This part is important! We are running only one full atom trajectory with a stride larger than one, so while in theory we can pick any frame from this trajectory, only some of these frames really exist. If you want to restart from a frame, it needs to be one that exists; otherwise you run into trouble.
To run a trajectory just use the frame as the initial frame.
End of explanation
"""
project.queue(task)
"""
Explanation: See how the actual frame picked in the mdconvert line is -i 3, meaning index 3, which represents frame 30 with stride 10.
Now, run the task.
End of explanation
"""
project.wait_until(task.is_done)
task.state
"""
Explanation: By the way, you can wait until something happens using project.wait_until(condition). This is not so useful in notebooks, but in scripts it is. condition here is a function that evaluates to True or False. It will be tested at regular intervals and once it is True the function returns.
End of explanation
"""
from adaptivemd.analysis.pyemma import PyEMMAAnalysis
"""
Explanation: Each Task has a function is_done that you can use. It will return once a task is done. That means it either failed or succeeded or was cancelled. Basically when it is not queued anymore.
If you want to run adaptively, all you need to do is to figure out where to start new simulations from and use the methods provided to run these.
Model tasks
There are of course other things you can do besides creating new trajectories
End of explanation
"""
modeller = PyEMMAAnalysis(
engine=engine,
outtype='protein',
features={'add_inverse_distances': {'select_Backbone': None}}
).named('pyemma')
"""
Explanation: This instance computes an MSM model of the existing trajectories that you pass to it. It is initialized with a .pdb file that is used to create features between the $c_\alpha$ atoms. This implementation requires a PDB, but in general this is not necessary; it is specific to my PyEMMAAnalysis show case.
End of explanation
"""
task = modeller.execute(list(project.trajectories))
project.queue(task)
project.wait_until(task.is_done)
for m in project.models:
print m
"""
Explanation: Again we name it pyemma for later reference.
The other two options choose which output type from the engine we want to analyse. We chose the protein trajectories since these are faster to load and have better time resolution.
The features dict expresses which features to use. In our case we use all inverse distances between backbone c_alpha atoms.
A model generating task works similarly to trajectories. You create the generator with options (so far; this will become more complex in the future) and then you create a Task by passing it a list of trajectories to be analyzed.
End of explanation
"""
model = project.models.last
print model['msm']['P']
"""
Explanation: So we generated one model. The Model objects contain (in the base version) only a .data attribute which is a dictionary of information about the generated model.
End of explanation
"""
project.find_ml_next_frame(4)
"""
Explanation: Pick frames automatically
The last thing that is implemented is a function that can utilize models to decide which frames are better to start from. The simplest one will use the counts per state, take the inverse and use this as a distribution.
End of explanation
"""
trajectories = project.new_ml_trajectory(length=100, number=4, engine=engine)
trajectories
"""
Explanation: So you can pick states according to the newest (last) model. (This will be moved to the Brain). And since we want trajectories with these frames as starting points there is also a function for that
End of explanation
"""
project.queue(trajectories)
"""
Explanation: Let's submit these before we finish this notebook with a quick discussion of workers
End of explanation
"""
project.trigger()
for w in project.workers:
if w.state == 'running':
print '[%s:%s] %s:%s' % (w.state, DT(w.seen).time, w.hostname, w.cwd)
"""
Explanation: That's it.
The Worker objects
Workers are the instances that execute tasks for you. If you did not stop the worker in the command line, it will still be running and you can check its state.
End of explanation
"""
# project.workers.last.command = 'shutdown'
"""
Explanation: Okay, the worker is running, was last reporting its heartbeat at ... and has a hostname and current working directory (where it was executed from). The generators specify which tasks from some generators are executed. If it is None then the worker runs all tasks it finds. You can use this to run specific workers for models and some for trajectory generation.
You can also control it remotely by sending it a command. shutdown will shut it down for you.
End of explanation
"""
project.close()
"""
Explanation: Afterwards you need to restart your worker to continue with these examples.
If you want to control Worker objects look at the documentation.
End of explanation
"""
|
NuGrid/NuPyCEE | DOC/Teaching/.ipynb_checkpoints/Section2.1-checkpoint.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import sygma
import omega
import stellab
#loading the observational data module STELLAB
stellab = stellab.stellab()
"""
Explanation: Section 2.1: Tracing the origin of C
Result: Identification of which star is responsible for the origin of C
End of explanation
"""
# OMEGA parameters for MW
mass_loading = 1 # How much mass is ejected from the galaxy per stellar mass formed
nb_1a_per_m = 3.0e-3 # Number of SNe Ia per stellar mass formed
sfe = 0.005 # Star formation efficiency, which sets the mass of gas
table = 'yield_tables/isotope_yield_table_MESA_only_ye.txt' # Yields for AGB and massive stars
#milky_way
o_mw = omega.omega(galaxy='milky_way',Z_trans=-1, table=table,sfe=sfe, DM_evolution=True,\
mass_loading=mass_loading, nb_1a_per_m=nb_1a_per_m, special_timesteps=60)
"""
Explanation: Simulation of the Milky Way
End of explanation
"""
# Choose abundance ratios
%matplotlib nbagg
xaxis = '[Fe/H]'
yaxis = '[C/Fe]'
# Plot observational data points (Stellab)
stellab.plot_spectro(xaxis=xaxis, yaxis=yaxis,norm='Grevesse_Noels_1993',galaxy='milky_way',show_err=False)
# Extract the numerical predictions (OMEGA)
xy_f = o_mw.plot_spectro(fig=3,xaxis=xaxis,yaxis=yaxis,return_x_y=True)
# Overplot the numerical predictions (they are normalized according to Grevesse & Noels 1993)
plt.plot(xy_f[0],xy_f[1],linewidth=4,color='w')
plt.plot(xy_f[0],xy_f[1],linewidth=2,color='k',label='OMEGA')
# Update the existing legend
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), markerscale=0.8, fontsize=13)
# Choose X and Y limits
plt.xlim(-4.5,0.5)
plt.ylim(-1.4,1.6)
"""
Explanation: Comparison of chemical evolution prediction with observation
End of explanation
"""
s0p0001=sygma.sygma(iniZ=0.0001)
s0p006=sygma.sygma(iniZ=0.006)
"""
Explanation: Tracing back to simple stellar populations.
End of explanation
"""
elem='[C/Fe]'
s0p0001.plot_spectro(fig=3,yaxis=elem,marker='D',color='b',label='Z=0.0001')
s0p006.plot_spectro(fig=3,yaxis=elem,label='Z=0.006')
"""
Explanation: What is [C/Fe] for two SSPs at Z=0.006 and Z=0.0001?
End of explanation
"""
# Plot the ejected mass of a certain element
elem='C'
s0p0001.plot_mass(fig=4,specie=elem,marker='D',color='b',label='Z=0.0001')
s0p006.plot_mass(fig=4,specie=elem,label='Z=0.006')
"""
Explanation: Now let's focus on C. What is the evolution of the total mass of C?
End of explanation
"""
elem='C'
s0p0001.plot_mass_range_contributions(specie=elem,marker='D',color='b',label='Z=0.0001')
s0p006.plot_mass_range_contributions(specie=elem,label='Z=0.006')
"""
Explanation: Which stars contribute the most to C?
End of explanation
"""
s0p0001.plot_table_yield(fig=6,iniZ=0.0001,table='yield_tables/isotope_yield_table.txt',yaxis='C-12',
masses=[1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],marker='D',color='b',)
s0p006.plot_table_yield(fig=6,iniZ=0.006,table='yield_tables/isotope_yield_table.txt',yaxis='C-12',
masses=[1.0, 1.65, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
"""
Explanation: Which stellar yields contribute the most?
End of explanation
"""
|
geoneill12/phys202-2015-work | assignments/assignment03/NumpyEx04.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Numpy Exercise 4
Imports
End of explanation
"""
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
"""
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
"""
def complete_deg(n):
a = np.identity((n), dtype = np.int)
for element in np.nditer(a, op_flags=['readwrite']):
if element > 0:
element[...] = element + n - 2
return a
complete_deg(3)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
"""
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
"""
def complete_adj(n):
b = np.identity((n), dtype = np.int)
for element in np.nditer(b, op_flags=['readwrite']):
if element == 0:
element[...] = 1
else:
element[...] = 0
return b
complete_adj(3)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
"""
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
"""
def L(n):
L = complete_deg(n) - complete_adj(n)
return L
print L(1)
print L(2)
print L(3)
print L(4)
print L(5)
print L(6)
"""
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation
"""
|
openconnectome/ocpdocs | mrgraphs/dataset_variance/dataset_variance.ipynb | apache-2.0 | import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import nibabel as nb
import os
from histogram_window import histogram_windowing
"""
Explanation: Analysis of Dataset Variance
Data that are collected differently look different. This principle extends to all data (that I can think of), and of course MRI is no exception. In the case of MRI, batch effects exist across studies due to such minor differences as gradient amplitudes, the technician operating the machine, and the time of day, as well as much larger differences such as the imaging sequence used and the manufacturer of the scanner. Here, we investigate these batch effect differences and illustrate where we believe we can find the true "signal" in the acquired data.
End of explanation
"""
kki2009 = histogram_windowing('./data/KKI2009_b0_pdfs.pkl','KKI2009')
"""
Explanation: KKI2009
End of explanation
"""
bnu1 = histogram_windowing('./data/BNU1_b0_pdfs.pkl', 'BNU1')
"""
Explanation: BNU1
End of explanation
"""
bnu3 = histogram_windowing('./data/BNU3_b0_pdfs.pkl', 'BNU3')
"""
Explanation: BNU3
End of explanation
"""
nki1 = histogram_windowing('./data/NKI1_b0_pdfs.pkl', 'NKI1')
"""
Explanation: NKI1
End of explanation
"""
mrn114 = histogram_windowing('./data/MRN114_b0_pdfs.pkl', 'MRN114')
"""
Explanation: MRN114
End of explanation
"""
nkienh = histogram_windowing('./data/NKIENH_b0_pdfs.pkl', 'NKIENH')
"""
Explanation: NKIENH
End of explanation
"""
swu4 = histogram_windowing('./data/SWU4_b0_pdfs.pkl', 'SWU4')
"""
Explanation: SWU4
End of explanation
"""
mrn1313 = histogram_windowing('./data/MRN1313_b0_pdfs.pkl', 'MRN1313')
"""
Explanation: MRN1313
End of explanation
"""
datasets = list(('./data/BNU1', './data/BNU3', './data/HCP500',
'./data/Jung2015', './data/KKI2009', './data/MRN114',
'./data/NKI1', './data/NKIENH', './data/SWU4'))
files = list()
for f in datasets:
files.append([f + '/' + single for single in os.listdir(f)])
for scan in files:
bval = np.loadtxt(scan[0])
bval[np.where(bval==np.min(bval))] = 0
im = nb.load(scan[2])
b0_loc = np.where(bval==0)[0][0]
dti = im.get_data()[:,:,:,b0_loc]
print "----------"
print "Scan: " + os.path.basename(scan[2])
print "Shape of B0 volume: " + str(dti.shape)
print "Datatype: " + str(dti.dtype)
try:
print "Min: " + str(dti.min()) + " (" + str(np.iinfo(dti.dtype).min) + ")"
print "Max: " + str(dti.max()) + " (" + str(np.iinfo(dti.dtype).max) + ")"
except ValueError:
print "Min: " + str(dti.min()) + " (" + str(np.finfo(dti.dtype).min) + ")"
print "Max: " + str(dti.max()) + " (" + str(np.finfo(dti.dtype).max) + ")"
plt.hist(np.ravel(dti), bins=2000)
plt.title('Histogram for: ' + os.path.basename(scan[2]))
plt.xscale('log')
plt.xlabel("Value (log scale)")
plt.ylabel("Frequency")
plt.show()
"""
Explanation: (Old method)
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/a47d41a5d6e12802ada8e8ab7ecc9ffc/plot_50_configure_mne.ipynb | bsd-3-clause | import os
import mne
"""
Explanation: Configuring MNE-Python
This tutorial covers how to configure MNE-Python to suit your local system and
your analysis preferences.
We begin by importing the necessary Python modules:
End of explanation
"""
print(mne.get_config('MNE_USE_CUDA'))
print(type(mne.get_config('MNE_USE_CUDA')))
"""
Explanation: Getting and setting configuration variables
Configuration variables are read and written using the functions
:func:mne.get_config and :func:mne.set_config. To read a specific
configuration variable, pass its name to :func:~mne.get_config as the
key parameter (key is the first parameter so you can pass it unnamed
if you want):
End of explanation
"""
try:
mne.set_config('MNE_USE_CUDA', True)
except TypeError as err:
print(err)
"""
Explanation: Note that the string values read from the JSON file are not parsed in any
way, so :func:~mne.get_config returns a string even for true/false config
values, rather than a Python boolean <bltin-boolean-values>.
Similarly, :func:~mne.set_config will only set string values (or None
values, to unset a variable):
End of explanation
"""
print(mne.get_config('missing_config_key', default='fallback value'))
"""
Explanation: If you're unsure whether a config variable has been set, there is a
convenient way to check it and provide a fallback in case it doesn't exist:
:func:~mne.get_config has a default parameter.
End of explanation
"""
print(mne.get_config()) # same as mne.get_config(key=None)
"""
Explanation: There are also two convenience modes of :func:~mne.get_config. The first
will return a :class:dict containing all config variables (and their
values) that have been set on your system; this is done by passing
key=None (which is the default, so it can be omitted):
End of explanation
"""
print(mne.get_config(key=''))
"""
Explanation: The second convenience mode will return a :class:tuple of all the keys that
MNE-Python recognizes and uses, regardless of whether they've been set on
your system. This is done by passing an empty string '' as the key:
End of explanation
"""
mne.set_config('MNEE_USE_CUUDAA', 'false')
"""
Explanation: It is possible to add config variables that are not part of the recognized
list, by passing any arbitrary key to :func:~mne.set_config. This will
yield a warning, however, which is a nice check in cases where you meant to
set a valid key but simply misspelled it:
End of explanation
"""
mne.set_config('MNEE_USE_CUUDAA', None)
assert 'MNEE_USE_CUUDAA' not in mne.get_config('')
"""
Explanation: Let's delete that config variable we just created. To unset a config
variable, use :func:~mne.set_config with value=None. Since we're still
dealing with an unrecognized key (as far as MNE-Python is concerned) we'll
still get a warning, but the key will be unset:
End of explanation
"""
print(mne.get_config_path())
"""
Explanation: Where configurations are stored
MNE-Python stores configuration variables in a JSON_ file. By default, this
file is located in :file:{%USERPROFILE%}\\.mne\\mne-python.json on Windows
and :file:{$HOME}/.mne/mne-python.json on Linux or macOS. You can get the
full path to the config file with :func:mne.get_config_path.
End of explanation
"""
# make sure it's not in the JSON file (no error means our assertion held):
assert mne.get_config('PATH', use_env=False) is None
# but it *is* in the environment:
print(mne.get_config('PATH'))
"""
Explanation: However it is not a good idea to directly edit files in the :file:.mne
directory; use the getting and setting functions described in the
previous section <config-get-set>.
If for some reason you want to load the configuration from a different
location, you can pass the home_dir parameter to
:func:~mne.get_config_path, specifying the parent directory of the
:file:.mne directory where the configuration file you wish to load is
stored.
Using environment variables
For compatibility with :doc:MNE-C <../../install/mne_c>, MNE-Python
also reads and writes environment variables_ to specify configuration. This
is done with the same functions that read and write the JSON configuration,
and is controlled with the parameters use_env and set_env. By
default, :func:~mne.get_config will check :data:os.environ before
checking the MNE-Python JSON file; to check only the JSON file use
use_env=False. To demonstrate, here's an environment variable that is not
specific to MNE-Python (and thus is not in the JSON config file):
End of explanation
"""
mne.set_config('foo', 'bar', set_env=False)
print('foo' in os.environ.keys())
mne.set_config('foo', 'bar')
print('foo' in os.environ.keys())
mne.set_config('foo', None) # unsetting a key deletes var from environment
print('foo' in os.environ.keys())
"""
Explanation: Also by default, :func:~mne.set_config will set values in both the JSON
file and in :data:os.environ; to set a config variable only in the JSON
file use set_env=False. Here we'll use :func:print statements to confirm
that an environment variable is being created and deleted (we could have used
the Python assert statement <assert> instead, but it doesn't print any
output when it succeeds so it's a little less obvious):
End of explanation
"""
print(mne.get_config('MNE_LOGGING_LEVEL'))
"""
Explanation: Logging
One important configuration variable is MNE_LOGGING_LEVEL. Throughout the
module, messages are generated describing the actions MNE-Python is taking
behind-the-scenes. How you set MNE_LOGGING_LEVEL determines how many of
those messages you see. The default logging level on a fresh install of
MNE-Python is info:
End of explanation
"""
kit_data_path = os.path.join(os.path.abspath(os.path.dirname(mne.__file__)),
'io', 'kit', 'tests', 'data', 'test.sqd')
raw = mne.io.read_raw_kit(kit_data_path, verbose='warning')
"""
Explanation: The logging levels that can be set as config variables are debug,
info, warning, error, and critical. Around 90% of the log
messages in MNE-Python are info messages, so for most users the choice is
between info (tell me what is happening) and warning (tell me only if
something worrisome happens). The debug logging level is intended for
MNE-Python developers.
In an earlier section <config-get-set> we saw how
:func:mne.set_config is used to change the logging level for the current
Python session and all future sessions. To change the logging level only for
the current Python session, you can use :func:mne.set_log_level instead.
The :func:~mne.set_log_level function takes the same five string options
that are used for the MNE_LOGGING_LEVEL config variable; additionally, it
can accept :class:int or :class:bool values that are equivalent to those
strings. The equivalencies are given in this table:
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+
With many MNE-Python functions it is possible to change the logging level
temporarily for just that function call, by using the verbose parameter.
To illustrate this, we'll load some sample data with different logging levels
set. First, with log level warning:
End of explanation
"""
raw = mne.io.read_raw_kit(kit_data_path, verbose='info')
"""
Explanation: No messages were generated, because none of the messages were of severity
"warning" or worse. Next, we'll load the same file with log level info
(the default level):
End of explanation
"""
raw = mne.io.read_raw_kit(kit_data_path, verbose='debug')
"""
Explanation: This time, we got a few messages about extracting information from the file,
converting that information into the MNE-Python :class:~mne.Info format,
etc. Finally, if we request debug-level information, we get even more
detail:
End of explanation
"""
|
rsterbentz/phys202-2015-work | days/day08/Display.ipynb | mit | class Ball(object):
pass
b = Ball()
b.__repr__()
print(b)
"""
Explanation: Display of Rich Output
In Python, objects can declare their textual representation using the __repr__ method.
End of explanation
"""
class Ball(object):
def __repr__(self):
return 'TEST'
b = Ball()
print(b)
"""
Explanation: Overriding the __repr__ method:
End of explanation
"""
from IPython.display import display
"""
Explanation: IPython expands on this idea and allows objects to declare other, rich representations including:
HTML
JSON
PNG
JPEG
SVG
LaTeX
A single object can declare some or all of these representations; all of them are handled by IPython's display system.
Basic display imports
The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
End of explanation
"""
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
"""
Explanation: A few points:
Calling display on an object will send all possible representations to the Notebook.
These representations are stored in the Notebook document.
In general the Notebook will use the richest available representation.
If you want to display a particular representation, there are specific functions for that:
End of explanation
"""
from IPython.display import Image
i = Image(filename='./ipython-image.png')
display(i)
"""
Explanation: Images
To work with images (JPEG, PNG) use the Image class.
End of explanation
"""
i
"""
Explanation: Returning an Image object from an expression will automatically display it:
End of explanation
"""
Image(url='http://python.org/images/python-logo.gif')
"""
Explanation: An image can also be displayed from raw data or a URL.
End of explanation
"""
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
"""
Explanation: HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
End of explanation
"""
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
%%html
<style>
#notebook {
background-color: skyblue;
font-family: times new roman;
}
</style>
"""
Explanation: You can also use the %%html cell magic to accomplish the same thing.
End of explanation
"""
from IPython.display import Javascript
"""
Explanation: You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected.
JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
End of explanation
"""
js = Javascript('alert("hi")');
display(js)
"""
Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it.
End of explanation
"""
%%javascript
alert("hi");
"""
Explanation: The same thing can be accomplished using the %%javascript cell magic:
End of explanation
"""
Javascript(
"""$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')"""
)
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("./flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
"""
Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.
End of explanation
"""
from IPython.display import Audio
Audio("./scrubjay.mp3")
"""
Explanation: Audio
IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
End of explanation
"""
import numpy as np
max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
times = np.linspace(0,L,rate*L)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
"""
Explanation: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs:
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
"""
Explanation: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
End of explanation
"""
from IPython.display import IFrame
IFrame('https://ipython.org', width='100%', height=350)
"""
Explanation: External sites
You can even embed an entire page from another site in an iframe; for example this is IPython's home page:
End of explanation
"""
from IPython.display import FileLink, FileLinks
FileLink('../Visualization/Matplotlib.ipynb')
"""
Explanation: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
End of explanation
"""
FileLinks('./')
"""
Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
End of explanation
"""
|
bioinformatica-corso/lezioni | laboratorio/lezione10-29ott21/esercizio5-soluzione.ipynb | cc0-1.0 | import re
"""
Explanation: Exercise 5
Take as input a file in GTF (Gene Transfer Format) format, which annotates a set of genes on a reference genome, together with the FASTA file of the reference genome, and produce:
the transcript sequences or the coding sequences (CDS) of the annotated genes in FASTA format, depending on the user's choice
the set of HUGO NAMES of the genes for which a sequence (transcript or CDS) was produced in the previous step
The FASTA header of each produced sequence must contain:
the HUGO name of the reference gene
the identifier of the reference transcript
the length of the produced sequence
the type of sequence (transcript or CDS)
the strand of the gene
Example of a header for a transcript:
>ARHGAP4; U52112.4-003; len=3235; type=transcript; strand=-
Example of a header for a CDS:
>AVPR2; U52112.2-003; len=642; type=cds; strand=+
Input parameters:
file in GTF format
reference genome file in FASTA format
feature of the sequence to reconstruct: exon to reconstruct the transcripts or CDS to reconstruct the coding sequences
Requirements:
a function format_fasta() must be defined that takes a FASTA header and a sequence as arguments and returns the sequence in FASTA format split into lines of 80 characters.
a function reverse_complement() must be defined that takes a nucleotide sequence as argument and returns its reverse&complement.
a function compose_feature() must be defined that takes as arguments a list of features as (start, end) tuples, the reference genome and the strand of the reference gene, and concatenates the feature sequences, applying the reverse&complement if the strand is -.
NOTA BENE: the attributes in the ninth field of the GTF file do not appear in a fixed order within the field. To extract a given attribute you must therefore use a regular expression and not the split() method.
Solution
Import the re module to use regular expressions.
End of explanation
"""
def format_fasta(header, sequence):
return header + '\n' + '\n'.join(re.findall('\w{1,80}', sequence))
"""
Explanation: Definition of the format_fasta() function
The function takes as arguments a string containing a FASTA header and a sequence (nucleotide or protein) and returns the sequence in FASTA format split into lines of 80 characters.
End of explanation
"""
def reverse_complement(sequence):
sequence = sequence.lower()
sequence = sequence[::-1]
complement = {'a':'t', 't':'a', 'c':'g', 'g':'c'}
return ''.join([complement[c] for c in sequence])
reverse_complement('aaattt')
"""
Explanation: NOTA BENE: assume that the header passed to the function already has the > symbol at the beginning, but not the \n symbol at the end.
Definition of the reverse_complement() function
The function takes as argument a string containing a nucleotide sequence and returns the reverse and complement version of the sequence.
End of explanation
"""
def compose_feature(feature_list, reference_sequence, strand):
reconstructed_sequence = ''.join(reference_sequence[f[0]-1:f[1]] for f in sorted(feature_list))
if strand == '-':
reconstructed_sequence = reverse_complement(reconstructed_sequence)
return reconstructed_sequence
"""
Explanation: NOTA BENE: make the function independent of the case (uppercase or lowercase) of the input sequence.
Definition of the compose_feature() function
The function takes as arguments a list of features as (start, end) tuples, the reference genome and the strand of the gene, concatenates the feature sequences, and applies the reverse&complement if the strand of the gene is -.
End of explanation
"""
gtf_file_name = './input.gtf'
reference_file_name = './ENm006.fa'
feature_name = 'CDS'
"""
Explanation: NOTA BENE: always concatenate the feature sequences in increasing coordinate order.
Input parameters
End of explanation
"""
with open(reference_file_name, 'r') as input_file:
reference_file_rows = input_file.readlines()
reference_file_rows
"""
Explanation: Reading the FASTA file of the reference genome
Read the reference genome file into the list of rows reference_file_rows
End of explanation
"""
genomic_reference = ''.join(reference_file_rows[1:]).rstrip().replace('\n', '')
genomic_reference
"""
Explanation: Build the reference sequence as a single string.
End of explanation
"""
with open(reference_file_name, 'r') as input_file:
reference_file_string = input_file.read()
reference_file_string
"""
Explanation: Read the reference genome file into the string reference_file_string
End of explanation
"""
genomic_reference2 = ''.join(re.findall('\n(\w+)', reference_file_string))
genomic_reference2
genomic_reference2 == genomic_reference
"""
Explanation: Build the reference sequence as a single string.
End of explanation
"""
with open(gtf_file_name, 'r') as input_file:
gtf_file_rows = input_file.readlines()
gtf_file_rows
"""
Explanation: Reading the records of the GTF file
Read the GTF file records into the list of rows gtf_file_rows
End of explanation
"""
gtf_file_rows = [row for row in gtf_file_rows if row.rstrip().split()[2] == feature_name]
gtf_file_rows
"""
Explanation: Filtering the GTF records needed to reconstruct the sequences of the chosen type
Remove from the list gtf_file_rows the GTF records that do not correspond to the type of feature composing the sequence chosen for reconstruction, i.e. those whose third field is not equal to the value of the variable feature_name (exon if the full-length transcripts are to be reconstructed and CDS if the coding sequences are to be reconstructed).
End of explanation
"""
strand_dict = {}
"""
Explanation: Building the strand dictionary and the set of annotated genes
Starting from the previous list, build:
- the strand dictionary:
- key: HUGO name of the gene
- value: strand of the gene (+ or -)
the set of genes annotated with respect to the type of sequence to be reconstructed
NOTA BENE: the strand value (seventh field of the GTF record) is constant for a given gene.
Initialize the empty dictionary.
End of explanation
"""
for row in gtf_file_rows:
strand = row.rstrip().split('\t')[6]
#hugo_name = re.search('[\w\s;]+gene_id\s"(\w+)', row.rstrip().split('\t')[8]).group(1)
hugo_name = re.search('gene_id\s"(\w+)";', row).group(1)
strand_dict[hugo_name] = strand
strand_dict
"""
Explanation: Iterate over the list of records whose type equals feature_name and fill the dictionary.
End of explanation
"""
gene_set = set(strand_dict)
gene_set
"""
Explanation: Extract from the dictionary the set of annotated genes.
End of explanation
"""
id_dict = {}
composition_dict = {}
"""
Explanation: Reconstructing the sequences
Build:
the dictionary of transcript IDs:
key: HUGO name of the gene
value: set of the transcript_ids involved in records of type exon (if the transcripts are to be reconstructed) or in records of type CDS (if the coding sequences are to be reconstructed)
the dictionary of feature compositions:
key: identifier of the transcript
value: list of the (start, end) tuples of the features (records) that compose the sequence to be reconstructed (transcript or coding sequence) for that transcript
Initialize the empty dictionaries.
End of explanation
"""
for row in gtf_file_rows:
hugo_name = re.search('gene_id\s"(\w+)";', row).group(1)
transcript_id = re.search('transcript_id\s"([^"]+)";', row).group(1)
feature_start = row.rstrip().split('\t')[3]
feature_end = row.rstrip().split('\t')[4]
id_dict_value = id_dict.get(hugo_name, set())
id_dict_value.add(transcript_id)
id_dict.update([(hugo_name, id_dict_value)])
composition_dict_value = composition_dict.get(transcript_id, list())
composition_dict_value.append((int(feature_start), int(feature_end)))
composition_dict.update([(transcript_id, composition_dict_value)])
"""
Explanation: Iterate over the list gtf_file_rows and fill the two dictionaries.
End of explanation
"""
id_dict
composition_dict
"""
Explanation: NOTA BENE: a regular expression similar to the one used to extract the HUGO NAME, i.e. 'transcript_id\s+"(\w+)";', cannot work to extract the transcript ID from the record, because that ID also contains the dot symbol . which is not part of the word-character class represented by \w. It is therefore better to use the regular expression 'transcript_id\s+"([^"]+)";'.
End of explanation
"""
sequence_fasta_list = []
sequence_type = {'exon' : 'transcript', 'CDS' : 'cds'}
for hugo_name in id_dict:
for transcript_id in id_dict[hugo_name]:
r_sequence = compose_feature(composition_dict[transcript_id], genomic_reference, strand_dict[hugo_name])
header = '>' + hugo_name + '; ' + transcript_id + '; len=' + str(len(r_sequence)) + '; type=' + sequence_type[feature_name] + '; strand=' + strand_dict[hugo_name]
sequence_fasta_list.append((header, r_sequence))
sequence_fasta_list
"""
Explanation: Starting from the previous dictionaries, build the list of (header, sequence) tuples in which the first element is the FASTA header and the second element is the reconstructed sequence.
The header must be of the form:
>ARHGAP4; U52112.4-003; len=3235; type=transcript; strand=-
if the full-length transcripts are being reconstructed, and:
>ARHGAP4; U52112.4-005; len=642; type=cds; strand=-
if the coding sequences (CDS) are being reconstructed.
End of explanation
"""
sequence_fasta_list = [format_fasta(t[0], t[1]) for t in sequence_fasta_list]
for seq in sequence_fasta_list:
print(seq)
"""
Explanation: Transform the list of tuples into a list of sequences in FASTA format.
End of explanation
"""
|
mdda/fossasia-2016_deep-learning | notebooks/work-in-progress/2018-08_DidTheModelUnderstandTheQuestion/VQA_playground.ipynb | mit | # Upgrade pillow to latest version (solves a colab Issue) :
! pip install -U 'Pillow>=5.2.0'
import os, sys
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings("ignore", category=UserWarning) # Cleaner demos : Don't do this normally...
"""
Explanation: VQA : Use and Abuse
To answer a question
Convert the image to features 'v'
Convert the question to a torch vector of longs
Pass both into the VQA model
Interpret the softmax-y answer vectors
End of explanation
"""
if not os.path.isfile('./pytorch-vqa/README.md'):
!git clone https://github.com/Cyanogenoid/pytorch-vqa.git
sys.path.append(os.path.realpath('./pytorch-vqa'))
# https://github.com/Cyanogenoid/pytorch-vqa/releases
if not os.path.isfile('./2017-08-04_00.55.19.pth'): # 81Mb model
!wget https://github.com/Cyanogenoid/pytorch-vqa/releases/download/v1.0/2017-08-04_00.55.19.pth
try:
import torch
except:
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
accelerator = 'cu80' if os.path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip install -q \
http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl \
torchvision
import torch
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
import model # from pytorch-vqa
#saved_state = torch.load('logs/2017-08-04_00:55:19.pth')
saved_state = torch.load('./2017-08-04_00.55.19.pth', map_location=device)
tokens = len(saved_state['vocab']['question']) + 1
saved_state.keys() # See what's in the saved state
# Load the predefined model
vqa_net = torch.nn.DataParallel(model.Net(tokens))
vqa_net.load_state_dict(saved_state['weights'])
vqa_net.to(device)
vqa_net.eval()
"""
Explanation: Download the Prebuilt VQA model and Weights
End of explanation
"""
if not os.path.isfile('./pytorch-resnet/README.md'):
!git clone https://github.com/Cyanogenoid/pytorch-resnet.git
sys.path.append(os.path.realpath('./pytorch-resnet'))
import resnet # from pytorch-resnet
import torchvision.transforms as transforms
from PIL import Image
def get_transform(target_size, central_fraction=1.0):
return transforms.Compose([
transforms.Scale(int(target_size / central_fraction)),
transforms.CenterCrop(target_size),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
class ResNetLayer4(torch.nn.Module):
def __init__(self):
super(ResNetLayer4, self).__init__()
self.model = resnet.resnet152(pretrained=True)
# from visual_qa_analysis/config.py
image_size = 448 # scale shorter end of image to this size and centre crop
#output_size = image_size // 32 # size of the feature maps after processing through a network
output_features = 2048 # number of feature maps thereof
central_fraction = 0.875 # only take this much of the centre when scaling and centre cropping
self.transform = get_transform(image_size, central_fraction)
def save_output(module, input, output):
self.buffer = output
self.model.layer4.register_forward_hook(save_output)
def forward(self, x):
self.model(x)
return self.buffer
def image_to_features(self, img_file):
img = Image.open(img_file).convert('RGB')
img_transformed = self.transform(img)
#print(img_transformed.size())
img_batch = img_transformed.unsqueeze(0).to(device)
return self.forward(img_batch)
resnet_layer4 = ResNetLayer4().to(device) # Downloads 241Mb model when first run
# Sample images :
image_urls, image_path, image_files = [
'https://www.pets4homes.co.uk/images/articles/2709/large/tabby-cat-colour-and-pattern-genetics-5516c44dbd383.jpg',
'https://imgc.allpostersimages.com/img/print/posters/cat-black-jumping-off-wall_a-G-12469828-14258383.jpg',
'https://i.ytimg.com/vi/AIwlyly7Eso/hqdefault.jpg',
'https://upload.wikimedia.org/wikipedia/commons/9/9b/Black_pussy_-_panoramio.jpg',
'https://www.thehappycatsite.com/wp-content/uploads/2017/06/siamese5.jpg',
'https://c.pxhere.com/photos/15/e5/cat_roof_home_architecture_building_roofs_animal_sit-536976.jpg!d',
'http://kitticats.com/wp-content/uploads/2015/05/cat4.jpg',
], './img/', []
os.makedirs('./img', exist_ok=True)
for url in image_urls:
image_file=os.path.join(image_path, os.path.basename(url))
image_files.append(image_file)
if not os.path.isfile(image_file):
!wget {url} --directory-prefix ./img/
image_files
v = resnet_layer4.image_to_features(image_files[0])
v.size()
"""
Explanation: Now get the Correct Image feature network
End of explanation
"""
vocab = saved_state['vocab']
vocab.keys() # dict_keys(['question', 'answer'])
list(vocab['question'].items())[:5] # [('the', 1), ('is', 2), ('what', 3), ('are', 4), ('this', 5)]
qtoken_to_index = vocab['question']
QUESTION_LENGTH_MAX = 30 # say...
def encode_question(question_str):
""" Turn a question into a vector of indices and a question length """
question_arr = question_str.lower().split(' ')
#vec = torch.zeros(QUESTION_LENGTH_MAX).long()
vec = torch.zeros(len(question_arr)).long()
for i, token in enumerate(question_arr):
vec[i] = qtoken_to_index.get(token, 0)
return vec.to(device), torch.tensor( len(question_arr) ).to(device)
list(vocab['answer'].items())[:5] # [('yes', 0), ('no', 1), ('2', 2), ('1', 3), ('white', 4)]
answer_words = ['UNDEF'] * len(vocab['answer'])
for w,idx in vocab['answer'].items():
answer_words[idx]=w
len(answer_words), answer_words[:10] # 3000, ['yes', 'no', '2', '1', 'white', '3', 'red', 'blue', '4', 'green']
# Important things to know...
'colour' in qtoken_to_index, 'color' in qtoken_to_index, 'tabby' in answer_words
"""
Explanation: Have a look at how the vocab is built
End of explanation
"""
image_idx = 1
image_filename = image_files[image_idx]
img = Image.open(image_filename).convert('RGB')
plt.imshow(img)
v0 = resnet_layer4.image_to_features(image_filename)
q, q_len = encode_question("is there a cat in the picture")
#q, q_len = encode_question("what color is the cat's fur")
#q, q_len = encode_question("is the cat jumping up or down")
q, q_len
ans = vqa_net(v0, q.unsqueeze(0), q_len.unsqueeze(0))
ans.data.cpu()[0:10]
_, answer_idx = ans.data.cpu().max(dim=1)
answer_words[ answer_idx ]
"""
Explanation: Let's test a single Image
End of explanation
"""
def vqa_single_softmax(im_features, q_str):
q, q_len = encode_question(q_str)
ans = vqa_net(im_features, q.unsqueeze(0), q_len.unsqueeze(0))
return ans.data.cpu()
def vqa(image_filename, question_arr):
plt.imshow(Image.open(image_filename).convert('RGB')); plt.show()
image_features = resnet_layer4.image_to_features(image_filename)
for question_str in question_arr:
_, answer_idx = vqa_single_softmax(image_features, question_str).max(dim=1)
#print(question_str+" -> "+answer_words[ answer_idx ])
print((answer_words[ answer_idx ]+' '*8)[:8]+" <- "+question_str)
image_idx = 0 # 6
vqa(image_files[image_idx], [
"is there a cat in the picture",
"is this a picture of a cat",
"is the animal in the picture a cat or a dog",
"what color is the cat",
"what color are the cat's eyes",
])
"""
Explanation: Let's systematise a little
End of explanation
"""
def leave_one_out(image_filename, question_base):
plt.imshow(Image.open(image_filename).convert('RGB')); plt.show()
image_features = resnet_layer4.image_to_features(image_filename)
question_arr = question_base.lower().split(' ')
for i, word_omit in enumerate(question_arr):
question_str = ' '.join( question_arr[:i]+question_arr[i+1:] )
score, answer_idx = vqa_single_softmax(image_features, question_str).max(dim=1)
#print(question_str+" -> "+answer_words[ answer_idx ])
print((answer_words[ answer_idx ]+' '*8)[:8]+" <- "+question_str) #, score
image_idx = 0
leave_one_out(image_files[image_idx], "is there a cat in the picture") # mouse? dog?
"""
Explanation: Now let's stress the model
Leave one word out
End of explanation
"""
def leave_out_combos(image_filename, question_base):
plt.imshow(Image.open(image_filename).convert('RGB')); plt.show()
image_features = resnet_layer4.image_to_features(image_filename)
question_arr = question_base.lower().split(' ')
for i in range(2 ** len(question_arr)):
q_arr = [question_arr[j] for j in range(len(question_arr)) if (i & (2**j))==0 ]
question_str = ' '.join( q_arr )
_, answer_idx = vqa_single_softmax(image_features, question_str).max(dim=1)
print((answer_words[ answer_idx ]+' '*8)[:8]+" <- "+question_str)
image_idx = 4
leave_out_combos(image_files[image_idx], "is there a cat in the picture")
#leave_out_combos(image_files[image_idx], "what color are cat's eyes")
"""
Explanation: Leave all combos of words out (think: binary)
End of explanation
"""
def leave_out_best(image_filename, question_base):
plt.imshow(Image.open(image_filename).convert('RGB')); plt.show()
image_features = resnet_layer4.image_to_features(image_filename)
_, answer_true = vqa_single_softmax(image_features, question_base).max(dim=1)
print((answer_words[ answer_true ]+' '*8)[:8]+" <- "+question_base)
print()
while True:
question_arr = question_base.lower().split(' ')
score_best, q_best = None, ''
for i, word_omit in enumerate(question_arr):
question_str = ' '.join( question_arr[:i]+question_arr[i+1:] )
score, answer_idx = vqa_single_softmax(image_features, question_str).max(dim=1)
if answer_idx==answer_true:
print((answer_words[ answer_idx ]+' '*8)[:8]+" <- "+question_str) #, score
if (score_best is None or score>score_best):
score_best, question_base = score, question_str
print()
if score_best is None or len(question_base)==0: break
image_idx = 3
leave_out_best(image_files[image_idx], "is there a cat in the picture")
"""
Explanation: Iteratively, leave out the word that is 'weakest'
End of explanation
"""
|
jrg365/gpytorch | examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb | mit | import gpytorch
import torch
"""
Explanation: Spectral GP Learning with Deltas
In this notebook, we demonstrate another approach to spectral learning with GPs, learning a spectral density as a simple mixture of deltas. This has been explored, for example, as early as Lázaro-Gredilla et al., 2010.
Compared to learning Gaussian mixtures as in the SM kernel, this approach has a number of pros and cons. In its favor, it is often very robust and does not have as severe issues with local optima, as it is easier to make progress when performing gradient descent on 1 of 1000 deltas compared to the parameters of 1 of 3 Gaussians. Additionally, implemented using CG in GPyTorch, this approach affords linear time and space in the number of data points N. Against it, it has significantly more parameters which can take many more iterations of training to learn, and it corresponds to a finite basis expansion and is therefore a parametric model.
End of explanation
"""
import os
import urllib.request
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../BART_sample.pt'):
print('Downloading \'BART\' sample dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1A6LqCHPA5lHa5S3lMH8mLMNEgeku8lRG', '../BART_sample.pt')
torch.manual_seed(1)
if smoke_test:
train_x, train_y, test_x, test_y = torch.randn(2, 100, 1), torch.randn(2, 100), torch.randn(2, 100, 1), torch.randn(2, 100)
else:
train_x, train_y, test_x, test_y = torch.load('../BART_sample.pt', map_location='cpu')
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)
train_x_min = train_x.min()
train_x_max = train_x.max()
train_x = train_x - train_x_min
test_x = test_x - train_x_min
train_y_mean = train_y.mean(dim=-1, keepdim=True)
train_y_std = train_y.std(dim=-1, keepdim=True)
train_y = (train_y - train_y_mean) / train_y_std
test_y = (test_y - train_y_mean) / train_y_std
"""
Explanation: Load Data
For this notebook, we'll be using a sample set of timeseries data of BART ridership on the 5 most commonly traveled stations in San Francisco. This subsample of data was selected and processed from Pyro's examples http://docs.pyro.ai/en/stable/_modules/pyro/contrib/examples/bart.html
End of explanation
"""
class SpectralDeltaGP(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, num_deltas, noise_init=None):
likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.GreaterThan(1e-11))
likelihood.register_prior("noise_prior", gpytorch.priors.HorseshoePrior(0.1), "noise")
likelihood.noise = 1e-2
super(SpectralDeltaGP, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
base_covar_module = gpytorch.kernels.SpectralDeltaKernel(
num_dims=train_x.size(-1),
num_deltas=num_deltas,
)
base_covar_module.initialize_from_data(train_x[0], train_y[0])
self.covar_module = gpytorch.kernels.ScaleKernel(base_covar_module)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
model = SpectralDeltaGP(train_x, train_y, num_deltas=1500)
if torch.cuda.is_available():
model = model.cuda()
"""
Explanation: Define a Model
The only thing of note here is the use of the kernel. For this example, we'll learn a kernel with 1500 deltas in the mixture (matching num_deltas in the cell above), and initialize by sampling directly from the empirical spectrum of the data.
End of explanation
"""
model.train()
mll = gpytorch.mlls.ExactMarginalLogLikelihood(model.likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[40])
num_iters = 1000 if not smoke_test else 4
with gpytorch.settings.max_cholesky_size(0): # Ensure we dont try to use Cholesky
for i in range(num_iters):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
if train_x.dim() == 3:
loss = loss.mean()
loss.backward()
optimizer.step()
if i % 10 == 0:
print(f'Iteration {i} - loss = {loss:.2f} - noise = {model.likelihood.noise.item():e}')
scheduler.step()
# Get into evaluation (predictive posterior) mode
model.eval()
# Evaluate over the concatenated train and test inputs (not a regular grid)
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.max_cholesky_size(0), gpytorch.settings.fast_pred_var():
test_x_f = torch.cat([train_x, test_x], dim=-2)
observed_pred = model.likelihood(model(test_x_f))
varz = observed_pred.variance
"""
Explanation: Train
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
_task = 3
plt.subplots(figsize=(15, 15), sharex=True, sharey=True)
for _task in range(2):
ax = plt.subplot(3, 1, _task + 1)
with torch.no_grad():
# Initialize plot
# f, ax = plt.subplots(1, 1, figsize=(16, 12))
# Get upper and lower confidence bounds
lower = observed_pred.mean - varz.sqrt() * 1.98
upper = observed_pred.mean + varz.sqrt() * 1.98
lower = lower[_task] # + weight * test_x_f.squeeze()
upper = upper[_task] # + weight * test_x_f.squeeze()
# Plot training data as black stars
ax.plot(train_x[_task].detach().cpu().numpy(), train_y[_task].detach().cpu().numpy(), 'k*')
ax.plot(test_x[_task].detach().cpu().numpy(), test_y[_task].detach().cpu().numpy(), 'r*')
# Plot predictive means as blue line
ax.plot(test_x_f[_task].detach().cpu().numpy(), (observed_pred.mean[_task]).detach().cpu().numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x_f[_task].detach().cpu().squeeze().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy(), alpha=0.5)
# ax.set_ylim([-3, 3])
ax.legend(['Training Data', 'Test Data', 'Mean', '95% Confidence'], fontsize=16)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.tick_params(axis='both', which='minor', labelsize=16)
ax.set_ylabel('Passenger Volume (Normalized)', fontsize=16)
ax.set_xlabel('Hours (Zoomed to Test)', fontsize=16)
ax.set_xticks([])
plt.xlim([1250, 1680])
plt.tight_layout()
"""
Explanation: Plot Results
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
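# Hypothetical example (illustrative only -- replace with the real author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")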
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
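# Hypothetical example using one of the valid choices listed above (illustrative only):
# DOC.set_value("NPZD")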
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
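# Hypothetical example of a single entry (illustrative only; this property accepts 1.N values):
# DOC.set_value("Dissolved Inorganic Carbon (DIC)")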
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
ivannz/crossing_paper2017 | experiments/bellcore_traffic_data.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Bellcore LAN traffic
This notebook uses uncompressed data from here.
Namely the datasets: BC-pAug89 and BC-pOct89.
Description:
The files whose names end in TL are ASCII-format tracing data, consisting of
one 20-byte line per Ethernet packet arrival. Each line contains a floating-
point time stamp (representing the time in seconds since the start of a trace)
and an integer length (representing the Ethernet data length in bytes).
Although the times are expressed to 6 places after the decimal point, giving
the appearance of microsecond resolution, the hardware clock had an actual
resolution of 4 microseconds. Our testing of the entire monitor suggests that
jitter in the inner code loop and (much more seriously) bus contention limited
the actual accuracy to roughly 10 microseconds. The length field does not
include the Ethernet preamble, header, or CRC; however, the Ethernet protocol
forces all packets to have at least the minimum size of 64 bytes and at most
the maximum size of 1518 bytes. 99.5% of the encapsulated packets carried by
the Ethernet PDUs were IP. All traces were conducted on an Ethernet cable at
the Bellcore Morristown Research and Engineering facility, building MRE-2.
At that time, the Ethernet cable nicknamed the "purple cable" carried not
only a major portion of our Lab's traffic but also all traffic to and from
the internet and all of Bellcore. The records include all complete packets
(the monitor did not artificially "clip" traffic bursts), but do not include
any fragments or collisions. These samples are excerpts from approximately
300 million arrivals recorded; the complete trace records included Ethernet
status flags, the Ethernet source and destination, and the first 60 bytes of
each encapsulated packet (allowing identification of higher-level protocols,
IP source and destination fields, and so on).
End of explanation
"""
pAug89_t, pAug89_x = np.loadtxt("./BC-pAug89.TL.gz", unpack=True)
pOct89_t, pOct89_x = np.loadtxt("./BC-pOct89.TL.gz", unpack=True)
"""
Explanation: Load the data with numpy.
End of explanation
"""
drift = pAug89_x.mean()
T, X = pAug89_t.copy(), (pAug89_x - drift).cumsum()
# drift = pOct89_x.mean()
# T, X = pOct89_t.copy(), (pOct89_x - drift).cumsum()
print("%0.4f" % drift)
"""
Explanation: Estimate the mean packet size and produce a random walk without the drift.
End of explanation
"""
from crossing_tree import crossing_tree
## Set the base scale to the median
scale = np.median(np.abs(np.diff(X)))
origin = X[0]
## Build a crossing tree
xi, ti, offspring, Vnk, Znk, Wnk = crossing_tree(X, T, scale, origin=origin)
# Rebuild the tree
index = list([offspring[0]])
for index_ in offspring[1:]:
index.append(index[-1][index_])
Xnk = [xi.base[index_] for index_ in index]
Tnk = [ti.base[index_] for index_ in index]
"""
Explanation: Construct the crossing tree for the traffic data
End of explanation
"""
l = len(Tnk) - 2
levels = np.arange(l-4, l+1, dtype=np.int)
## Plot the sample path
fig = plt.figure(figsize=(6, 5))
ax = fig.add_subplot(111)
ax.set_xticks(Tnk[levels[0]], minor=True)
delta = 2 * scale * (1 << levels[0])
xm, xM = (Xnk[levels[0]] - origin).min() / delta, (Xnk[levels[0]] - origin).max() / delta
ax.set_yticks(origin + np.arange(xm-1, xM+2) * delta)
ax.plot(T, X, linestyle='-', color='gray', label='X(t)', alpha=0.5)
color=plt.cm.rainbow_r(np.linspace(0, 1, len(levels)))
for j, col_ in zip(levels, color):
ax.plot(Tnk[j], Xnk[j], '-s', color=col_, markersize=4, alpha=0.75)
ax.set_xlim(left=-50)
ax.grid(color='k', linestyle='-', alpha=0.15, zorder=-99)
"""
Explanation: Plot the crossing times for the last 4 levels of the tree.
End of explanation
"""
fig = plt.figure(figsize=(6, 5))
ax = fig.add_subplot(111)
ax.set_xticks(Tnk[levels[0]], minor=True)
colors = plt.cm.rainbow_r(np.linspace(0, 1, len(levels)))
for j, col_ in zip(levels, colors):
lht0, lht1 = Tnk[j], Tnk[j+1]
offs_ = offspring[j+1]
parent = np.repeat(np.arange(len(offs_) - 1), np.diff(offs_))
parent = np.r_[parent, np.repeat(len(offs_) - 1, len(lht0) - offs_[-1])]
p_ti = np.r_[np.repeat(np.nan, offs_[0]), lht1[parent]]
## Draw the line segments between two levels
delta = (1 << j)
ax.plot([p_ti, lht0], [len(lht0) * [2 * delta], len(lht0) * [delta]],
'-s', color=col_, markersize=2, lw=.5)
ax.grid(color='k', linestyle='-', alpha=0.05, zorder=-99)
ax.set_yscale("log", basey=2)
ax.set_ylim(0.9 * (1 << levels[0]), 1.1 * (1 << levels[-1] + 1))
ax.set_xlim(left=-50)
ax.set_ylabel(r"$\delta \times 2^k$")
"""
Explanation: Plot the crossing tree
End of explanation
"""
from crossing_tree import structural_statistics
scale = np.median(np.abs(np.diff(X)))
scale, Nn, Dnk, Cnkk, Vnde, Wnp, Wavgn, Wstdn = structural_statistics(X, T, scale, origin=X[0])
"""
Explanation: Get the structural statistics of the crossing tree for de-drifted traffic data.
End of explanation
"""
def offspring_hurst(Dmnk, levels, laplace=False):
# Get pooled frequencies
Dmj = Dmnk[:, levels].sum(axis=2, dtype=np.float)
# Compute the sum of the left-closed tails sums,
# and divide by the total number of offspring.
Mmj = 2 * Dmnk[:, levels, ::-1].cumsum(axis=-1).sum(axis=-1) / Dmj
Hmj = np.log(2) / np.log(Mmj)
levels = np.arange(Dmnk.shape[1], dtype=np.int)[levels]
return levels + 1, np.nanmean(Hmj, axis=0), np.nanstd(Hmj, axis=0)
"""
Explanation: Estimate the Hurst exponent based on the offspring distribution of the crossing tree.
End of explanation
"""
levels, Hj_avg, Hj_std = offspring_hurst(Dnk[np.newaxis], slice(0, None))
plt.plot(levels, Hj_avg)
"""
Explanation: Plot the Hurst exponent estimates across levels.
End of explanation
"""
from joblib import Parallel, delayed
from numpy.lib.stride_tricks import as_strided
from crossing_tree import collect_structural_statistics
def _dewindow(arr, width, stride, ravel=True):
n_steps = width // stride
padded_ = np.pad(arr, (n_steps - 1, n_steps - 1), mode="edge")
arr_ = as_strided(padded_, shape=(padded_.shape[0] - n_steps + 1, n_steps),
strides=(arr.strides[0], arr.strides[0]))
return as_strided(arr_.mean(axis=-1), shape=(len(arr_), stride),
strides=(arr.strides[0], 0)).ravel()
def _strided_window(arr, width, stride):
    # Note: use the `width` argument here (the original referenced a global `window`).
    n_steps = (arr.shape[0] - width - 1) // stride
    return as_strided(arr, shape=(1 + n_steps, width,),
                      strides=(stride * arr.strides[0], arr.strides[0],))
def rolling_tree(T, X, window=1 << 15, stride=1 << 10, common_scale=True,
n_jobs=1, verbose=0):
path_windows = zip(_strided_window(T, window, stride),
_strided_window(X, window, stride))
structural_statistics_ = delayed(structural_statistics, check_pickle=False)
if common_scale:
scale = np.median(np.abs(np.diff(X)))
# scale = np.diff(X).std()
trees_ = (structural_statistics_(xx, tt, scale, origin=xx[0])
for tt, xx in path_windows)
else:
trees_ = (structural_statistics_(xx, tt, scale=np.median(np.abs(np.diff(xx))),
origin=xx[0])
for tt, xx in path_windows)
# trees_ = (structural_statistics_(xx, tt, scale=np.diff(xx).std(), origin=xx[0])
# for tt, xx in path_windows)
par_ = Parallel(n_jobs=n_jobs, verbose=verbose, max_nbytes=None)
return collect_structural_statistics(par_(trees_))
"""
Explanation: $$ (\delta 2^n )^{-\frac{1}{H}} \mathbb{E} W^n = 1 + \mathcal{o}_P(1) \,, $$
$$ \log_2 \mathbb{E} W^n \sim \beta_0 + \beta (\log_2\delta + n) \,, $$
$$ \Delta \log_2 \mathbb{E} W^n \sim \beta \,. $$
Try to make a sliding estimate of the Hurst exponent with the tree.
An estimator that computes rolling crossing tree statistics for a sample path.
End of explanation
"""
from crossing_tree.processes import FractionalBrownianMotion
FBM = FractionalBrownianMotion(N=1 << 23, hurst=0.5, n_threads=4, random_state=1234)
FBM.start()
"""
Explanation: Test on the fractional brownian motion process.
End of explanation
"""
T, X = FBM.draw()
plt.plot(T, X)
"""
Explanation: Draw and plot a sample path of BM.
End of explanation
"""
scale = np.median(np.abs(np.diff(X)))
results = structural_statistics(X, T, scale=scale, origin=X[0])
scale, Nn, Dnk, Cnkk, Vnde, Wnp, Wavgn, Wstdn = results
"""
Explanation: Estimate the base scale and get the structural statistics of the path.
End of explanation
"""
log2ed_ = np.log2(Wavgn) - (np.log2(scale) + np.arange(Wavgn.shape[0], dtype=float)) / 0.5
plt.plot(1.0 / np.diff(np.log2(Wavgn)))
"""
Explanation: $ \log_2 \mathbb{E} W^n - \frac{1}{H} (n + \log_2 \delta) = f(H, d) \,. $
End of explanation
"""
T, X = FBM.draw()
window, stride = 1 << 15, 1 << 11
result_test = rolling_tree(T, X, window=window, stride=stride,
common_scale=False, n_jobs=-1, verbose=10)
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = result_test
Hmj = np.stack([offspring_hurst(Dnk[np.newaxis], slice(None, -4))[1] for Dnk in Dmnk])
hurst_ = np.nanmean(1.0 / np.diff(np.log2(Wavgmn[:, 2:-4]), axis=-1), axis=-1)
plt.plot(np.nanmean(Hmj, axis=-1), "-k", markersize=3)
plt.plot(hurst_, "-r", markersize=3)
try:
from l1tf import l1_filter
l1_hurst_ = l1_filter(hurst_, C=1e-2, relative=True)
len_ = (hurst_.shape[0] + window // stride - 1) * stride
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(121)
ax.plot(T, X)
# ax.plot(hurst_, "k", alpha=0.25, lw=2)
# ax.plot(l1_hurst_, "r", alpha=1.0)
ax = fig.add_subplot(122)
ax.plot(T[:len_], _dewindow(hurst_, window, stride), "k", alpha=0.25, lw=2)
ax.plot(T[:len_], _dewindow(l1_hurst_, window, stride), "r", alpha=1.0)
except ImportError:
print("Please install L1-trend filter python wrapper from https://github.com/ivannz/l1_tf")
pass
plt.plot(np.log2(Nmn) / (1 + np.arange(Nmn.shape[1])[np.newaxis, ::-1]))
"""
Explanation: Draw a fresh sample path, run the rolling estimator over sliding windows, and plot the resulting Hurst estimates.
End of explanation
"""
drift = pAug89_x.mean()
T, X = pAug89_t.copy(), (pAug89_x - drift).cumsum()
"""
Explanation: Compute for pAug89
End of explanation
"""
window, stride = 1 << 15, 1 << 11
result = rolling_tree(T, X, window=window, stride=stride,
common_scale=False, n_jobs=8, verbose=10)
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = result
"""
Explanation: Make a sliding crossing tree
End of explanation
"""
hurst_ = 1.0 / np.nanmean(np.diff(np.log2(Wavgmn[:, 2:-4]), axis=-1), axis=-1)
try:
from l1tf import l1_filter
l1_hurst_ = l1_filter(hurst_, C=1e-1)
len_ = (hurst_.shape[0] + window // stride - 1) * stride
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(121)
ax.plot(T, X)
# ax.plot(hurst_, "k", alpha=0.25, lw=2)
# ax.plot(l1_filter(hurst_, C=1e-1), "r", alpha=1.0)
ax = fig.add_subplot(122)
ax.plot(T[:len_], _dewindow(hurst_, window, stride), "k", alpha=0.25, lw=2)
ax.plot(T[:len_], _dewindow(l1_hurst_, window, stride), "r", alpha=1.0)
except ImportError:
print("Please install L1-trend filter python wrapper from https://github.com/ivannz/l1_tf")
pass
"""
Explanation: Use regression estimates of the Hurst exponent
End of explanation
"""
Hmj = np.stack([offspring_hurst(Dnk[np.newaxis], slice(0, -4))[1] for Dnk in Dmnk])
"""
Explanation: Now derive an estimate based on the heuristic scaling properties of the offspring distribution.
End of explanation
"""
# plt.plot(np.nanmean(Hmj, axis=-1), "-sk", markersize=3)
# plt.plot(hurst_, "-^k", markersize=3)
plt.plot(np.nanmean(Hmj, axis=-1), "k", alpha=0.5)
plt.plot(hurst_, "r", alpha=0.5)
plt.plot(np.nanmean(Hmj[:, 2:-6], axis=-1))
plt.plot(np.log2(Nmn) / (1 + np.arange(Nmn.shape[1])[np.newaxis, ::-1]))
"""
Explanation: Compare the offspring-based and regression-based Hurst estimates.
End of explanation
"""
drift = pOct89_x.mean()
T, X = pOct89_t.copy(), (pOct89_x - drift).cumsum()
plt.plot(T, X)
window, stride = 1 << 15, 1 << 11
result = rolling_tree(T, X, window=window, stride=stride,
common_scale=False, n_jobs=8, verbose=10)
scale_m, Nmn, Dmnk, Cmnkk, Vmnde, Wmnp, Wavgmn, Wstdmn = result
hurst_ = 1.0 / np.nanmean(np.diff(np.log2(Wavgmn[:, 2:-4]), axis=-1), axis=-1)
try:
from l1tf import l1_filter
l1_hurst_ = l1_filter(hurst_, C=1e-2, relative=True)
len_ = (hurst_.shape[0] + window // stride - 1) * stride
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(121)
ax.plot(T, X)
# ax.plot(hurst_, "k", alpha=0.25, lw=2)
# ax.plot(l1_hurst_, "r", alpha=1.0)
ax = fig.add_subplot(122)
ax.plot(T[:len_], _dewindow(hurst_, window, stride), "k", alpha=0.25, lw=2)
ax.plot(T[:len_], _dewindow(l1_hurst_, window, stride), "r", alpha=1.0)
except ImportError:
print("Please install L1-trend filter python wrapper from https://github.com/ivannz/l1_tf")
pass
"""
Explanation: Compute for pOct89
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/lattice/tutorials/shape_constraints.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice
"""
Explanation: Shape Constraints with TensorFlow Lattice
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/lattice/tutorials/shape_constraints"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/shape_constraints.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/shape_constraints.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/lattice/tutorials/shape_constraints.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This tutorial is an overview of the constraints and regularizers provided by the TensorFlow Lattice (TFL) library. Here we use TFL canned estimators on synthetic datasets, but note that everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Installing the TF Lattice package:
End of explanation
"""
import tensorflow as tf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
"""
Explanation: Importing the required packages:
End of explanation
"""
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
"""
Explanation: Default values used in this guide:
End of explanation
"""
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
"""
Explanation: Training Dataset for Restaurant Ratings
Imagine a simplified scenario where we want to determine whether or not users will click on a restaurant search result. The task is to predict the click-through rate (CTR) given the input features:
Average rating (avg_rating): a numeric feature with values in the range [1,5].
Number of reviews (num_reviews): a numeric feature with values capped at 200, which we use as a measure of how trendy the restaurant is.
Dollar rating (dollar_rating): a categorical feature with string values in the set {"D", "DD", "DDD", "DDDD"}.
Here we create a synthetic dataset where the true CTR is given by the formula: $$ CTR = 1 / (1 + \exp\{\mbox{b(dollar_rating)} - \mbox{avg_rating}\times \log(\mbox{num_reviews}) / 4\}) $$ where $b(\cdot)$ translates each dollar_rating to a baseline value: $$ \mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5. $$
This formula reflects typical user patterns, e.g. with everything else fixed, users prefer restaurants with higher star ratings, and "$$" restaurants receive more clicks than "$", followed by "$$$" and "$$$$".
End of explanation
"""
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
"""Generates contour plots for a list of (name, fn) functions."""
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
"""
Explanation: Let's take a look at the contour plot of this CTR function.
End of explanation
"""
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
"""
Explanation: Preparing Data
We now need to create our synthetic datasets. We start by generating a simulated dataset of restaurants and their features.
End of explanation
"""
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
density=True,
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
"""
Explanation: Let's generate the training, validation and testing datasets. When a restaurant is viewed in the search results, we can record the user's engagement (click or no click) as a sample point.
In practice, users often do not go through all search results. This means that users will likely only see restaurants already considered "good" by the ranking model currently in use. As a result, "good" restaurants are shown more frequently and are over-represented in the training dataset. When using more features, the training dataset can have large gaps in the "bad" parts of the feature space.
When the model is used for scoring, it is often evaluated on all relevant results with a more uniform distribution that is not well represented by the training dataset. A flexible and complicated model might fail in this case, because it overfits the over-represented data points and therefore lacks generalizability. We handle this issue by applying domain knowledge to add shape constraints that guide the model to make reasonable predictions when it cannot pick them up from the training dataset.
In this example, the training dataset mostly consists of user interactions with good and popular restaurants. The testing dataset has a uniform distribution to simulate the evaluation setting discussed above. Note that such a testing dataset will not be available in a real problem setting.
End of explanation
"""
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
"""
Explanation: Defining the input_fns used for training and evaluation:
End of explanation
"""
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
"""
Explanation: Fitting Gradient Boosted Trees
Let's start off with only two features: avg_rating and num_reviews.
We create a few auxiliary functions for plotting and calculating the validation and test metrics.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
gbt_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
n_batches_per_layer=1,
max_depth=2,
n_trees=50,
learning_rate=0.05,
config=tf.estimator.RunConfig(tf_random_seed=42),
)
gbt_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(gbt_estimator, "GBT")
"""
Explanation: We can fit a TensorFlow gradient boosted decision tree on the dataset:
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
"""
Explanation: Even though the model has captured the general shape of the true CTR and has decent validation metrics, it exhibits counter-intuitive behavior in several parts of the input space: the estimated CTR decreases as the average rating or the number of reviews increases. This is due to a lack of sample points in areas not well covered by the training dataset. The model simply has no way of deducing the correct behavior solely from the data.
To solve this problem, we enforce the shape constraint that the model must output values that are monotonically increasing with respect to both the average rating and the number of reviews. We will later see how to implement this in TFL.
Fitting a DNN
We can repeat the same steps with a DNN classifier. We observe a similar pattern: not having enough sample points with a small number of reviews results in nonsensical extrapolation. Note that even though the validation metric is better than the tree solution, the testing metric is much worse.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
"""
Explanation: Shape Constraints
TensorFlow Lattice (TFL) is focused on enforcing shape constraints to safeguard model behavior beyond the training data. These shape constraints are applied to TFL Keras layers. Their details can be found in our JMLR paper.
In this tutorial we use TF canned estimators to cover various shape constraints, but note that all of these steps can also be done with models created from TFL Keras layers.
As with any other TensorFlow estimator, TFL canned estimators use feature columns to define the input format and a training input_fn to pass in the data. Using TFL canned estimators also requires:
a model config: defining the model architecture and the per-feature shape constraints and regularizers.
a feature analysis input_fn: a TF input_fn passing data for TFL initialization.
For more detailed descriptions, please refer to the canned estimators tutorial or the API docs.
Monotonicity
We first address the monotonicity concerns by adding monotonicity shape constraints to both features.
To instruct TFL to enforce shape constraints, we specify the constraints in the feature configs. The following code shows how we can require the output to be monotonically increasing with respect to both num_reviews and avg_rating by setting monotonicity="increasing".
End of explanation
"""
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Using a CalibratedLatticeConfig creates a canned classifier that first applies a calibrator to each input (a piece-wise linear function for numeric features), followed by a lattice layer to non-linearly fuse the calibrated features. We can use tfl.visualization to visualize the model. In particular, the following plot shows the two trained calibrators included in the canned classifier.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
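# Small illustrative aside (an assumption-laden sketch, not part of the original tutorial): for a
# piecewise-linear calibrator, "concave" simply means the slopes of successive segments never increase.
import numpy as np  # helper import used only for this check
pwl_x = np.array([0.0, 1.0, 2.0, 3.0])     # made-up calibrator keypoint inputs
pwl_y = np.array([0.0, 0.9, 1.5, 1.8])     # made-up calibrator keypoint outputs
pwl_slopes = np.diff(pwl_y) / np.diff(pwl_x)
print(np.all(np.diff(pwl_slopes) <= 0))    # True -> concave, i.e. diminishing returns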
"""
Explanation: With the constraints added, the estimated CTR now always increases as the average rating improves or the number of reviews grows. This is achieved by making sure that the calibrators and the lattice are monotonic.
Diminishing Returns
Diminishing returns means that the marginal gain of increasing a certain feature value will decrease as the value grows. In our case we expect the num_reviews feature to follow this pattern, so we can configure its calibrator accordingly. Notice that we can decompose diminishing returns into two sufficient conditions:
the calibrator is monotonically increasing, and
the calibrator is concave.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Notice how the test metric improves by adding the concavity constraint. The prediction plot also better resembles the ground truth.
2D Shape Constraint: Trust
A 5-star rating for a restaurant with only one or two reviews is likely an unreliable rating (the restaurant might actually be bad), whereas a 4-star rating for a restaurant with hundreds of reviews is much more reliable (the restaurant is likely good in this case). We can see that the number of reviews of a restaurant affects how much trust we place in its average rating.
We can use TFL trust constraints to inform the model that a larger (or smaller) value of one feature indicates more reliance or trust in another feature. This is done by setting the reflects_trust_in configuration in the feature config.
End of explanation
"""
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
"""
Explanation: The following plot presents the trained lattice function. Due to the enforced trust constraint, we expect that larger values of calibrated num_reviews force a higher slope with respect to calibrated avg_rating, resulting in a larger span of the lattice output.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Smoothing Calibrators
Let us now take a look at the calibrator of avg_rating. Though it is monotonically increasing, the changes in its slope are abrupt and hard to interpret. That suggests we should consider smoothing this calibrator using a regularizer set up in regularizer_configs.
Here we apply a wrinkle regularizer to reduce changes in curvature. You can also use the laplacian regularizer to flatten the calibrator, and the hessian regularizer to make it more linear.
End of explanation
"""
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
"""
Explanation: The calibrators are now smooth, and the overall estimated CTR better matches the ground truth. This is reflected both in the testing metric and in the contour plots.
Partial Monotonicity for Categorical Calibration
So far we have been using only two of the numeric features in the model. Here we will add a third feature using a categorical calibration layer. Again we start by setting up helper functions for plotting and metric calculation.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
            # `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: To include the third feature, dollar_rating, recall that categorical features require a slightly different treatment in TFL, both as a feature column and as a feature config. Here we enforce the partial monotonicity constraint that, with all other inputs fixed, the output for "DD" restaurants should be larger than the output for "D" restaurants. This is done using the monotonicity setting in the feature config.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
            # `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: This categorical calibrator shows the preference of the model output: DD > D > DDD > DDDD, which is consistent with our setup. Notice that there is also a column for missing values. Although there are no missing features in our training and testing data, the model provides an imputation for a missing value should one occur during downstream model serving.
Here we also plot the predicted CTR of this model conditioned on dollar_rating. Notice that all the constraints we require are satisfied in each of the slices.
Output Calibration
For all the TFL models we have trained so far, the lattice layer (indicated as "Lattice" in the model graph) directly outputs the model prediction. Sometimes we are not sure whether the lattice output should be rescaled to emit the model output:
the features are $log$ counts while the labels are counts, or
the lattice is configured to have very few vertices but the label distribution is relatively complicated.
In those cases we can add another calibrator between the lattice output and the model output to increase model flexibility. Here we add a calibrator layer with 5 keypoints to the model we just built. We also add a regularizer for the output calibrator to keep the function smooth.
End of explanation
"""
|
amitkaps/machine-learning | time_series/3-Refine.ipynb | mit | # Import the two library we need, which is Pandas and Numpy
import pandas as pd
import numpy as np
# Read the csv file of Month Wise Market Arrival data that has been scraped.
df = pd.read_csv('MonthWiseMarketArrivals.csv')
df.head()
df.tail()
"""
Explanation: 2. Refine the Data
"Data is messy"
We will be performing the following operation on our Onion price to refine it
- Remove e.g. remove redundant data from the data frame
- Derive e.g. State and City from the market field
- Parse e.g. extract date from year and month column
Other stuff you may need to do to refine are...
- Missing e.g. Check for missing or incomplete data
- Quality e.g. Check for duplicates, accuracy, unusual data
- Convert e.g. free text to coded value
- Calculate e.g. percentages, proportion
- Merge e.g. first and surname for full name
- Aggregate e.g. rollup by year, cluster by area
- Filter e.g. exclude based on location
- Sample e.g. extract a representative data
- Summary e.g. show summary stats like mean
End of explanation
"""
df.dtypes
# Delete the last row from the dataframe
df.tail(1)
# Delete a row from the dataframe
df.drop(df.tail(1).index, inplace = True)
df.head()
df.tail()
df.dtypes
df.iloc[:,4:7].head()
df.iloc[:,2:7] = df.iloc[:,2:7].astype(int)
df.dtypes
df.head()
df.describe()
"""
Explanation: Remove the redundant data
End of explanation
"""
df.market.value_counts().head()
df['state'] = df.market.str.split('(').str[-1]
df.head()
df['city'] = df.market.str.split('(').str[0]
df.head()
df.state.unique()
df['state'] = df.state.str.split(')').str[0]
df.state.unique()
dfState = df.groupby(['state', 'market'], as_index=False).count()
dfState.market.unique()
state_now = ['PB', 'UP', 'GUJ', 'MS', 'RAJ', 'BANGALORE', 'KNT', 'BHOPAL', 'OR',
'BHR', 'WB', 'CHANDIGARH', 'CHENNAI', 'bellary', 'podisu', 'UTT',
'DELHI', 'MP', 'TN', 'Podis', 'GUWAHATI', 'HYDERABAD', 'JAIPUR',
'WHITE', 'JAMMU', 'HR', 'KOLKATA', 'AP', 'LUCKNOW', 'MUMBAI',
'NAGPUR', 'KER', 'PATNA', 'CHGARH', 'JH', 'SHIMLA', 'SRINAGAR',
'TRIVENDRUM']
state_new =['PB', 'UP', 'GUJ', 'MS', 'RAJ', 'KNT', 'KNT', 'MP', 'OR',
'BHR', 'WB', 'CH', 'TN', 'KNT', 'TN', 'UP',
'DEL', 'MP', 'TN', 'TN', 'ASM', 'AP', 'RAJ',
'MS', 'JK', 'HR', 'WB', 'AP', 'UP', 'MS',
'MS', 'KER', 'BHR', 'HR', 'JH', 'HP', 'JK',
'KEL']
df.state = df.state.replace(state_now, state_new)
df.state.unique()
"""
Explanation: Extracting the states from market names
End of explanation
"""
df.head()
df.index
pd.to_datetime('January 2012')
df['date'] = df['month'] + '-' + df['year'].map(str)
??map
df.head()
index = pd.to_datetime(df.date)
df.index = pd.PeriodIndex(df.date, freq='M')
df.columns
df.index
df.head()
df.to_csv('MonthWiseMarketArrivals_Clean.csv', index = False)
"""
Explanation: Getting the Dates
End of explanation
"""
|
probml/pyprobml | deprecated/arhmm_example.ipynb | mit | !pip install git+git://github.com/lindermanlab/ssm-jax-refactor.git
import ssm
import copy
import jax.numpy as np
import jax.random as jr
from tensorflow_probability.substrates import jax as tfp
from ssm.distributions.linreg import GaussianLinearRegression
from ssm.arhmm import GaussianARHMM
from ssm.utils import find_permutation, random_rotation
from ssm.plots import gradient_cmap # , white_to_color_cmap
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("white")
sns.set_context("talk")
color_names = ["windows blue", "red", "amber", "faded green", "dusty purple", "orange", "brown", "pink"]
colors = sns.xkcd_palette(color_names)
cmap = gradient_cmap(colors)
# Make a transition matrix
num_states = 5
transition_probs = (np.arange(num_states) ** 10).astype(float)
transition_probs /= transition_probs.sum()
transition_matrix = np.zeros((num_states, num_states))
for k, p in enumerate(transition_probs[::-1]):
transition_matrix += np.roll(p * np.eye(num_states), k, axis=1)
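# Sanity check (illustrative addition): each row of the transition matrix is a probability
# distribution over next states, so every row should sum to 1.
print(transition_matrix.sum(axis=1))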
plt.imshow(transition_matrix, vmin=0, vmax=1, cmap="Greys")
plt.xlabel("next state")
plt.ylabel("current state")
plt.title("transition matrix")
plt.colorbar()
plt.savefig("arhmm-transmat.pdf")
# Make observation distributions
data_dim = 2
num_lags = 1
keys = jr.split(jr.PRNGKey(0), num_states)
angles = np.linspace(0, 2 * np.pi, num_states, endpoint=False)
theta = np.pi / 25 # rotational frequency
weights = np.array([0.8 * random_rotation(key, data_dim, theta=theta) for key in keys])
biases = np.column_stack([np.cos(angles), np.sin(angles), np.zeros((num_states, data_dim - 2))])
covariances = np.tile(0.001 * np.eye(data_dim), (num_states, 1, 1))
# Compute the stationary points
stationary_points = np.linalg.solve(np.eye(data_dim) - weights, biases)
print(theta / (2 * np.pi) * 360)
print(360 / 5)
"""
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/arhmm_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Autoregressive (AR) HMM Demo
Modified from
https://github.com/lindermanlab/ssm-jax-refactor/blob/main/notebooks/arhmm-example.ipynb
This notebook illustrates the use of the auto_regression observation model.
Let $x_t$ denote the observation at time $t$. Let $z_t$ denote the corresponding discrete latent state.
The autoregressive hidden Markov model has the following likelihood,
$$
\begin{align}
x_t \mid x_{t-1}, z_t &\sim
\mathcal{N}\left(A_{z_t} x_{t-1} + b_{z_t}, Q_{z_t} \right).
\end{align}
$$
(Technically, higher-order autoregressive processes with extra linear terms from inputs are also implemented.)
End of explanation
"""
if data_dim == 2:
lim = 5
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, num_states, figsize=(3 * num_states, 6))
for k in range(num_states):
A, b = weights[k], biases[k]
dxydt_m = xy.dot(A.T) + b - xy
axs[k].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[k % len(colors)])
axs[k].set_xlabel("$y_1$")
# axs[k].set_xticks([])
if k == 0:
axs[k].set_ylabel("$y_2$")
# axs[k].set_yticks([])
axs[k].set_aspect("equal")
plt.tight_layout()
plt.savefig("arhmm-flow-matrices.pdf")
colors
print(stationary_points)
"""
Explanation: Plot dynamics functions
End of explanation
"""
# Make an Autoregressive (AR) HMM
true_initial_distribution = tfp.distributions.Categorical(logits=np.zeros(num_states))
true_transition_distribution = tfp.distributions.Categorical(probs=transition_matrix)
true_arhmm = GaussianARHMM(
num_states,
transition_matrix=transition_matrix,
emission_weights=weights,
emission_biases=biases,
emission_covariances=covariances,
)
time_bins = 10000
true_states, data = true_arhmm.sample(jr.PRNGKey(0), time_bins)
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*data[true_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3)
plt.plot(*data[:1000].T, "-k", lw=0.5, alpha=0.2)
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d.pdf")
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
ndx = true_states == k
data_k = data[ndx]
T = 12
data_k = data_k[:T, :]
plt.plot(data_k[:, 0], data_k[:, 1], "o", color=colors[k], alpha=0.75, markersize=3)
for t in range(T):
plt.text(data_k[t, 0], data_k[t, 1], t, color=colors[k], fontsize=12)
# plt.plot(*data[:1000].T, '-k', lw=0.5, alpha=0.2)
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d-temporal.pdf")
print(biases)
print(stationary_points)
colors
"""
Explanation: Sample data from the ARHMM
End of explanation
"""
lim
# Plot the data and the smoothed data
plot_slice = (0, 200)
lim = 1.05 * abs(data).max()
plt.figure(figsize=(8, 6))
plt.imshow(
true_states[None, :],
aspect="auto",
cmap=cmap,
vmin=0,
vmax=len(colors) - 1,
extent=(0, time_bins, -lim, (data_dim) * lim),
)
Ey = np.array(stationary_points)[true_states]
for d in range(data_dim):
plt.plot(data[:, d] + lim * d, "-k")
plt.plot(Ey[:, d] + lim * d, ":k")
plt.xlim(plot_slice)
plt.xlabel("time")
# plt.yticks(lim * np.arange(data_dim), ["$y_{{{}}}$".format(d+1) for d in range(data_dim)])
plt.ylabel("observations")
plt.tight_layout()
plt.savefig("arhmm-samples-1d.pdf")
data.shape
data[:10, :]
"""
Explanation: Below, we visualize each component of of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the the corresponding AR state while the solid lines are the actual observations sampled from the HMM.
End of explanation
"""
# Now fit an HMM to the data
key1, key2 = jr.split(jr.PRNGKey(0), 2)
test_num_states = num_states
initial_distribution = tfp.distributions.Categorical(logits=np.zeros(test_num_states))
transition_distribution = tfp.distributions.Categorical(logits=np.zeros((test_num_states, test_num_states)))
emission_distribution = GaussianLinearRegression(
weights=np.tile(0.99 * np.eye(data_dim), (test_num_states, 1, 1)),
bias=0.01 * jr.normal(key2, (test_num_states, data_dim)),
scale_tril=np.tile(np.eye(data_dim), (test_num_states, 1, 1)),
)
arhmm = GaussianARHMM(test_num_states, data_dim, num_lags, seed=jr.PRNGKey(0))
lps, arhmm, posterior = arhmm.fit(data, method="em")
# Plot the log likelihoods against the true likelihood, for comparison
true_lp = true_arhmm.marginal_likelihood(data)
plt.plot(lps, label="EM")
plt.plot(true_lp * np.ones(len(lps)), ":k", label="True")
plt.xlabel("EM Iteration")
plt.ylabel("Log Probability")
plt.legend(loc="lower right")
plt.show()
# # Find a permutation of the states that best matches the true and inferred states
# most_likely_states = posterior.most_likely_states()
# arhmm.permute(find_permutation(true_states[num_lags:], most_likely_states))
# posterior.update()
# most_likely_states = posterior.most_likely_states()
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(2, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([true_arhmm, arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[i, j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])
axs[i, j].set_xlabel("$x_1$")
axs[i, j].set_xticks([])
if j == 0:
axs[i, j].set_ylabel("$x_2$")
axs[i, j].set_yticks([])
axs[i, j].set_aspect("equal")
plt.tight_layout()
plt.savefig("argmm-flow-matrices-true-and-estimated.pdf")
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])
axs[j].set_xlabel("$y_1$")
axs[j].set_xticks([])
if j == 0:
axs[j].set_ylabel("$y_2$")
axs[j].set_yticks([])
axs[j].set_aspect("equal")
plt.tight_layout()
plt.savefig("arhmm-flow-matrices-estimated.pdf")
# Plot the true and inferred discrete states
plot_slice = (0, 1000)
plt.figure(figsize=(8, 4))
plt.subplot(211)
plt.imshow(true_states[None, num_lags:], aspect="auto", interpolation="none", cmap=cmap, vmin=0, vmax=len(colors) - 1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{true}}$")
plt.yticks([])
plt.subplot(212)
# plt.imshow(most_likely_states[None,: :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.imshow(posterior.expected_states[0].T, aspect="auto", interpolation="none", cmap="Greys", vmin=0, vmax=1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{inferred}}$")
plt.yticks([])
plt.xlabel("time")
plt.tight_layout()
plt.savefig("arhmm-state-est.pdf")
# Sample the fitted model
sampled_states, sampled_data = arhmm.sample(jr.PRNGKey(0), time_bins)
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*sampled_data[sampled_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3)
# plt.plot(*sampled_data.T, '-k', lw=0.5, alpha=0.2)
plt.plot(*sampled_data[:1000].T, "-k", lw=0.5, alpha=0.2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d-estimated.pdf")
"""
Explanation: Fit an ARHMM
End of explanation
"""
|
tyarkoni/pliers | examples/Quickstart.ipynb | bsd-3-clause | from pliers.extractors import FaceRecognitionFaceLocationsExtractor
# A picture of Barack Obama
image = join(get_test_data_path(), 'image', 'obama.jpg')
# Initialize Extractor
ext = FaceRecognitionFaceLocationsExtractor()
# Apply Extractor to image
result = ext.transform(image)
result.to_df()
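# Illustrative follow-up (assumes the face_locations column holds the (top, right, bottom, left)
# tuple described in this example; hypothetical, not part of the original quickstart):
top, right, bottom, left = result.to_df()['face_locations'].iloc[0]
print("face box width x height:", right - left, "x", bottom - top)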
"""
Explanation: Pliers Quickstart
This notebook contains a few examples that demonstrate how to extract various kinds of features with pliers. We start with very simple examples, and gradually scale up in complexity.
Face detection
This first example uses the face_recognition package's location extraction method to detect the location of Barack Obama's face within a single image. The tools used to do this are completely local (i.e., the image isn't sent to an external API).
We output the result as a pandas DataFrame; the 'face_locations' column contains the coordinates of the bounding box in CSS format (i.e., top, right, bottom, and left edges).
End of explanation
"""
from pliers.extractors import FaceRecognitionFaceLocationsExtractor, merge_results
images = ['apple.jpg', 'obama.jpg', 'thai_people.jpg']
images = [join(get_test_data_path(), 'image', img) for img in images]
ext = FaceRecognitionFaceLocationsExtractor()
results = ext.transform(images)
df = merge_results(results)
df
"""
Explanation: Face detection with multiple inputs
What if we want to run the face detector on multiple images? Naively, we could of course just loop over input images and apply the Extractor to each one. But pliers makes this even easier for us, by natively accepting iterables as inputs. The following code is almost identical to the above snippet. The only notable difference is that, because the result we get back is now also a list (because the features extracted from each image are stored separately), we need to explicitly combine the results using the merge_results utility.
End of explanation
"""
from pliers.extractors import GoogleVisionAPIFaceExtractor
ext = GoogleVisionAPIFaceExtractor()
image = join(get_test_data_path(), 'image', 'obama.jpg')
result = ext.transform(image)
result.to_df(format='long', timing=False, object_id=False)
"""
Explanation: Note how the merged pandas DataFrame contains 5 rows, even though there were only 3 input images. The reason is that there are 5 detected faces across the inputs (0 in the first image, 1 in the second, and 4 in the third). You can discern the original sources from the stim_name and source_file columns.
Face detection using a remote API
The above examples use an entirely local package (face_recognition) for feature extraction. In this next example, we use the Google Cloud Vision API to extract various face-related attributes from an image of Barack Obama. The syntax is identical to the first example, save for the use of the GoogleVisionAPIFaceExtractor instead of the FaceRecognitionFaceLocationsExtractor. Note, however, that successful execution of this code requires you to have a GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to your Google credentials JSON file. See the documentation for more details.
End of explanation
"""
from pliers.stimuli import TextStim, ComplexTextStim
from pliers.extractors import VADERSentimentExtractor, merge_results
raw = """We're not claiming that VADER is a very good sentiment analysis tool.
Sentiment analysis is a really, really difficult problem. But just to make a
point, here are some clearly valenced words: disgusting, wonderful, poop,
sunshine, smile."""
# First example: we treat all text as part of a single token
text = TextStim(text=raw)
ext = VADERSentimentExtractor()
results = ext.transform(text)
results.to_df()
"""
Explanation: Notice that the output in this case contains many more features. That's because the Google face recognition service gives us back a lot more information than just the location of the face within the image. Also, the example illustrates our ability to control the format of the output, by returning the data in "long" format, and suppressing output of columns that are uninformative in this context.
Sentiment analysis on text
Here we use the VADER sentiment analyzer (Hutto & Gilbert, 2014) implemented in the nltk package to extract sentiment for (a) a coherent block of text, and (b) each word in the text separately. This example also introduces the Stim hierarchy of objects explicitly, whereas the initialization of Stim objects was implicit in the previous examples.
Treat text as a single block
End of explanation
"""
# Second example: we construct a ComplexTextStim, which will
# cause each word to be represented as a separate TextStim.
text = ComplexTextStim(text=raw)
ext = VADERSentimentExtractor()
results = ext.transform(text)
# Because results is a list of ExtractorResult objects
# (one per word), we need to merge the results explicitly.
df = merge_results(results, object_id=False)
df.head(10)
"""
Explanation: Analyze each word individually
End of explanation
"""
from pliers.extractors import ChromaSTFTExtractor
audio = join(get_test_data_path(), 'audio', 'barber.wav')
# Audio is sampled at 11KHz; let's compute power in 1 sec bins
ext = ChromaSTFTExtractor(hop_length=11025)
result = ext.transform(audio).to_df()
result.head(10)
# And a plot of the chromagram...
plt.imshow(result.iloc[:, 4:].values.T, aspect='auto')
"""
Explanation: Extract chromagram from an audio clip
We have an audio clip, and we'd like to compute its chromagram (i.e., to extract the normalized energy in each of the 12 pitch classes). This is trivial thanks to pliers' support for the librosa package, which contains all kinds of useful functions for spectral feature extraction.
End of explanation
"""
audio = join(get_test_data_path(), 'audio', 'homer.wav')
ext = VADERSentimentExtractor()
result = ext.transform(audio)
df = merge_results(result, object_id=False)
df
"""
Explanation: Sentiment analysis on speech transcribed from audio
So far all of our examples involve the application of a feature extractor to an input of the expected modality (e.g., a text sentiment analyzer applied to text, a face recognizer applied to an image, etc.). But we often want to extract features that require us to first convert our input to a different modality. Let's see how pliers handles this kind of situation.
Say we have an audio clip. We want to run sentiment analysis on the audio. This requires us to first transcribe any speech contained in the audio. As it turns out, we don't have to do anything special here; we can just feed an audio clip directly to an Extractor class that expects a text input (e.g., the VADER sentiment analyzer we used earlier). How? Magic! Pliers is smart enough to implicitly convert the audio clip to a ComplexTextStim internally. By default, it does this using IBM's Watson speech transcription API. Which means you'll need to make sure your API key is set up properly in order for the code below to work. (But if you'd rather use, say, Google's Cloud Speech API, you could easily configure pliers to make that the default for audio-to-text conversion.)
End of explanation
"""
from pliers.filters import FrameSamplingFilter
from pliers.extractors import ClarifaiAPIImageExtractor, merge_results
video = join(get_test_data_path(), 'video', 'small.mp4')
# Sample 2 frames per second
sampler = FrameSamplingFilter(hertz=2)
frames = sampler.transform(video)
ext = ClarifaiAPIImageExtractor()
results = ext.transform(frames)
df = merge_results(results)
df
"""
Explanation: Object recognition on selectively sampled video frames
A common scenario when analyzing video is to want to apply some kind of feature extraction tool to individual video frames (i.e., still images). Often, there's little to be gained by analyzing every single frame, so we want to sample frames with some specified frequency. The following example illustrates how easily this can be accomplished in pliers. It also demonstrates the concept of chaining multiple Transformer objects. We first convert a video to a series of images, and then apply an object-detection Extractor to each image.
Note, as with other examples above, that the ClarifaiAPIImageExtractor wraps the Clarifai object recognition API, so you'll need to have an API key set up appropriately (if you don't have an API key, and don't want to set one up, you can replace ClarifaiAPIExtractor with TensorFlowInceptionV3Extractor to get similar, though not quite as accurate, results).
End of explanation
"""
from pliers.tests.utils import get_test_data_path
from os.path import join
from pliers.filters import FrameSamplingFilter
from pliers.converters import GoogleSpeechAPIConverter
from pliers.extractors import (ClarifaiAPIImageExtractor, GoogleVisionAPIFaceExtractor,
ComplexTextExtractor, PredefinedDictionaryExtractor,
STFTAudioExtractor, VADERSentimentExtractor,
merge_results)
video = join(get_test_data_path(), 'video', 'obama_speech.mp4')
# Store all the returned features in a single list (nested lists
# are fine, the merge_results function will flatten everything)
features = []
# Sample video frames and apply the image-based extractors
sampler = FrameSamplingFilter(every=10)
frames = sampler.transform(video)
obj_ext = ClarifaiAPIImageExtractor()
obj_features = obj_ext.transform(frames)
features.append(obj_features)
face_ext = GoogleVisionAPIFaceExtractor()
face_features = face_ext.transform(frames)
features.append(face_features)
# Power in speech frequencies
stft_ext = STFTAudioExtractor(freq_bins=[(100, 300)])
speech_features = stft_ext.transform(video)
features.append(speech_features)
# Explicitly transcribe the video--we could also skip this step
# and it would be done implicitly, but this way we can specify
# that we want to use the Google Cloud Speech API rather than
# the package default (IBM Watson)
text_conv = GoogleSpeechAPIConverter()
text = text_conv.transform(video)
# Text-based features
text_ext = ComplexTextExtractor()
text_features = text_ext.transform(text)
features.append(text_features)
dict_ext = PredefinedDictionaryExtractor(
variables=['affect/V.Mean.Sum', 'subtlexusfrequency/Lg10WF'])
norm_features = dict_ext.transform(text)
features.append(norm_features)
sent_ext = VADERSentimentExtractor()
sent_features = sent_ext.transform(text)
features.append(sent_features)
# Ask for data in 'long' format, and code extractor name as a separate
# column instead of prepending it to feature names.
df = merge_results(features, format='long', extractor_names='column')
# Output rows in a sensible order
df.sort_values(['extractor', 'feature', 'onset', 'duration', 'order']).head(10)
"""
Explanation: The resulting data frame has 41 columns (!), most of which are individual object labels like 'lego', 'toy', etc., selected for us by the Clarifai API on the basis of the content detected in the video (we could have also forced the API to return values for specific labels).
Multiple extractors
So far we've only used a single Extractor at a time to extract information from our inputs. Now we'll start to get a little more ambitious. Let's say we have a video that we want to extract lots of different features from--in multiple modalities. Specifically, we want to extract all of the following:
Object recognition and face detection applied to every 10th frame of the video;
A second-by-second estimate of spectral power in the speech frequency band;
A word-by-word speech transcript;
Estimates of several lexical properties (e.g., word length, written word frequency, etc.) for every word in the transcript;
Sentiment analysis applied to the entire transcript.
We've already seen some of these features extracted individually, but now we're going to extract all of them at once. As it turns out, the code looks almost exactly like a concatenated version of several of our examples above.
End of explanation
"""
from pliers.tests.utils import get_test_data_path
from os.path import join
from pliers.graph import Graph
from pliers.filters import FrameSamplingFilter
from pliers.extractors import (PredefinedDictionaryExtractor, STFTAudioExtractor,
merge_results)
video = join(get_test_data_path(), 'video', 'obama_speech.mp4')
# Define nodes
nodes = [
(FrameSamplingFilter(every=10),
['ClarifaiAPIImageExtractor', 'GoogleVisionAPIFaceExtractor']),
(STFTAudioExtractor(freq_bins=[(100, 300)])),
('GoogleSpeechAPIConverter',
['ComplexTextExtractor',
PredefinedDictionaryExtractor(['affect/V.Mean.Sum',
'subtlexusfrequency/Lg10WF']),
'VADERSentimentExtractor'])
]
# Initialize and execute Graph
g = Graph(nodes)
# Arguments to merge_results can be passed in here
df = g.transform(video, format='long', extractor_names='column')
# Output rows in a sensible order
df.sort_values(['extractor', 'feature', 'onset', 'duration', 'order']).head(10)
"""
Explanation: The resulting pandas DataFrame is quite large; even for our 9-second video, we get back over 3,000 rows! Importantly, though, the DataFrame contains all kinds of metadata that makes it easy to filter and sort the results in whatever way we might want to (e.g., we can filter on the extractor, stim class, onset or duration, etc.).
Multiple extractors with a Graph
The above code listing is already pretty terse, and has the advantage of being explicit about every step. But if it's brevity we're after, pliers is happy to oblige us. The package includes a Graph abstraction that allows us to load an arbitrary number of Transformers into a graph, and execute them all in one shot. The code below is functionally identical to the last example, but only about a third of the length. It also requires fewer imports, since Transformer objects that we don't need to initialize with custom arguments can be passed to the Graph as strings.
The upshot of all this is that, in just a few lines of Python code, we're able to extract a broad range of multimodal features from video, image, audio or text inputs, using state-of-the-art tools and services!
End of explanation
"""
|
gwu-libraries/notebooks | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | mit | !head -n5 tweets.json | jq -c '[.id_str, .text]'
"""
Explanation: Recipes for processing Twitter data with jq
This notebook is a companion to Getting Started Working with Twitter Data Using jq. It focuses on recipes that the Social Feed Manager team has used when preparing datasets of tweets for researchers.
We will continue to add additional recipes to this notebook. If you have any suggestions, please contact us.
This notebook requires at least jq 1.5. Note that only earlier versions may be available from your package manager; manual installation may be necessary.
These recipes can be used with any data source that outputs tweets as line-oriented JSON. Within the context of SFM, this is usually the output of twitter_rest_warc_iter.py or twitter_stream_warc_iter.py within a processing container. Alternatively, Twarc is a commandline tool for retrieving data from the Twitter API that outputs tweets as line-oriented JSON.
For the purposes of this notebook, we will use a line-oriented JSON file that was created using Twarc. It contains the user timeline of @SocialFeedMgr. The command used to produce this file was twarc.py --timeline socialfeedmgr > tweets.json.
For an explanation of the fields in a tweet see the Tweet Field Guide. For other helpful tweet processing utilities, see twarc utils.
For the sake of brevity, some of the examples may only output a subset of the tweets fields and/or a subset of the tweets contained in tweets.json. The following example outputs the tweet id and text of all of the first 5 tweets.
End of explanation
"""
!head -n5 tweets.json | jq -c '[.created_at, .created_at | strptime("%A %B %d %T %z %Y") | todate]'
"""
Explanation: Dates
For both filtering and output, it is often necessary to parse and/or normalize the created_at date. The following shows the original created_at date and the date as an ISO 8601 date.
End of explanation
"""
!cat tweets.json | jq -c 'select(.text | contains("blog")) | [.id_str, .text]'
!cat tweets.json | jq -c 'select(.text | contains("BLOG")) | [.id_str, .text]'
"""
Explanation: Filtering
Filtering text
Case sensitive
End of explanation
"""
!cat tweets.json | jq -c 'select(.text | test("BLog"; "i")) | [.id_str, .text]'
"""
Explanation: Case insensitive
To ignore case, use a regular expression filter with the case-insensitive flag.
End of explanation
"""
!cat tweets.json | jq -c 'select(.text | test("BLog|twarc"; "i")) | [.id_str, .text]'
"""
Explanation: Filtering on multiple terms (OR)
End of explanation
"""
!cat tweets.json | jq -c 'select((.text | test("BLog"; "i")) and (.text | test("twitter"; "i"))) | [.id_str, .text]'
"""
Explanation: Filtering on multiple terms (AND)
End of explanation
"""
!cat tweets.json | jq -c 'select((.created_at | strptime("%A %B %d %T %z %Y") | mktime) > ("2016-11-05T00:00:00Z" | fromdateiso8601)) | [.id_str, .created_at, (.created_at | strptime("%A %B %d %T %z %Y") | todate)]'
"""
Explanation: Filter dates
The following shows tweets created after November 5, 2016.
End of explanation
"""
!cat tweets.json | jq -c 'select(has("retweeted_status")) | [.id_str, .retweeted_status.id]'
"""
Explanation: Is retweet
End of explanation
"""
!cat tweets.json | jq -c 'select(has("quoted_status")) | [.id_str, .quoted_status.id]'
"""
Explanation: Is quote
End of explanation
"""
!head -n5 tweets.json | jq -r '[(.created_at | strptime("%A %B %d %T %z %Y") | todate), .id_str, .user.screen_name, .user.followers_count, .user.friends_count, .retweet_count, .favorite_count, .in_reply_to_screen_name, "http://twitter.com/" + .user.screen_name + "/status/" + .id_str, (.text | gsub("\n";" ")), has("retweeted_status"), has("quoted_status")] | @csv'
"""
Explanation: Output
To write output to a file use > <filename>. For example: cat tweets.json | jq -r '.id_str' > tweet_ids.txt
CSV
Following is a CSV output that has fields similar to the CSV output produced by SFM's export functionality.
Note that it uses the -r flag for jq instead of the -c flag.
Also note that it is necessary to remove line breaks from the tweet text to prevent it from breaking the CSV. This is done with (.text | gsub("\n";" ")).
End of explanation
"""
!echo "[]" | jq -r '["created_at","twitter_id","screen_name","followers_count","friends_count","retweet_count","favorite_count","in_reply_to_screen_name","twitter_url","text","is_retweet","is_quote"] | @csv'
"""
Explanation: Header row
The header row should be written to the output file with > before appending the CSV with >>.
End of explanation
"""
!head -n5 tweets.json | jq -r '.id_str'
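# A quick, hypothetical illustration of why .id_str is preferred over .id: 64-bit tweet ids lose
# precision if they ever pass through a double-precision float (the id below is made up).
big_id = 1127726745756332033
print(big_id == int(float(big_id)))  # False -- the low-order digits are lost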
"""
Explanation: Splitting files
Excel can load CSV files with over a million rows. However, for practical purposes a much smaller number is recommended.
The following uses the split command to split the CSV output into multiple files. Note that the flags accepted may be different in your environment.
cat tweets.json | jq -r '[.id_str, (.text | gsub("\n";" "))] | @csv' | split --lines=5 -d --additional-suffix=.csv - tweets
ls *.csv
tweets00.csv tweets01.csv tweets02.csv tweets03.csv tweets04.csv
tweets05.csv tweets06.csv tweets07.csv tweets08.csv tweets09.csv
--lines=5 sets the number of lines to include in each file.
--additional-suffix=.csv set the file extension.
tweets is the base name for each file.
Tweet ids
When outputting tweet ids, .id_str should be used instead of .id. See Ed Summer's blog post for an explanation.
End of explanation
"""
|
BrainIntensive/OnlineBrainIntensive | resources/matplotlib/Examples/3dplots.ipynb | mit | %load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
"""
Explanation: Sebastian Raschka
back to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery
End of explanation
"""
%matplotlib inline
"""
Explanation: <font size="1.5em">More info about the %watermark extension</font>
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from matplotlib import pyplot as plt
# Generate some 3D sample data
mu_vec1 = np.array([0,0,0]) # mean vector
cov_mat1 = np.array([[1,0,0],[0,1,0],[0,0,1]]) # covariance matrix
class1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20)
class2_sample = np.random.multivariate_normal(mu_vec1 + 1, cov_mat1, 20)
class3_sample = np.random.multivariate_normal(mu_vec1 + 2, cov_mat1, 20)
# class1_sample.shape -> (20, 3), 20 rows, 3 columns
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(class1_sample[:,0], class1_sample[:,1], class1_sample[:,2],
marker='x', color='blue', s=40, label='class 1')
ax.scatter(class2_sample[:,0], class2_sample[:,1], class2_sample[:,2],
marker='o', color='green', s=40, label='class 2')
ax.scatter(class3_sample[:,0], class3_sample[:,1], class3_sample[:,2],
marker='^', color='red', s=40, label='class 3')
ax.set_xlabel('variable X')
ax.set_ylabel('variable Y')
ax.set_zlabel('variable Z')
plt.title('3D Scatter Plot')
plt.show()
"""
Explanation: <br>
<br>
3D Plots in matplotlib
Sections
3D scatter plot
3D scatter plot with eigenvectors
3D cube
Multivariate Gaussian distribution with colored surface
Multivariate Gaussian distribution as mesh grid
<br>
<br>
3D scatter plot
[back to top]
End of explanation
"""
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
# Generate some example data
mu_vec1 = np.array([0,0,0])
cov_mat1 = np.array([[1,0,0],[0,1,0],[0,0,1]])
class1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20)
mu_vec2 = np.array([1,1,1])
cov_mat2 = np.array([[1,0,0],[0,1,0],[0,0,1]])
class2_sample = np.random.multivariate_normal(mu_vec2, cov_mat2, 20)
# concatenate data for PCA
samples = np.concatenate((class1_sample, class2_sample), axis=0)
# mean values
mean_x = np.mean(samples[:,0])
mean_y = np.mean(samples[:,1])
mean_z = np.mean(samples[:,2])
#eigenvectors and eigenvalues
eig_val, eig_vec = np.linalg.eig(cov_mat1)
################################
#plotting eigenvectors
################################
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.plot(samples[:,0], samples[:,1], samples[:,2], 'o', markersize=10, color='green', alpha=0.2)
ax.plot([mean_x], [mean_y], [mean_z], 'o', markersize=10, color='red', alpha=0.5)
for v in eig_vec.T:
a = Arrow3D([mean_x, v[0]], [mean_y, v[1]],
[mean_z, v[2]], mutation_scale=20, lw=3, arrowstyle="-|>", color="r")
ax.add_artist(a)
ax.set_xlabel('variable X')
ax.set_ylabel('variable Y')
ax.set_zlabel('variable Z')
plt.title('3D scatter plot with eigenvectors')
plt.show()
"""
Explanation: <br>
<br>
3D scatter plot with eigenvectors
[back to top]
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from itertools import product, combinations
fig = plt.figure(figsize=(7,7))
ax = fig.gca(projection='3d')
ax.set_aspect("equal")
# Plot Points
# samples within the cube
X_inside = np.array([[0,0,0],[0.2,0.2,0.2],[0.1, -0.1, -0.3]])
X_outside = np.array([[-1.2,0.3,-0.3],[0.8,-0.82,-0.9],[1, 0.6, -0.7],
[0.8,0.7,0.2],[0.7,-0.8,-0.45],[-0.3, 0.6, 0.9],
[0.7,-0.6,-0.8]])
for row in X_inside:
ax.scatter(row[0], row[1], row[2], color="r", s=50, marker='^')
for row in X_outside:
ax.scatter(row[0], row[1], row[2], color="k", s=50)
# Plot Cube
h = [-0.5, 0.5]
for s, e in combinations(np.array(list(product(h,h,h))), 2):
if np.sum(np.abs(s-e)) == h[1]-h[0]:
ax.plot3D(*zip(s,e), color="g")
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
ax.set_zlim(-1.5, 1.5)
plt.show()
"""
Explanation: <br>
<br>
3D cube
[back to top]
End of explanation
"""
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.mlab import bivariate_normal
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
x = np.linspace(-5, 5, 200)
y = x
X,Y = np.meshgrid(x, y)
Z = bivariate_normal(X, Y)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=plt.cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(0, 0.2)
ax.zaxis.set_major_locator(plt.LinearLocator(10))
ax.zaxis.set_major_formatter(plt.FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=7, cmap=plt.cm.coolwarm)
plt.show()
"""
Explanation: <br>
<br>
Multivariate Gaussian distribution with colored surface
[back to top]
End of explanation
"""
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.mlab import bivariate_normal
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
x = np.linspace(-5, 5, 200)
y = x
X,Y = np.meshgrid(x, y)
Z = bivariate_normal(X, Y)
surf = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4, color='g', alpha=0.7)
ax.set_zlim(0, 0.2)
ax.zaxis.set_major_locator(plt.LinearLocator(10))
ax.zaxis.set_major_formatter(plt.FormatStrFormatter('%.02f'))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('p(x)')
plt.title('bivariate Gaussian')
plt.show()
"""
Explanation: <br>
<br>
Multivariate Gaussian distribution as mesh grid
[back to top]
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_sensors_decoding.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import StratifiedKFold
import mne
from mne.datasets import sample
from mne.decoding import TimeDecoding, GeneralizationAcrossTime
data_path = sample.data_path()
plt.close('all')
"""
Explanation: Decoding sensor space data
Decoding, a.k.a MVPA or supervised machine learning applied to MEG
data in sensor space. Here the classifier is applied to every time
point.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True, add_eeg_ref=False)
raw.set_eeg_reference() # set EEG average reference
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True,
reject=dict(grad=4000e-13, eog=150e-6), add_eeg_ref=False)
epochs_list = [epochs[k] for k in event_id]
mne.epochs.equalize_epoch_counts(epochs_list)
data_picks = mne.pick_types(epochs.info, meg=True, exclude='bads')
"""
Explanation: Set parameters
End of explanation
"""
td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)
# Fit
td.fit(epochs)
# Compute accuracy
td.score(epochs)
# Plot scores across time
td.plot(title='Sensor space decoding')
"""
Explanation: Temporal decoding
We'll use the default classifier for a binary classification problem
which is a linear Support Vector Machine (SVM).
End of explanation
"""
# make response vector
y = np.zeros(len(epochs.events), dtype=int)
y[epochs.events[:, 2] == 3] = 1
cv = StratifiedKFold(y=y) # do a stratified cross-validation
# define the GeneralizationAcrossTime object
gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1,
cv=cv, scorer=roc_auc_score)
# fit and score
gat.fit(epochs, y=y)
gat.score(epochs)
# let's visualize now
gat.plot()
gat.plot_diagonal()
"""
Explanation: Generalization Across Time
This runs the analysis used in [1] and further detailed in [2]
Here we'll use a stratified cross-validation scheme.
End of explanation
"""
|
kaushik94/sympy | examples/notebooks/Macaulay_resultant.ipynb | bsd-3-clause | x, y, z = sym.symbols('x, y, z')
a_1_1, a_1_2, a_1_3, a_2_2, a_2_3, a_3_3 = sym.symbols('a_1_1, a_1_2, a_1_3, a_2_2, a_2_3, a_3_3')
b_1_1, b_1_2, b_1_3, b_2_2, b_2_3, b_3_3 = sym.symbols('b_1_1, b_1_2, b_1_3, b_2_2, b_2_3, b_3_3')
c_1, c_2, c_3 = sym.symbols('c_1, c_2, c_3')
variables = [x, y, z]
f_1 = a_1_1 * x ** 2 + a_1_2 * x * y + a_1_3 * x * z + a_2_2 * y ** 2 + a_2_3 * y * z + a_3_3 * z ** 2
f_2 = b_1_1 * x ** 2 + b_1_2 * x * y + b_1_3 * x * z + b_2_2 * y ** 2 + b_2_3 * y * z + b_3_3 * z ** 2
f_3 = c_1 * x + c_2 * y + c_3 * z
polynomials = [f_1, f_2, f_3]
mac = MacaulayResultant(polynomials, variables)
"""
Explanation: Macaulay Resultant
The Macaulay resultant is a multivariate resultant. It is used for calculating the resultant of $n$ polynomials
in $n$ variables. The Macaulay resultant is calculated as the ratio of the determinants of two matrices,
$$R = \frac{\text{det}(A)}{\text{det}(M)}.$$
Matrix $A$
There are a number of steps needed to construct matrix $A$. Let us consider an example from https://dl.acm.org/citation.cfm?id=550525 to
show the construction.
End of explanation
"""
mac.degrees
"""
Explanation: Step 1. Calculate $d_i$ for each $i \in \{1, \ldots, n\}$.
End of explanation
"""
mac.degree_m
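# Quick sanity check (assuming the usual Macaulay degree formula d_M = 1 + sum(d_i - 1)):
1 + sum(d - 1 for d in mac.degrees)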
"""
Explanation: Step 2. Get $d_M$.
End of explanation
"""
mac.get_monomials_set()
mac.monomial_set
mac.monomials_size
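# Another sanity check (assuming the count of monomials of total degree d_M in n variables
# is binomial(d_M + n - 1, n - 1)):
sym.binomial(mac.degree_m + len(variables) - 1, len(variables) - 1)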
"""
Explanation: Step 3. All monomials of degree $d_M$ and size of set.
End of explanation
"""
mac.get_row_coefficients()
"""
Explanation: These are the columns of matrix $A$.
Step 4 Get rows and fill matrix.
End of explanation
"""
matrix = mac.get_matrix()
matrix
"""
Explanation: Each list is multiplied by the polynomials $f_1$, $f_2$ and $f_3$ respectively. Then we fill the matrix
based on the coefficients of the monomials in the columns.
End of explanation
"""
mac.get_submatrix(matrix)
"""
Explanation: Matrix $M$
Columns that are not reduced are kept. The rows which contain one of the $a_i$s are dropped.
The $a_i$s are the coefficients of $x_i^{d_i}$.
End of explanation
"""
x, y, z = sym.symbols('x, y, z')
a_0, a_1, a_2 = sym.symbols('a_0, a_1, a_2')
b_0, b_1, b_2 = sym.symbols('b_0, b_1, b_2')
c_0, c_1, c_2,c_3, c_4 = sym.symbols('c_0, c_1, c_2, c_3, c_4')
f = a_0 * y - a_1 * x + a_2 * z
g = b_1 * x ** 2 + b_0 * y ** 2 - b_2 * z ** 2
h = c_0 * y - c_1 * x ** 3 + c_2 * x ** 2 * z - c_3 * x * z ** 2 + c_4 * z ** 3
polynomials = [f, g, h]
mac = MacaulayResultant(polynomials, variables=[x, y, z])
mac.degrees
mac.degree_m
mac.get_monomials_set()
mac.get_size()
mac.monomial_set
mac.get_row_coefficients()
matrix = mac.get_matrix()
matrix
matrix.shape
mac.get_submatrix(mac.get_matrix())
"""
Explanation: Second example
This is from: http://isc.tamu.edu/resources/preprints/1996/1996-02.pdf
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session02/Day1/ReIntroToMachineLearning.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Re-Introduction to Machine Learning:
Classifying the Iris Dataset with K-Nearest Neighbors
Version 0.1
By AA Miller (Northwestern University, Adler Planetarium)
During the first session of the LSSTC DSFP we had an opportunity to work with unsupervised algorithms while clustering flowers in the famous iris data set. Here we will explore the use of the K-nearest neighbors algorithm to actually classify the flowers in the iris data set, while also re-familiarizing ourselves with scikit-learn.
End of explanation
"""
import seaborn as sns
iris = sns.load_dataset("iris")
iris
"""
Explanation: Problem 1) Visualize the Iris Data Set
Before building the model, we visualize the iris data using Seaborn. As previously covered, seaborn can be really handy when visualizing $2 < N \lesssim 10$ -dimension data sets.
As a reminder, the Iris data set measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Note - for those familiar with pandas seaborn is designed to integrate easily and directly with pandas DataFrame objects. In the example below the Iris data are loaded into a DataFrame. iPython notebooks also display the DataFrame data in a nice readable format.
End of explanation
"""
sns.set_style("darkgrid")
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
"""
Explanation: Problem 1a
Can you identify anything interesting about the Iris data set in Table format?
Solution 1a
Type answer here
The iris data set is (probably) best visualized using a seaborn pair plot, which shows the pair-wise distributions of all the features included in the data set.
Recall - KDEs are (typically) better than histograms, and color, when used properly, conveys a lot of information.
Thus, we set diag_kind = 'kde' and color the data by class, using hue = 'species' in the seaborn pair plot.
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
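# A quick illustrative peek at the Bunch object (the keys and array shapes referenced below):
print(iris.keys())
print(iris.data.shape, iris.target.shape)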
"""
Explanation: Problem 1b
Based on the pair-plot, do you think iris classification will be easy or difficult?
Solution 1b
Type answer here
Moving forward, in the interest of speed and clarity, we will compare our classification results to the correct class labels using 2D plots of sepal length vs. sepal width.
We will also load the data as a scikit-learn Bunch which enables dictionary-like properties, and easy integration with all the scikit-learn algorithms.
Recall that the scikit-learn Bunch consists of several keys, of which we are primarily interested in the data and target information.
End of explanation
"""
plt.scatter( # complete
plt.xlabel('sepal length')
plt.ylabel('sepal width')
"""
Explanation: Problem 1c
Make a scatter plot of sepal length vs. sepal width for the iris data set. Color the points by their respective iris type (i.e. labels).
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier( # complete
preds = # complete
plt.figure()
plt.scatter( # complete
KNNclf = KNeighborsClassifier( # complete
preds = # complete
plt.figure()
plt.scatter( # complete
"""
Explanation: Problem 2) Supervised Machine Learning
Supervised machine learning aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The "supervised" comes from the fact that we are specifying the allowed outputs from the model. Using the training set labels, we can estimate the accuracy of each model we generate. (though there are generally important caveats about generalization, which we will explore in further detail later).
We will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, calculated distances are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model).
Note - you should be worried about how to select the number $k$. We will re-visit this in further detail on Friday.
In scikit-learn the KNeighborsClassifer algorithm is implemented as part of the sklearn.neighbors module.
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier()
See the docs to learn about the default options for KNeighborsClassifer.
Problem 2a
Fit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above).
How do the results compare to the true classifications?
Is there any reason to be suspect of this procedure?
Hint - recall that sklearn models are fit using the .fit(X,y) method, where X is the training data, or feature array, with shape [n_samples, n_features] and y is the targets, or label array, with shape [n_samples].
Hint 2 - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, Xpred, using the same features and order as the training set, as input.
Hint 3 - (this isn't essential, but is worth thinking about) - should the features be re-scaled in any way?
End of explanation
"""
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict(# complete
plt.figure()
plt.scatter( # complete
print("The accuracy of the kNN = 5 model is ~{:.4}".format( # complete
CVpreds50 = cross_val_predict( # complete
print("The accuracy of the kNN = 50 model is ~{:.4}".format( # complete
"""
Explanation: These results are almost identical to the training classifications. However, we have cheated!
We are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error.
The true test of a good model is the generalization error: how accurate are the model predictions on new data?
We can approximate the generalization error (under the assumption that new observations are similar to the training set - an often poor assumption in astronomy...) via cross validation (CV).
In brief, CV provides predictions for training set objects that are withheld from the model construction in order to avoid "double-dipping." The most common forms of CV iterate over all sources in the training set.
Using cross_val_predict, we can obtain CV predictions for each source in the training set.
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict(sklearn.model(), X, y)
where sklearn.model() is the desired model, X is the feature array, and y is the label array.
Hint - if you are running an old (< 0.18) version of scikit-learn you may need to run conda update.
Problem 2b
Produce cross-validation predictions for the iris dataset and a $k$NN model with 5 neighbors.
Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data.
How does this accuracy compare to a $k$NN model with 50 neighbors?
End of explanation
"""
# complete
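# One possible approach (sketch): per-class accuracy computed from the k = 50
# cross-validation predictions (CVpreds50) obtained in Problem 2b.
for class_idx, class_name in enumerate(iris.target_names):
    in_class = iris.target == class_idx
    print("Accuracy for {}: ~{:.4}".format(class_name,
                                           np.mean(CVpreds50[in_class] == iris.target[in_class])))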
"""
Explanation: Wow! The 5-neighbor model only misclassifies 2 of the flowers via CV. The 50-neighbor model misclassifies 17 flowers. While this overall accuracy is still relatively high, it would be useful to understand which flowers are being misclassified.
Problem 2c
Calculate the accuracy for each class in the iris set, as determined via CV for the $k$ = 50 model.
Which class is most accurate? Does this meet your expectations?
End of explanation
"""
from sklearn.metrics import confusion_matrix
cm = confusion_matrix( # complete
print(cm)
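# A possible completion of the call above (sketch): compare the true labels
# to the k = 50 cross-validation predictions from Problem 2b.
cm = confusion_matrix(iris.target, CVpreds50)
print(cm)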
"""
Explanation: The classifier does a much better job classifying setosa and versicolor than it does for virginica. This is what we expected based on our previous visualization of the data set.
Measuring the accuracy for each class is useful, but there is greater utility in determining the full cross-class confusion for the model. We can visualize which sources are being mis-classified via a confusion matrix.
In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
Fortunately, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
Problem 2d
Calculate the confusion matrix for the iris training set and the $k$NN = 50 model.
End of explanation
"""
normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]
normalized_cm
"""
Explanation: The confusion matrix reveals that most of the virginica misclassifications are predicted to be versicolor. The iris data set features 50 members of each class, but problems with class imbalance are more difficult to visualize in this way. Thus, sometimes it's helpful to normalize each value relative to the true number of sources.
Better still, we can visualize the confusion matrix in a readily digestible fashion. First - let's normalize the confusion matrix.
Problem 2e
Calculate the normalized confusion matrix. Be careful, you have to sum along one axis, and then divide along the other.
Anti-hint: This operation is actually straightforward using some array manipulation that we have not covered up to this point. Thus, we have performed the necessary operations for you below. If you have extra time, you should try to develop an alternate way to arrive at the same normalization.
End of explanation
"""
plt.imshow( # complete
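# A possible completion (sketch), loosely following the sklearn confusion matrix example:
plt.imshow(normalized_cm, interpolation='nearest', cmap='Blues')
plt.colorbar()
plt.xticks(range(3), iris.target_names)
plt.yticks(range(3), iris.target_names)
plt.xlabel('Predicted class')
plt.ylabel('True class')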
"""
Explanation: Normalization makes it easier to compare the classes when there is class imbalance.
We can visualize the confusion matrix using imshow() within pyplot. A colorbar and axes labels will be needed.
Problem 2f
Plot the confusion matrix. Be sure to label each axis.
Hint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.
End of explanation
"""
# complete
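# One possible sketch, following the problem's hints: one-vs-all ROC curves built
# from 10-fold cross-validation class probabilities of a k = 50 kNN model.
from sklearn.metrics import roc_curve
probs = cross_val_predict(KNeighborsClassifier(n_neighbors=50), iris.data, iris.target,
                          cv=10, method='predict_proba')
plt.figure()
for class_idx, class_name in enumerate(iris.target_names):
    fpr, tpr, _ = roc_curve(iris.target == class_idx, probs[:, class_idx])
    plt.plot(fpr, tpr, label=class_name)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()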
"""
Explanation: Challenge Problem) ROC Curves
Earlier today we learned about receiver operating characteristic (ROC) curves as a means of measuring model performance. In brief, ROC curves plot the true positive rate (TPR) as a function of the false positive rate (FPR). Typically, the model that gets closest to TPR = 1 and FPR = 0 is considered best.
Measuring TPR as a function of FPR requires classification probabilities, which are not typically available for $kNN$ models, but this is possible using sklearn (in brief, the probabilities are just the relative fraction of each class within the $k$ neighbors).
Challenge Problem
Plot the ROC curve for each class in the iris data set using a $k = 50$ $kNN$ model and 10-fold cross validation predictions. Be sure to clearly label each of the curves.
Hint 1 - ROC curves only work for 2 class problems. You need to create three 1 vs. all models in this case.
Hint 2 - in cross_val_predict you'll want to set method = 'predict_proba' in order to return class probabilities.
Hint 3 - (sklearn to the rescue again!) sklearn.metrics.roc_curve quickly calculates the FPR and TPR when given class labels and prediction probabilities.
End of explanation
"""
|
leriomaggio/deep-learning-keras-tensorflow | 6. AutoEncoders and Embeddings/6.1. AutoEncoders and Embeddings.ipynb | mit | from keras.layers import Input, Dense
from keras.models import Model
from keras.datasets import mnist
import numpy as np
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
#note: x_train, x_train :)
autoencoder.fit(x_train, x_train,
epochs=50,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
"""
Explanation: Unsupervised learning
AutoEncoders
An autoencoder is an artificial neural network used for learning efficient codings.
The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
<img src="../imgs/autoencoder.png" width="25%">
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
Reference
Based on https://blog.keras.io/building-autoencoders-in-keras.html
Introducing Keras Functional API
The Keras functional API is the way to go for defining complex models, such as multi-output models, directed acyclic graphs, or models with shared layers.
The whole Functional API relies on the fact that each keras.Layer object is a callable object!
See 8.2 Multi-Modal Networks for further details.
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Testing the Autoencoder
End of explanation
"""
encoded_imgs = np.random.rand(10,32)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# generation
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Sample generation with Autoencoder
End of explanation
"""
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
conv_autoencoder = Model(input_img, decoded)
conv_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from keras import backend as K
if K.image_data_format() == 'channels_last':
shape_ord = (28, 28, 1)
else:
shape_ord = (1, 28, 28)
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, ((x_train.shape[0],) + shape_ord))
x_test = np.reshape(x_test, ((x_test.shape[0],) + shape_ord))
x_train.shape
from keras.callbacks import TensorBoard
batch_size=128
steps_per_epoch = np.int(np.floor(x_train.shape[0] / batch_size))
conv_autoencoder.fit(x_train, x_train, epochs=50, batch_size=128,
shuffle=True, validation_data=(x_test, x_test),
callbacks=[TensorBoard(log_dir='./tf_autoencoder_logs')])
decoded_imgs = conv_autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Convolutional AutoEncoder
Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders.
In practical settings, autoencoders applied to images are always convolutional autoencoders --they simply perform much better.
The encoder will consist in a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist in a stack of Conv2D and UpSampling2D layers.
End of explanation
"""
conv_encoder = Model(input_img, encoded)
encoded_imgs = conv_encoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: We could also have a look at the 128-dimensional encoded middle representation
End of explanation
"""
# Use the encoder to pretrain a classifier
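# A minimal sketch (an assumption, not part of the original notebook): reuse the
# trained dense `encoder` as a fixed feature extractor and train a small softmax
# classifier on top of its 32-dimensional codes.
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import to_categorical

(x_tr, y_tr), (x_te, y_te) = mnist.load_data()
x_tr = x_tr.astype('float32').reshape((len(x_tr), 784)) / 255.
x_te = x_te.astype('float32').reshape((len(x_te), 784)) / 255.

encoder.trainable = False                      # keep the pretrained weights fixed
clf_input = Input(shape=(784,))
features = encoder(clf_input)                  # the encoder is itself a callable model
clf_output = Dense(10, activation='softmax')(features)
classifier = Model(clf_input, clf_output)
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
classifier.fit(x_tr, to_categorical(y_tr), epochs=5, batch_size=128,
               validation_data=(x_te, to_categorical(y_te)))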
"""
Explanation: Pretraining encoders
One of the powerful tools of auto-encoders is using the encoder to generate meaningful representations from the feature vectors.
End of explanation
"""
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
"""
Explanation: Application to Image Denoising
Let's put our convolutional autoencoder to work on an image denoising problem. It's simple: we will train the autoencoder to map noisy digits images to clean digits images.
Here's how we will generate synthetic noisy digits: we just apply a Gaussian noise matrix and clip the images between 0 and 1.
End of explanation
"""
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(x_test_noisy[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Here's what the noisy digits look like:
End of explanation
"""
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras.callbacks import TensorBoard
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (7, 7, 32)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
"""
Explanation: Question
If you squint you can still recognize them, but barely.
Can our autoencoder learn to recover the original digits? Let's find out.
Compared to the previous convolutional autoencoder, in order to improve the quality of the reconstruction, we'll use a slightly different model with more filters per layer:
End of explanation
"""
autoencoder.fit(x_train_noisy, x_train,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test_noisy, x_test),
callbacks=[TensorBoard(log_dir='/tmp/autoencoder_denoise',
histogram_freq=0, write_graph=False)])
"""
Explanation: Let's train the AutoEncoder for 100 epochs
End of explanation
"""
decoded_imgs = autoencoder.predict(x_test_noisy)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Now Let's Take a look....
End of explanation
"""
batch_size = 100
original_dim = 784
latent_dim = 2
intermediate_dim = 256
epochs = 50
epsilon_std = 1.0
x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)
"""
Explanation: Variational AutoEncoder
(Reference https://blog.keras.io/building-autoencoders-in-keras.html)
Variational autoencoders are a slightly more modern and interesting take on autoencoding.
What is a variational autoencoder ?
It's a type of autoencoder with added constraints on the encoded representations being learned.
More precisely, it is an autoencoder that learns a latent variable model for its input data.
So instead of letting your neural network learn an arbitrary function, you are learning the parameters of a probability distribution modeling your data.
If you sample points from this distribution, you can generate new input data samples:
a VAE is a "generative model".
How does a variational autoencoder work?
First, an encoder network turns the input samples $x$ into two parameters in a latent space, which we will note $z_{\mu}$ and $z_{log_{\sigma}}$.
Then, we randomly sample similar points $z$ from the latent normal distribution that is assumed to generate the data, via $z = z_{\mu} + \exp(z_{log_{\sigma}}) * \epsilon$, where $\epsilon$ is a random normal tensor.
Finally, a decoder network maps these latent space points back to the original input data.
The parameters of the model are trained via two loss functions:
a reconstruction loss forcing the decoded samples to match the initial inputs (just like in our previous autoencoders);
and the KL divergence between the learned latent distribution and the prior distribution, acting as a regularization term.
You could actually get rid of this latter term entirely, although it does help in learning well-formed latent spaces and reducing overfitting to the training data.
Encoder Network
End of explanation
"""
from keras.layers.core import Lambda
from keras import backend as K
def sampling(args):
z_mean, z_log_sigma = args
epsilon = K.random_normal(shape=(batch_size, latent_dim),
mean=0., stddev=epsilon_std)
return z_mean + K.exp(z_log_sigma) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_sigma])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_sigma])
"""
Explanation: We can use these parameters to sample new similar points from the latent space:
End of explanation
"""
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
"""
Explanation: Decoder Network
Finally, we can map these sampled latent points back to reconstructed inputs:
End of explanation
"""
# end-to-end autoencoder
vae = Model(x, x_decoded_mean)
# encoder, from inputs to latent space
encoder = Model(x, z_mean)
# generator, from latent space to reconstructed inputs
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)
"""
Explanation: What we've done so far allows us to instantiate 3 models:
an end-to-end autoencoder mapping inputs to reconstructions
an encoder mapping inputs to the latent space
a generator that can take points on the latent space and will output the corresponding reconstructed samples.
End of explanation
"""
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(vae).create(prog='dot', format='svg'))
## Exercise: Let's Do the Same for `encoder` and `generator` Model(s)
"""
Explanation: Let's Visualise the VAE Model
End of explanation
"""
from keras.objectives import binary_crossentropy
def vae_loss(x, x_decoded_mean):
xent_loss = binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)
return xent_loss + kl_loss
vae.compile(optimizer='rmsprop', loss=vae_loss)
"""
Explanation: VAE on MNIST
We train the model using the end-to-end model, with a custom loss function: the sum of a reconstruction term, and the KL divergence regularization term.
End of explanation
"""
from keras.datasets import mnist
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
vae.fit(x_train, x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test))
"""
Explanation: Training on MNIST Digits
End of explanation
"""
x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)
plt.colorbar()
plt.show()
"""
Explanation: Because our latent space is two-dimensional, there are a few cool visualizations that can be done at this point.
One is to look at the neighborhoods of different classes on the latent 2D plane:
End of explanation
"""
# display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# we will sample n points within [-15, 15] standard deviations
grid_x = np.linspace(-15, 15, n)
grid_y = np.linspace(-15, 15, n)
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z_sample = np.array([[xi, yi]]) * epsilon_std
x_decoded = generator.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()
"""
Explanation: Each of these colored clusters is a type of digit. Close clusters are digits that are structurally similar (i.e. digits that share information in the latent space).
Because the VAE is a generative model, we can also use it to generate new digits! Here we will scan the latent plane, sampling latent points at regular intervals, and generating the corresponding digit for each of these points. This gives us a visualization of the latent manifold that "generates" the MNIST digits.
End of explanation
"""
|
dietmarw/EK5312_ElectricalMachines | Chapman/Ch2-Problem_2-02.ipynb | unlicense | %pylab notebook
%precision 4
"""
Explanation: Excercises Electric Machinery Fundamentals
Chapter 2
Problem 2-2
End of explanation
"""
Zline = 38.2 + 140.0j # [Ohm]
Zeq = 0.10 + 0.4j # [Ohm]
V_high = 14e3 # [V]
V_low = 2.4e3 # [V]
Pout = 90e3 # [W] load
PF = 0.8 # lagging
VS = 2.3e3 # [V] secondary voltage
"""
Explanation: Description
A single-phase power system is shown in Figure P2-1.
<img src="figs/FigC_P2-1.jpg" width="70%">
The power source feeds a 100-kVA 14/2.4-kV transformer through a feeder impedance of $38.2 + j140 \Omega$. The transformer’s equivalent series impedance
referred to its low-voltage side is $0.10 + j0.4 \Omega$ . The load on the transformer is 90 kW at 0.8 PF lagging at $V_\text{load} = 2300\,V$.
End of explanation
"""
a = V_high / V_low
a
"""
Explanation: (a)
What is the voltage at the power source of the system?
(b)
What is the voltage regulation of the transformer?
(c)
How efficient is the overall power system?
SOLUTION
To solve this problem, we will refer the circuit to the secondary (low-voltage) side. The turns ratio of this transformer $a$ is:
End of explanation
"""
Z_line = (1/a)**2 * Zline
print('Z_line = {:.2f} Ω'.format(Z_line))
"""
Explanation: The feeder’s impedance referred to the secondary side is:
$$Z'_\text{line} = \left(\frac{1}{a}\right)^2 Z_\text{line}$$
End of explanation
"""
Is = Pout / (VS*PF)
print('Is = {:.2f} A'.format(Is))
"""
Explanation: The secondary current $I_S$ is given by:
$$I_S = \frac{P_\text{OUT}}{V_S \cdot PF}$$
End of explanation
"""
IS_angle = - arccos(PF) # negative because lagging PF
print('θ = {:.2f}°'.format(degrees(IS_angle)))
"""
Explanation: The power factor is 0.80 lagging, so the impedance angle $\theta = \arccos(PF)$ is:
End of explanation
"""
IS = Is * (cos(IS_angle) + sin(IS_angle)*1j)
print('IS = {:.2f} A ∠{:.2f}°'.format(abs(IS), degrees(IS_angle)))
"""
Explanation: The phasor current is:
End of explanation
"""
V_source = VS + IS*Z_line + IS*Zeq
V_source_angle = arctan(V_source.imag/V_source.real) # angle of V_source [rad]
print('V_source = {:.1f} V ∠{:.1f}°'.format(
abs(V_source), degrees(V_source_angle)))
"""
Explanation: (a)
The voltage at the power source of this system (referred to the secondary side) is:
$$\vec{V}'_\text{source} = \vec{V}_S + \vec{I}_S Z'_\text{line} + \vec{I}_S Z_\text{EQ}$$
End of explanation
"""
Vsource = V_source * a # [V]
Vsource_angle = arctan(Vsource.imag/Vsource.real) # angle of Vsource [rad]
print('Vsource = {:.1f} kV ∠{:.1f}°'.format(
abs(Vsource)/1000, # display in kV
degrees(Vsource_angle)))
"""
Explanation: Therefore, the voltage at the power source is:
$$\vec{V}_\text{source} = \vec{V}'_\text{source} \cdot a$$
End of explanation
"""
VP = VS + IS*Zeq
VP_angle = arctan(VP.imag/VP.real) # angle of VP [rad]
print('VP = {:.1f} V ∠{:.1f}°'.format(abs(VP), degrees(VP_angle)))
"""
Explanation: (b)
To find the voltage regulation of the transformer, we must find the voltage at the primary side of the transformer (referred to the secondary side) under these conditions:
$$\vec{V}_P' = \vec{V}_S + \vec{I}_S Z_\text{EQ}$$
End of explanation
"""
VR = (abs(VP)-VS) / VS * 100 # [%]
print('VR = {:.2f} %'.format(VR))
"""
Explanation: There is a voltage drop of 15 V under these load conditions.
Therefore the voltage regulation of the transformer is:
$$VR = \frac{V_P' - V_S}{V_S} \cdot 100\%$$
End of explanation
"""
R = Z_line.real + Zeq.real
Pin = Pout + abs(IS)**2 * R # [W]
print('Pin = {:.2f} kW'.format(Pin/1000)) # [kW]
"""
Explanation: (c)
The overall efficiency of the power system will be the ratio of the output power to the input power. The output power supplied to the load is $P_\text{OUT} = 90\,kW$. The input power supplied by the source is:
$$P_\text{IN} = P_\text{OUT} + P_\text{LOSS} = P_\text{OUT} + I^2R$$
End of explanation
"""
eta = Pout/Pin * 100 # [%]
print('η = {:.1f} %'.format(eta))
"""
Explanation: Therefore, the efficiency of the power system is:
End of explanation
"""
|
rafburzy/Python_EE | Capacitors/Discharging_capacitors.ipynb | bsd-3-clause | # Definitions of parameters of the circuit
# Capacitance of generator [F]
C = 1e-6
# Parallel resistance (discharging the capacitor in the generator forming the tail of the impulse) [Ohm]
R1 = 4
# Series resistance (forming the head) [Ohm]
R2 = 150
# Inductance of the loop [H]
L = 1e-3
# Capacitance of the test object [F]
Co = 1e-6
# defining the time of the analysis
import numpy as np
t = np.linspace(0,6e-4, 1000)
"""
Explanation: Discharging capacitors
End of explanation
"""
# Case 1 two capacitors and the resistor
def dx1(y,t):
uco = y[0]
duco = 1/(R2*Co)*(Uo - uco - Co/C*uco)
return [duco]
# initial conditions
Uo = 200
y0 = [0.0]
# time constant
Cz = C*Co/(C+Co)
R2*Cz
from scipy.integrate import odeint, ode, romb, cumtrapz
X1 = odeint(dx1, y0, t)
uco = X1[:,0]
# the current and the other capacitor voltage
i = 1/(R2)*(Uo - uco - Co/C*uco)
uc = Uo - Co/C*uco
# current starting value
Uo/R2
# discharge voltage
ud = C*Uo/(C+Co)
Uo/R2, ud
from bokeh.plotting import figure, output_notebook, show
from bokeh.layouts import column
from bokeh.palettes import RdYlGn4
output_notebook()
p = figure(plot_width=800, plot_height=400)
p.line(t, uco, color="firebrick", legend = "cap charging")
p.line(t, uc, legend = "cap discharging")
p.xaxis.axis_label = "time [s]"
p.xaxis.axis_label_text_font_size = "10pt"
p.yaxis.axis_label = "voltage [V]"
p.yaxis.axis_label_text_font_size = "10pt"
p2 = figure(plot_width=800, plot_height=400)
p2.line(t, i, color='#1a9641')
p2.xaxis.axis_label = "time [s]"
p2.xaxis.axis_label_text_font_size = "10pt"
p2.yaxis.axis_label = "current [A]"
p2.yaxis.axis_label_text_font_size = "10pt"
show(column(p, p2))
"""
Explanation: Case 1: Two capacitors and the resistor
Theoretical background and derivation of the state equations
<img src="two_cap.png">
End of explanation
"""
# Adjustment of parameters for case 2
L = 1e-2 # interesting values for L: 1e-3 (very little influence); 5e-3 (small oscillation); 1e-2 bigger oscillation
t2 = np.linspace(0,10e-4,1000)
# Case 2: two capacitors, resistor and inductor
def dx2(y,t):
i, uco = y[0], y[1]
di = 1/L*(Uo - Co/C*uco - i*R2 - uco)
duco = 1/Co*i
return [di, duco]
# initial conditions
Uo = 200
y02 = [0.0, 0.0]
X2 = odeint(dx2, y02, t2)
i2 = X2[:,0]
uco2 = X2[:,1]
uc2 = Uo - Co/C*uco2
p3 = figure(plot_width=800, plot_height=400)
p3.line(t2, uco2, color="firebrick", legend = "cap charging")
p3.line(t2, uc2, legend = "cap discharging")
p3.xaxis.axis_label = "time [s]"
p3.xaxis.axis_label_text_font_size = "10pt"
p3.yaxis.axis_label = "voltage [V]"
p3.yaxis.axis_label_text_font_size = "10pt"
p4 = figure(plot_width=800, plot_height=400)
p4.line(t2, i2, color='#1a9641')
p4.xaxis.axis_label = "time [s]"
p4.xaxis.axis_label_text_font_size = "10pt"
p4.yaxis.axis_label = "current [A]"
p4.yaxis.axis_label_text_font_size = "10pt"
show(column(p3, p4))
"""
Explanation: Case 2: Two capacitors, resistor and an inductor
Theoretical background and state equations derivation:
<img src="two_cap_ind.png">
End of explanation
"""
# Adjustment of parameters for case 3
L = 1e-3 # interesting values for L: 1e-3 (very little influence); 5e-3 (small oscillation); 1e-2 bigger oscillation
t3 = np.linspace(0,10e-4,1000)
# Case 3: circuit for the impulse withstand tester
def dx(y,t):
#state vector variables
i2, uco, uc = y[0], y[1], y[2]
di2 = 1/L*(uc-i2*R2-uco)
duco = 1/Co*i2
duc = -1/C*(uc/R1+i2)
return [di2, duco, duc]
# initial conditions
# capacitor voltage [V]
Uo = 200
# vector of the initial conditions
y0 = [0.0, 0.0, Uo]
#from scipy.integrate import odeint, ode, romb, cumtrapz
X = odeint(dx, y0, t3)
i2 = X[:,0]
uco = X[:,1]
uc = X[:,2]
#output_notebook()
p5 = figure(plot_width=800, plot_height=400)
p5.line(t3, uco, color="firebrick", legend ="Object")
p5.xaxis.axis_label = "time [s]"
p5.xaxis.axis_label_text_font_size = "10pt"
p5.yaxis.axis_label = "voltage [V]"
p5.yaxis.axis_label_text_font_size = "10pt"
p7 = figure(plot_width=800, plot_height=400)
p7.line(t3, uc, legend = "Main capacitor")
p7.xaxis.axis_label = "time [s]"
p7.xaxis.axis_label_text_font_size = "10pt"
p7.yaxis.axis_label = "voltage [V]"
p7.yaxis.axis_label_text_font_size = "10pt"
p6 = figure(plot_width=800, plot_height=400)
p6.line(t3, i2, color='#1a9641')
p6.xaxis.axis_label = "time [s]"
p6.xaxis.axis_label_text_font_size = "10pt"
p6.yaxis.axis_label = "current [A]"
p6.yaxis.axis_label_text_font_size = "10pt"
show(column(p5, p7, p6))
"""
Explanation: Case 3: Circuit of the impulse withstand tester
Theoretical background and the state equations derivation
<img src="imp_withstand.png">
End of explanation
"""
# Capacitor discharging to a resistor
def dk(y,t):
u = y[0]
du = -1/(R1*C)*u
return [du]
y03 = [200.0]
X5 = odeint(dk, y03, t3)
u = X5[:,0]
p7 = figure(plot_width=800, plot_height=400, x_range = (0,4e-5))
p7.line(t3, u, color="firebrick")
p7.xaxis.axis_label = "time [s]"
p7.xaxis.axis_label_text_font_size = "10pt"
p7.yaxis.axis_label = "voltage [V]"
p7.yaxis.axis_label_text_font_size = "10pt"
show(p7)
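# Quick sanity check (sketch): for this RC circuit the closed-form solution is
# u(t) = Uo * exp(-t / (R1*C)), so the numerical result should match it closely.
u_analytic = y03[0] * np.exp(-t3 / (R1 * C))
print('max |numerical - analytic| =', np.max(np.abs(u - u_analytic)))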
"""
Explanation: Case 4: Capacitor discharge to a resistor
<img src="cap_discharge.png">
End of explanation
"""
|
cassiogreco/udacity-data-analyst-nanodegree | P1/P1_Cassio.ipynb | mit | import pandas as pd
import math
%pylab inline
import matplotlib.pyplot as plt
CONGRUENT = 'Congruent'
INCONGRUENT = 'Incongruent'
TCRITICAL = 2.807 # two-tailed difference with 99% Confidence and Degree of Freedom of 23
path = r'~/udacity-data-analyst-nanodegree/P1/stroopdata.csv'
initialData = pd.read_csv(path)
dataDifference = [initialData[CONGRUENT][i] - initialData[INCONGRUENT][i] for i in range(0, len(initialData[CONGRUENT]))]
congruentMean = mean(initialData[CONGRUENT])
incongruentMean = mean(initialData[INCONGRUENT])
differenceMean = mean(dataDifference)
def mean(data):
return sum(data) / len(data)
def valuesMinusMean(data):
meanOfData = mean(data)
return [value - meanOfData for value in data]
def valuesToPower(data, power):
return [value ** power for value in data]
def variance(data):
return sum(data) / (len(data) - 1)
def standardDeviation(variance):
return math.sqrt(variance)
print('Mean of Congruent values:', congruentMean)
print('Mean of Incongruent values:', incongruentMean)
print('Mean of Difference values:', differenceMean)
print()
print('Range of Congruent values:', max(initialData[CONGRUENT]) - min(initialData[CONGRUENT]))
print('Range of Incongruent values:', max(initialData[INCONGRUENT]) - min(initialData[INCONGRUENT]))
print('Range of Difference values:', max(dataDifference) - min(dataDifference))
print()
print('Standard Deviation of Congruent values:', standardDeviation(variance(valuesToPower(valuesMinusMean(initialData[CONGRUENT]), 2))))
print('Standard Deviation of Incongruent values:', standardDeviation(variance(valuesToPower(valuesMinusMean(initialData[INCONGRUENT]), 2))))
print('Standard Deviation of Difference values:', standardDeviation(variance(valuesToPower(valuesMinusMean(dataDifference), 2))))
"""
Explanation: 1. What is our independent variable? What is our dependent variable?
The independent and dependent variables of the experiment are:
Independent
Word/Color congruency
Dependent
Time to name ink
2. What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.
We have as starting data two samples gathered from the same test (time taken to say the name of the color a given word is printed in) applied in different conditions: one for Congruent word/colors (the word and the ink color match, e.g. the word "blue" printed in blue) and one for Incongruent word/colors (the word names a different color than the ink it is printed in, e.g. the word "blue" printed in red).
From the sampled data, we want to infer whether or not the time taken to say a Congruent word/color is less than the time taken to say an Incongruent word/color.
Having Con be the symbol of the Congruent words and Incon be the symbol of the Incongruent words, and Diff be the symbol of the difference between Con and Incon (Con - Incon), we have:
H0 (HNULL): muCon = muIncon <=> muDiff = 0
Ha (HALTERNATIVE): muCon != muIncon <=> muDiff != 0
HNULL hypothesis: The population mean time it takes to say the correct ink color in the Congruent condition is equal to the population mean time it takes to say the correct ink color in the Incongruent condition, based on the sample means.
HALTERNATIVE hypothesis: The population mean time it takes to say the correct ink color in the Congruent is different than the population mean time it takes to say the correct ink color in the Incongruent condition, based on the sample means.
I will be performing a two-tailed Dependent T-Test because:
- The sample size is smaller than 30
- The standard deviation of the entire population is unknown
- I am measuring results from the same test applied under two different conditions to the same subject group.
I will evaluate the results based on a confidence level of 99% (T-Critical value of 2.807, for 23 degrees of freedom).
I expect to reject the HNULL hypothesis that states that the mean time it takes to say the name of the ink colors in the Congruent group will be equal to the mean time it takes to say the name of the ink colors in the Incongruent group
3. Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability.
End of explanation
"""
plt.hist(
x=[initialData[CONGRUENT], initialData[INCONGRUENT]],
normed=False,
range=(min(initialData[CONGRUENT]), max(initialData[INCONGRUENT])),
bins=10,
label='Time to name'
)
plt.hist(
x=initialData[CONGRUENT],
normed=False,
range=(min(initialData[CONGRUENT]), max(initialData[CONGRUENT])),
bins=10,
label='Time to name'
)
plt.hist(
x=initialData[INCONGRUENT],
normed=False,
range=(min(initialData[INCONGRUENT]), max(initialData[INCONGRUENT])),
bins=10,
label='Time to name',
color='Green'
)
plt.hist(
x=dataDifference,
normed=False,
range=(min(dataDifference), max(dataDifference)),
bins=10,
label='Time to name',
color='Red'
)
"""
Explanation: 4. Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
End of explanation
"""
degreesOfFreedom = len(initialData[CONGRUENT]) - 1
def standardError(standardDeviation, sampleSize):
return standardDeviation / math.sqrt(sampleSize)
def getTValue(mean, se):
return mean / se
se = standardError(standardDeviation(variance(valuesToPower(valuesMinusMean(dataDifference), 2))), len(dataDifference))
tValue = getTValue(differenceMean, se)
def marginOfError(t, standardError):
return t * standardError
def getConfidenceInterval(mean, t, standardError):
return (mean - marginOfError(t, standardError), mean + marginOfError(t, standardError))
print('Degrees of Freedom:', degreesOfFreedom)
print('Standard Error:', se)
print('T Value:', tValue)
print('T Critical Regions: Less than', -TCRITICAL, 'and Greater than', TCRITICAL)
print('Is the T Value inside of the critical region?', tValue >= TCRITICAL or tValue <= -TCRITICAL)
print('Is p < 0.005?', tValue >= TCRITICAL or tValue <= -TCRITICAL)
print('Confidence Interval:', getConfidenceInterval(differenceMean, TCRITICAL, se))
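# Cross-check (sketch, not part of the original analysis): scipy's paired t-test
# should reproduce the t statistic computed above and give the two-tailed p-value.
from scipy import stats
t_scipy, p_scipy = stats.ttest_rel(initialData[CONGRUENT], initialData[INCONGRUENT])
print('scipy paired t-test: t = {:.4}, two-tailed p = {:.4}'.format(t_scipy, p_scipy))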
"""
Explanation: From analyzing the histograms of both the Congruent and Incongruent datasets we can visually see that the Incongruent dataset contains a greater number of higher time-to-name values than the Congruent dataset.
This is evident from the previously calculated mean values of both datasets (14.051125 and 22.0159166667 for the Congruent and Incongruent datasets, respectively).
5. Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/sandbox-1/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
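# Example (illustrative only, not an actual SANDBOX-2 value):
# DOC.set_value(True)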
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
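# Example (illustrative only, not an actual SANDBOX-2 value):
# DOC.set_value("N512L180")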
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
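# Example (illustrative only, not actual SANDBOX-2 values); for 0.N properties it is
# assumed that each selected choice is recorded with its own call:
# DOC.set_value("HOx")
# DOC.set_value("NOy")
# DOC.set_value("Ox")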
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
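# Example (illustrative only, not an actual SANDBOX-2 value):
# DOC.set_value("Offline (with clouds)")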
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
GraysonRicketts/collegeScorecard | notebooks/.ipynb_checkpoints/Initial Exploration-checkpoint.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import sqlite3
import pandas as pd
import seaborn as sns
sns.set_style("white")
"""
Explanation: Initial Exploration
Goals
Load dataset into sqlite server
Make basic queries against database
Understand basic structure and fields of dataset
Start exploring different aspects of the dataset and determine possible places of interest to explore in further detail
Imports
End of explanation
"""
ls -l ../data/output | grep -v "M*.csv"
conn = sqlite3.connect('../data/output/database.sqlite')
c = conn.cursor()
def execute(sql):
'''
Executes a SQL command on the 'c' cursor and returns the results
'''
c.execute(sql)
return c.fetchall()
"""
Explanation: Load Dataset
I want to start an SQLite server, load the dataset into it, and save that functionality into a method to be run in the future.
End of explanation
"""
def printByYear(data):
'''
Given a list of tuples with (year, data), prints the data next to corresponding year
'''
for datum in data:
print "{0}: {1}".format(datum[0], datum[1])
"""
Explanation: Apparently, SQLite is a serverless database (pretty cool, didn't know), so all I need to do is give it a file path and then I can start making queries. I created an execute method to save myself a couple of keystrokes and make it easier if I change the cursor.
Helper methods
End of explanation
"""
# Print all tables in the database
tables = execute("SELECT Name FROM sqlite_master WHERE type='table'")
for table in tables:
print table
"""
Explanation: Basic queries and exploration
End of explanation
"""
# Print the number of rows in the database
rowCount = execute("SELECT Count(id) RowCount from Scorecard")
rowCount = rowCount[0][0]
print "Row count:", rowCount
# Number of intstiutions in dataset per year
rowCountByYear = execute("""SELECT Year, Count(id)
FROM Scorecard
GROUP BY Year""")
printByYear(rowCountByYear)
# Print the number of columns
fields = execute("PRAGMA table_info(Scorecard)")
print len(fields)
"""
Explanation: There is one table and it is called Scorecard.
End of explanation
"""
# Get locations of the universities
coordinates = execute("""SELECT Latitude, Longitude
FROM Scorecard
WHERE Latitude IS NOT NULL
AND Year=2013
AND main='Main campus'""")
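# Note: rowCountByYear is ordered by year, so index 14 is assumed to correspond to the 2013 row used below.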
print "Percent with location data (2013): {0: .2f}%".format(((len(coordinates)*1.0) / rowCountByYear[14][1]) * 100)
# Plot locations of institutions
df = pd.DataFrame()
def checkCordinates(x, y):
if x >= -128.19 and x <= -65 and y >= 24.19 and y <= 49.62:
return True
return False
df['x'] = [row[1] for row in coordinates if checkCordinates(row[1], row[0])]
df['y'] = [row[0] for row in coordinates if checkCordinates(row[1], row[0])]
locations = sns.regplot('x', 'y',
data=df,
fit_reg=False)
locations.set(title="Locations of Institutions", xticks=[], yticks=[], xlabel="", ylabel="")
sns.despine(left=True, bottom=True)
# Number of institutions that are main campuses by year
mainCampuses = execute("""SELECT Year, Count(id) mainCount
FROM Scorecard
WHERE main='Main campus'
GROUP BY Year""")
print "Number of total main campuses: {0}".format(sum([count[1] for count in mainCampuses]))
printByYear(mainCampuses)
print "\nMain campus percentages"
for i, count in enumerate(mainCampuses):
print "{0}: {1: .2f}%".format(count[0], ((count[1]*1.0)/rowCountByYear[i][1])*100)
"""
Explanation: There are too many columns to do an analysis of the data composition from the above query. In order to understand what is included in the dataset I am looking through the FullDataDocumentation.pdf, which covers the different categories of data and summarizes what is included in each category. Below is a table of things I found of interest.
All the fields in the database are in CollegeScorecardDataDictionary-09-12-2015.pdf along with the field name.
<table>
<tr>
<th>Category</th>
<th>Field Type</th>
<th>Details</th>
</tr>
<!-- SCHOOL !-->
<tr>
<td>School</td>
</tr>
<tr>
<td></td>
<td>Name</td>
<td>Name of the school</td>
</tr>
<tr>
<td></td>
<td>Location</td>
<td>Longitude, latitude, city, etc.</td>
</tr>
<tr>
<td></td>
<td>Main campus or branch</td>
<td>1 for main, 0 for branch</td>
</tr>
<tr>
<td></td>
<td>Type</td>
<td>Public, private for-profit, private non-profit</td>
</tr>
<tr>
<td></td>
<td>Revenue</td>
<td>Net tutition, instructional expenses, average faculty salary</td>
</tr>
<tr>
<td></td>
<td>Currently operating</td>
<td>1 for YES, 0 for NO</td>
</tr>
<!-- Academics !-->
<tr>
<td>Academics</td>
</tr>
<tr>
<td></td>
<td>Programs offered</td>
<td>Field of study ids</td>
</tr>
<!-- Admission !-->
<tr>
<td>Admission</td>
</tr>
<tr>
<td></td>
<td>Admission rate for <b>Undergraduates</b></td>
<td>Rates for each branch or all branches</td>
</tr>
<tr>
<td></td>
<td>SAT and ACT scores</b></td>
<td></td>
</tr>
<!-- Costs !-->
<tr>
<td>Costs</td>
</tr>
<tr>
<td></td>
<td><b>LOTS</b></td>
<td>Look at page 6-7 of data documentation</td>
</tr>
<!-- Student Body !-->
<tr>
<td>Student Body</td>
</tr>
<tr>
<td></td>
<td>Number of degree seekers</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Race statistics</td>
<td>Self-reported on FAFSA</td>
</tr>
<tr>
<td></td>
<td>Undegrads by family income</td>
<td>Percentages broken into categories based on expected family contribution on FAFSA</td>
</tr>
<tr>
<td></td>
<td>Retention rate</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Student body age</td>
<td>Percentage of students who are age 25-64</td>
</tr>
<tr>
<td></td>
<td>Parent education</td>
<td>Percentages of students with parents that have some level of education. Collected from FAFSA.</td>
</tr>
<!-- Financial Aid !-->
<tr>
<td>Financial Aid</td>
</tr>
<tr>
<td></td>
<td>Cumulative median debt</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Percent receiving Pell Grant</td>
<td>Grant from government that does not need to be repaid</td>
</tr>
<tr>
<td></td>
<td>Percent receiving federal loans</td>
<td></td>
</tr>
<!-- Completion !-->
<tr>
<td>Completion</td>
</tr>
<tr>
<td></td>
<td>Completion rate</td>
<td>Percent of students graduating within 150% and 200% of the expected time to graduate</td>
</tr>
<!-- Title IV Students !-->
<tr>
<td>Title IV Students</td>
</tr>
<!-- Earnings !-->
<tr>
<td>Earnings</td>
</tr>
<tr>
<td></td>
<td>Mean and median income</td>
<td>Available 6 and 10 years after graduation. Data comes from W2 tax forms</td>
</tr>
<tr>
<td></td>
<td>Threshold earnings</td>
    <td>Percent of students who are earning more than people age 25-34 with only a college degree. Measure of whether or not the degree was able to financially improve the student's outcome compared to not going to college.</td>
</tr>
<!-- Repayment !-->
<tr>
<td>Repayment</td>
</tr>
</table>
End of explanation
"""
|
cuthbertLab/bali | documentation/Using bali module.ipynb | bsd-3-clause | import bali
"""
Explanation: Here is how you load the "bali" module
End of explanation
"""
fileReader = bali.FileReader()
fileReader.taught
fileReader.transcribed
"""
Explanation: Now we make a FileReader
End of explanation
"""
fp = bali.FileParser()
fp.taught
"""
Explanation: More useful Object
The FileParser is more useful than the file reader.
End of explanation
"""
firstPattern = fp.taught[0]
print(firstPattern)
firstPattern.title
firstPattern.drumPattern
firstPattern.gongPattern
firstPattern.beatLength()
firstPattern.strokes
for taughtPattern in fp.taught:
if taughtPattern.beatLength() == 4:
print(taughtPattern.title, " ::: ", taughtPattern.beatLength())
print(taughtPattern.gongPattern)
print(taughtPattern.drumPattern)
print("")
for taughtPattern in fp.taught:
print(taughtPattern.getStrokeByBeat(4), taughtPattern.title)
"""
Explanation: Now we have all the taught patterns! Yay!
Let's get the first taught pattern
End of explanation
"""
total = 0
for pattern in fp.taught:
total += len(pattern.strokes)
print(total)
"""
Explanation: How many strokes total are there in the whole taught set?
End of explanation
"""
lanang = [p for p in fp.taught if 'lanang' in p.title.lower()]
lanang
from music21 import *
ld = text.LanguageDetector()
ld
ld.trigrams
english = ld.trigrams['en']
english.lut
english.lut['be']
ld.trigrams['fr'].lut['be']
ld.mostLikelyLanguage("Das geht so gut heute!")
other = [p for p in fp.taught if 'lanang' not in p.title.lower() and 'wadon' not in p.title.lower()]
other
for p in fp.taught:
print(p.drumType, p.gongPattern)
print(p.drumType, p.drumPattern)
for i in [0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3.25, 3.75]:
for tp in fp.taught:
if tp.drumType != 'wadon':
continue
print(tp.getStrokeByBeat(i), end=' ')
print()
patt = fp.taught[9]
patt
patt.strokes
beat = 0
for stroke in patt.strokes:
print(beat, stroke)
beat = beat + 0.25
5 is 5.0
patt
"""
Explanation: Create a list of all the taught patterns that contain "lanang"
End of explanation
"""
subdivisionSearch = (0, 2)
strokeSearch = ('o', 'l')
for patt in fp.taught:
totalOff = 0
totalAll = 0
if patt.drumType != 'wadon':
continue
for b, s in patt.iterateStrokes():
if b == 0:
continue
if ((b*4) % 4) in subdivisionSearch and s in strokeSearch:
totalOff += 1
if s in strokeSearch:
totalAll += 1
if totalAll > 0:
perc = int(100*(totalOff/totalAll))
else:
perc = 0
print(totalAll, perc, " -- ", patt.title)
for patt in fp.taught:
if patt.title == 'Pak Dewa Wadon 0a with Dag delay':
print(patt.gongPattern)
print(patt.drumPattern)
print(patt.beatLength())
"""
Explanation: Find the percentage of strokes that fall on a particular beat subdivision in patterns for a given drum
End of explanation
"""
subdivisionSearch = (0, 2)
strokeSearch = ('o', 'l')
for patt in fp.taught:
totalOff = 0
totalAll = 0
if patt.drumType != 'lanang':
continue
for b, s in patt.iterateStrokes():
if b == 0:
continue
previousStrokeBeat = b - 0.25
if previousStrokeBeat >= 0:
previousStroke = patt.getStrokeByBeat(previousStrokeBeat)
if previousStroke == s:
continue
nextStrokeBeat = b + 0.25
if nextStrokeBeat <= patt.beatLength():
nextStroke = patt.getStrokeByBeat(nextStrokeBeat)
if nextStroke == s:
continue
if ((b*4) % 4) in subdivisionSearch and s in strokeSearch:
totalOff += 1
if s in strokeSearch:
totalAll += 1
if totalAll > 0:
perc = int(100*(totalOff/totalAll))
else:
perc = 0
print(totalAll, perc, " -- ", patt.title)
import re
re.match('(Pak\s\w+)\s', 'Pak Cok Lanang 7').group(1)
for patt in fp.taught:
print(patt.teacher, '--', patt.title)
"""
Explanation: Find the same as above, but eliminate all double strokes.
End of explanation
"""
|
vravishankar/Jupyter-Books | Functions.ipynb | mit | # Simple Function
def greet():
'''Simple Greet Function'''
print('Hello World')
greet()
"""
Explanation: Functions
A function is a group of related statements that perform a specific task.
Functions help break large programs into smaller, modular chunks
Functions make the code more organised and easier to manage
Functions avoid repetition and thereby promote code reusability
Two types of functions
Built-In Functions
User Defined Functions
Function Syntax
```python
def function_name(arguments):
'''This is the docstring for the function'''
# note the indentation, anything inside the function must be indented
# function code goes here
...
return
calling the function
function_name(arguments)
```
Example 1
End of explanation
"""
# Function with arguments
def greet(name):
'''Simple Greet Function with arguments'''
print('Hello ', name)
greet('John')
# printing the doc string
print(greet.__doc__)
"""
Explanation: Example 2
End of explanation
"""
# Function with return statement
def add_numbers(num1,num2):
return num1 + num2
print(add_numbers(2,3.0))
# Since arguments are not strongly typed you can even pass a string
print(add_numbers('Hello','World'))
"""
Explanation: Example 3
End of explanation
"""
def myfunc():
x = 5
print('Value inside the function ',x)
x = 10
myfunc()
print('Value outside the function',x)
"""
Explanation: Scope and Lifetime of Variables
Variables in Python have local scope, which means parameters and variables defined inside a function are not visible from outside it.
The lifetime of a variable is how long the variable exists in memory. Variables defined inside a function exist only as long as the function executes; they are destroyed once the function returns.
End of explanation
"""
def myfunc():
#x = 5
print('Value inside the function ',x)
x = 10
myfunc()
print('Value outside the function',x)
"""
Explanation: Variables defined outside the function are visible from inside, which means they have a global scope.
End of explanation
"""
def myfunc():
y = 5
print('Value inside the function ',y)
myfunc()
print('Value outside the function',y)
"""
Explanation: Global Variable
Variables declared inside the function are not available outside. The following example will generate an error.
End of explanation
"""
def myfunc():
global z
z = 5
print('Value inside the function ',z)
#z = 10
myfunc()
print('Value outside the function',z)
"""
Explanation: Use the global keyword inside a function when you want to assign to a variable in the global scope; the variable is then also available outside the function.
End of explanation
"""
def greet(name,msg):
'''Simple greet function with name and message arguments'''
print("Hello " + name + ', ' + msg)
greet('John','Good Morning!')
# this will generate error since we missed one argument
greet('John')
"""
Explanation: Function Arguments
Example 1
End of explanation
"""
def greet(name,msg='Good Evening!'):
'''Simple greet function with name and message arguments'''
print("Hello " + name + ', ' + msg)
greet('John')
"""
Explanation: Default Arguments
Default arguments will be used if the argument value is not passed to the function. If a value is passed then it will overwrite the default value.
End of explanation
"""
def greet(name,msg='Good Evening!',salute):
'''Simple greet function with name and message arguments'''
print("Hello " + salute + '.' + name + ', ' + msg)
greet('John','Good Evening','Mr')
def greet(name,msg='Good Evening!',salute='Mr'):
'''Simple greet function with name and message arguments'''
print("Hello " + salute + '.' + name + ', ' + msg)
greet('John','Good Evening','Mrs')
"""
Explanation: One rule for default arguments is that once an argument has a default value then all the arguments to the right of it must also have default values. The following example will produce an error.
End of explanation
"""
def greet(name,msg='Good Evening!',salute='Mr'):
'''Simple greet function with name and message arguments'''
print("Hello " + salute + '.' + name + ', ' + msg)
# keyword arguments
greet(name="Jack",msg="How are you?")
# keyword arguments - out of order
greet(msg='How do you do?',name="Brian")
# mix of keyword and positional arguments
greet("Jill",salute='Ms',msg="Good to see you.")
"""
Explanation: Keyword Arguments
Python allows functions to be called using keyword arguments. When we call functions in this way, the order of the arguments can be changed.
End of explanation
"""
greet(name="Keith","Good Afternoon")
"""
Explanation: However, please note that having a positional argument after a keyword argument will result in an error. For example, the following call will generate an error.
End of explanation
"""
def greet(*names):
'''This function greets all with a Hello'''
for name in names:
print('Hello ',name)
greet('John','Keith','Brian','Jose')
"""
Explanation: Arbitrary Arguments
Python allows an unknown number of arguments to be passed to a function through arbitrary arguments. Arbitrary arguments are declared by placing an asterisk (*) in front of the parameter name
End of explanation
"""
def factorial(num):
if(num <= 0):
return 0
elif(num == 1):
return 1
else:
return(num * factorial(num-1))
num = 4
print("Factorial of number ",num," is ",factorial(num))
"""
Explanation: Recursive Functions
A function that calls itself is called a recursive function.
While recursive functions are clean and elegant, care must be taken as they might take up a lot of memory and time, and they can also be hard to debug.
Please note that a recursive function must also have a base condition that stops the recursion; otherwise the function calls itself indefinitely.
End of explanation
"""
def multiply(x,y):
return x * y
multiply(2,4)
"""
Explanation: Using *args and **kwargs in functions
When programming, you may not be aware of all the possible use cases of your code, and may want to offer more options for future programmers working with the module, or for users interacting with the code. We can pass a variable number of arguments to a function by using *args and **kwargs in our code.
Using *args
End of explanation
"""
def multiply(*args):
x = 1
for num in args:
x *= num
return(x)
print(multiply(2,4))
print(multiply(2,4,5))
print(multiply(3,5,8,9,10))
"""
Explanation: Later, if we decide to extend the multiply function to accept any number of arguments, we need to use the *args feature
End of explanation
"""
def print_values(**kwargs):
for key, value in kwargs.items():
print("The value of {} is {}".format(key, value))
print_values(
name_1="Alex",
name_2="Gray",
name_3="Harper",
name_4="Phoenix",
name_5="Remy",
name_6="Val"
)
"""
Explanation: Using **kwargs
The double asterisk form, **kwargs, is used to pass a keyworded, variable-length argument dictionary to a function. Again, the two asterisks (**) are the important element here; the word kwargs is conventionally used, though not enforced by the language.
End of explanation
"""
def some_args(arg_1, arg_2, arg_3):
print("arg_1:", arg_1)
print("arg_2:", arg_2)
print("arg_3:", arg_3)
args = ("Sammy", "Casey", "Alex")
some_args(*args)
def some_args(arg_1, arg_2, arg_3):
print("arg_1:", arg_1)
print("arg_2:", arg_2)
print("arg_3:", arg_3)
my_list = [2, 3]
some_args(1, *my_list)
def some_kwargs(kwarg_1, kwarg_2, kwarg_3):
print("kwarg_1:", kwarg_1)
print("kwarg_2:", kwarg_2)
print("kwarg_3:", kwarg_3)
kwargs = {"kwarg_1": "Val", "kwarg_2": "Harper", "kwarg_3": "Remy"}
some_kwargs(**kwargs)
"""
Explanation: Ordering Arguments
When ordering arguments within a function or function call, arguments need to occur in a particular order:
Formal positional arguments
*args
Keyword arguments
**kwargs
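A minimal sketch combining all four kinds in the required order (illustrative; the function and argument names are made up):
```python
def ordered_example(pos_1, pos_2, *args, kw_1="default", **kwargs):
    # pos_1/pos_2 are formal positional arguments, args collects extra positionals,
    # kw_1 is a keyword argument with a default, kwargs collects extra keyword arguments
    print(pos_1, pos_2, args, kw_1, kwargs)

ordered_example(1, 2, 3, 4, kw_1="b", kw_2="c")
```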
End of explanation
"""
|
Illedran/NIPSTimeMachine | topic_evolution/topic_evolution.ipynb | gpl-3.0 | import csv
import pandas as pd
import os, re
import codecs
import os
DATA_DIR = "../nips-data"
MODEL_DIR = "../models"
papers = pd.read_csv(os.path.join(DATA_DIR, 'papers.csv'))
with open(os.path.join(MODEL_DIR, 'stopwords.txt')) as f:
stopwords=[]
for line in f:
stopwords.append(line.strip())
"""
Explanation: Topic evolution in NIPS dataset
Corpus Preprocessing
Tokenization
Remove all punctuation and numbers
Replace all whitespace by single spaces
Remove stopwords and keep only the words whose frequency is larger than 2
Student name: Xinyi Kong
End of explanation
"""
papers=papers.sort_values(by='year')
timelist=papers.groupby('year').size()
timeseq=timelist.values
%%time
from collections import defaultdict
pre_papers=[]
for i in range(len(papers)):
paper=re.sub(r'(\-)','',papers['paper_text'][i].lower())
paper=re.sub(r'[\s,\W,\d,_]\s*',' ',paper)
paper=re.split(r'[\s]\s*',paper)
paper=[word for word in paper if len(word)>=3 and word not in stopwords]
frequency = defaultdict(int)
for word in paper:
frequency[word] += 1
paper = [word for word in paper if frequency[word] > 2]
pre_papers.append(paper)
"""
Explanation: Time slice
To train the Dynamic Topic Model, we need to separate the data into different snapshots based on time. In my case, the time slice is one year, so I just sort the papers by year and store the number of papers from 1987 to 2016.
End of explanation
"""
from gensim import corpora
dictionary = corpora.Dictionary(pre_papers)
max_freq = 0.7
min_wordcount = 10
dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq)
corpus = [dictionary.doc2bow(paper) for paper in pre_papers]
"""
Explanation: Dictionary and Corpus
Construct a dictionary
Convert tokenized documents to vectors
Construct a dictionary and filter out words that occur too frequently or too rarely.
Produce the vectorized representation of the documents by computing bag-of-words.
End of explanation
"""
dictionary.save(os.path.join(MODEL_DIR, 'topic_evolution', 'dictionary.dict'))
corpora.MmCorpus.serialize(os.path.join(MODEL_DIR, 'topic_evolution', 'corpus_mm.mm'), corpus)
corpora.BleiCorpus.serialize(os.path.join(MODEL_DIR, 'topic_evolution', 'corpus_lda.lda-c'), corpus)
from gensim.corpora import Dictionary, bleicorpus
bleicorpus.BleiCorpus.serialize(os.path.join(MODEL_DIR, 'topic_evolution', 'corpus_lda1.lda-c'), corpus)
corpus1 = bleicorpus.BleiCorpus(os.path.join(MODEL_DIR, 'topic_evolution', 'corpus_lda1.lda-c'))
"""
Explanation: Save the Dictionary and Corpus
This step can be skipped.
End of explanation
"""
from gensim.models.wrappers.dtmmodel import DtmModel
%%time
dtm_home = os.environ.get('/Users/KK/CS/web/groupwork', "dtm-master")
dtm_path = os.path.join(dtm_home, 'bin', 'dtm') if dtm_home else None
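# Note: gensim's DtmModel wraps Blei's original DTM binary, so dtm_path must point to a locally
# compiled 'dtm' executable; the dtm_home path above is machine-specific and should be adjusted.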
model1 = DtmModel(dtm_path, corpus, timeseq, num_topics=10,
id2word= dictionary, initialize_lda=True)
"""
Explanation: Train and use model
Use the DTM wrapper in gensim to train our model. In my case, the number of topics is 10.
End of explanation
"""
model1.save(os.path.join(MODEL_DIR, 'topic_evolution', 'dtm_papers_10'))
"""
Explanation: Save the model that we trained so that you do not need to train it again.
End of explanation
"""
model1 = DtmModel.load(os.path.join(MODEL_DIR, 'topic_evolution', 'dtm_papers_10'))
import pyLDAvis.gensim
doc_topic, topic_term, doc_lengths, term_frequency, vocab = model1.dtm_vis(time=0, corpus=corpus)
vis_wrapper = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)
pyLDAvis.display(vis_wrapper)
"""
Explanation: Visualising Dynamic Topic Models
the doc_topic file returned by our model can be used later.
End of explanation
"""
p_num=0
p_sum=[ 0 for i in range(10)]
rank_table=[]
for i in range(len(timeseq)):
for j in range(p_num,p_num+timeseq[i]):
index=doc_topic[j].tolist().index(max(doc_topic[j]))
p_sum[index]+=1
for k in range(10):
p_sum[k]=p_sum[k]/timeseq[i]
rank_table.append(p_sum)
p_sum=[ 0 for i in range(10)]
p_num+=timeseq[i]
import numpy as np
import pylab as pl
"""
Explanation: Calculate the number of papers per topic per year and the proportion of each topic per year.
The rank_table can be used to show the evolution of each topic.
p_num: It is used to identify paper indexes for every year.
p_sum: It denotes the proportion of each topic per year.
End of explanation
"""
def topic_evolution(paperid):
pnum=papers['id'].tolist().index(paperid)
tindex=doc_topic[pnum].tolist().index(max(doc_topic[pnum]))
x=[i for i in range(1,31)]
g_label = [i for i in range(1987,2017)]
pl.figure(figsize=(20,7))
for i in range(10):
y=[ 0 for i in range(30)]
for j in range(len(rank_table)):
y[j]=rank_table[j][i]
if i==tindex:
pl.plot(x, y, marker='*',label='topic'+str(i+1))
else:
pl.plot(x, y, label='topic'+str(i+1))
pl.title("Topic Evolution")# give plot a title
pl.xlabel("time")# make axis labels
pl.ylabel("topic ranking")
pl.xlim(0,33)# set axis limits
pl.xticks(x, g_label, rotation=0)
pl.legend(loc='upper right')
pl.grid()
pl.show()# show the plot on the screen
"""
Explanation: Show the evolution of all the topics and highlight the topic that contains the paper ID that we search for.
End of explanation
"""
authors = pd.read_csv(os.path.join(DATA_DIR, 'authors.csv'))
paper_author = pd.read_csv(os.path.join(DATA_DIR, 'paper_authors.csv'))
p_id2a_name=pd.merge(paper_author,authors,left_on='author_id',right_on='id',how='left')
s_papers=papers[['id','year']]
s_pid2aname=p_id2a_name[['paper_id','name']]
info_papers=pd.merge(s_papers,s_pid2aname,left_on='id',right_on='paper_id',how='right')
"""
Explanation: Explore authors' activities
Merge the files and keep the information we need, such as paper IDs, year and author names.
End of explanation
"""
author2doc = dict()
for j in range(len(info_papers)):
if not author2doc.get(info_papers['name'][j]):
author2doc[info_papers['name'][j]]=[]
author2doc[info_papers['name'][j]].extend([info_papers['paper_id'][j]])
def authors_activities(name):
author_topic = dict()
for i in range(len(author2doc[name])):
yrs=info_papers['year'][info_papers['paper_id'].tolist().index(author2doc[name][i])]
if not author_topic.get(yrs):
author_topic[yrs]=[0 for j in range(10)]
pnum=papers['id'].tolist().index(author2doc[name][i])
tindex=doc_topic[pnum].tolist().index(max(doc_topic[pnum]))
author_topic[yrs][tindex]+=1
pl.figure(figsize=(12,5))
labels=['topic'+ str(i+1) for i in range(10)]
x=np.arange(10)
flag=1
for i in author_topic:
if flag:
pl.bar(x,author_topic[i],label=i,tick_label=labels)
lastone=author_topic[i]
flag=0
else:
pl.bar(x,author_topic[i],bottom=lastone,label=i,tick_label=labels)
lastone=[lastone[j]+author_topic[i][j] for j in range(10)]
pl.title(name + "'s activities")# give plot a title
pl.xlabel("topic name")# make axis labels
pl.ylabel("the number of papers")
pl.legend(loc='upper right')
pl.grid()
pl.show()
def paper_information(paperid):
topic_evolution(paperid)
for i in range(len(info_papers)):
if info_papers['id'][i]==paperid:
authors_activities(info_papers['name'][i])
paper_information(2)
authors_activities('Michael I. Jordan')
"""
Explanation: Construct a mapping from author names to paper IDs
End of explanation
"""
|
unmrds/cc-python | .ipynb_checkpoints/Name_Data-checkpoint.ipynb | apache-2.0 | # http://api.census.gov/data/2010/surname
import requests
import json
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: An Introductory Python Workflow: US Census Surname Data
This notebook provides working examples of many of the concepts introduced earlier:
Importing modules or libraries to extend basic Python functionality
Declaring and using variables
Python data types and data structures
Flow control
Using the 2010 surname data from the US Census, we will develop a workflow to accomplish the following:
Retrieve information about the dataset API
Retrieve data about a single surname
Output surname data in tabular form
Visualize surname data using a pie chart
Sample dataset
Decennial Census Surname Files (2010)
https://www.census.gov/data/developers/data-sets/surnames.html
https://api.census.gov/data/2010/surname.html
Citation
US Census Bureau (2016) Decennial Census Surname Files (2010) Retrieved from https://api.census.gov/data/2010/surname.json
1. Import modules
The modules used in this exercise are popular and under active development. Follow the links for more information about methods, syntax, etc.
Requests: http://docs.python-requests.org/en/master/
JSON: https://docs.python.org/3/library/json.html
Pandas: http://pandas.pydata.org/
Matplotlib: https://matplotlib.org/
Look for information about or links to the API, developer's documentation, etc. Helpful examples are often included.
Note that we are providing an alias for Pandas and matplotlib. Whenever we need to call a method from those modules, we can use the alias.
End of explanation
"""
# First, get the basic info about the dataset.
# References: Dataset API (https://api.census.gov/data/2010/surname.html)
# Requests API (http://docs.python-requests.org/en/master/)
# Python 3 JSON API (https://docs.python.org/3/library/json.html)
api_base_url = "http://api.census.gov/data/2010/surname"
api_info = requests.get(api_base_url)
api_json = api_info.json()
# Uncomment the next line(s) to see the response content.
# NOTE: JSON and TEXT don't look much different to us. They can look very different to a machine!
#print(api_info.text)
print(json.dumps(api_json, indent=4))
# The output is a dictionary - data are stored as key:value pairs and can be nested.
# Request and store a local copy of the dataset variables.
# Note that the URL could be hard coded just from referencing the API, but
# we are navigating the JSON data.
var_link = api_json['dataset'][0]['c_variablesLink']
print(var_link)
# Use the variable info link to make a new request
variables = requests.get(var_link)
jsonData = variables.json()
variable_data = jsonData['variables']
# Note that this is a dictionary of dictionaries.
print(json.dumps(variable_data, indent=4))
print(variable_data.keys())
"""
Explanation: 2. Basic interactions with the Census dataset API
Combining data from two API endpoints into a human-readable table
The dataset in our example is not excessively large, so we can explore different approaches to interacting with it:
Download some or all data to the computer we're using ('local'). Keep the data in memory and do stuff.
Only download what we need when we need it. Doing stuff may require additional calls to the API.
Both have pros and cons. Both are used in the following examples.
Some points of interest:
The data are not provided in tabular form
Human-readable variable names and definitions are stored separately from the data
In order to make a human readable table we need to:
Download variable definitions
Download data
Replace shorthand variable codes with human readable names
Reformat the data into a table
Get API and variable information:
End of explanation
"""
# References: Pandas (http://pandas.pydata.org/)
# Default vars: 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
desired_vars = 'NAME,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC&RANK=1:10' # Top ten names
base_url = 'http://api.census.gov/data/2010/surname?get='
query_url = base_url + desired_vars
name_stats = requests.get(query_url)
surname_data = name_stats.json()
# The response data are not very human readable.
# Note that this is a list of lists. Data within lists are typically accessed by position number. (There are no keys.)
print('Raw response data:\n')
print(json.dumps(surname_data, indent=4))
"""
Explanation: Get surname data:
End of explanation
"""
# Pass the data to a Pandas dataframe.
# In addition to being easier to read, dataframes simplify further analysis.
# The simplest dataframe would use the variable names returned with the data. Example: PCTWHITE
# It's easier to read the descriptive labels provided via the variables API.
# The code block below replaces variable names with labels as it builds the dataframe.
column_list = []
for each in surname_data[0]: # For each variable in the response data (stored as surname_data[0])
label = variable_data[each]['label'] # look up that variable's label in the variable dictionary
column_list.append(label) # add the variable's label to the list of column headers
print(each, ":", label)
print('\n', column_list)
"""
Explanation: Laying out the API response like a table helps illustrate what we're doing here. For easier reading the "surname_data" variable has been replaced with "d" in the image below.
The variable codes in d[0] will be replaced with human readable descriptions from the variable list (v).
Replace variable codes with human readable labels
End of explanation
"""
df = pd.DataFrame([surname_data[1]], columns=column_list) # Create a dataframe using the column names created above,
                                                           # seeded with the first data row (surname_data[1]).
# Now append the remaining surname data rows to the dataframe:
for surname in surname_data[2:]:
tdf = pd.DataFrame([surname], columns=column_list)
df = df.append(tdf)
print('\n\nPandas dataframe:')
df.sort_values(by=["National Rank"])
"""
Explanation: Create a dataframe (table) with variable labels as column names and append data
End of explanation
"""
# Try 'STEUBEN' in order to break the first pie chart example.
# Update 2020-02-26: Surnames should be all caps!
name = 'WHEELER'
name_query = '&NAME=' + name
"""
Explanation: Exercises
Change the table to include the 50 most common surnames. Alternatively, create a table for the 11th - 20th most common surnames.
Sort the output table by surname or a demographic.
Change the request/table to include only surname, rank, and count.
Correct the sort order in the example table.
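(As a starting point for the first exercise, only the RANK filter in the query string needs to change, e.g. replacing &RANK=1:10 with &RANK=1:50 - illustrative, following the request pattern used above.)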
3. Download and tabulate statistics for a given surname
Now let's find out the rank and demographic breakdown of a particular name.
To make it easy to change the name we're looking up, assign it to a variable.
End of explanation
"""
# Default vars: 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
desired_vars = 'RANK,COUNT,PCTWHITE,PCTAPI,PCT2PRACE,PCTAIAN,PCTBLACK,PCTHISPANIC'
"""
Explanation: Referring to the variables API, decide which variables are of interest and edit accordingly.
End of explanation
"""
# References: Pandas (http://pandas.pydata.org/)
base_url = 'http://api.census.gov/data/2010/surname?get='
query_url = base_url + desired_vars + name_query
name_stats = requests.get(query_url)
d = name_stats.json()
# The response data are not very human readable.
print('Raw response data:\n')
print(d)
# Pass the data to a Pandas dataframe.
# In addition to being easier to read, dataframes simplify further analysis.
# The simplest dataframe would use the variable names returned with the data. Example: PCTWHITE
# It's easier to read the descriptive labels provided via the variables API.
# The code block below replaces variable names with labels as it builds the dataframe.
column_list = []
for each in d[0]: # For each variable in the response data (stored as d[0])
    label = variable_data[each]['label'] # look up that variable's label in the variable dictionary downloaded earlier
column_list.append(label) # add the variable's label to the list of column headers
df = pd.DataFrame([d[1]], columns=column_list) # Create a dataframe using the column names created above. Data
# for the dataframe comes from d[1]
print('\n\nPandas dataframe:')
df
"""
Explanation: Build the query URL and send the request. Pass the response data into a Pandas dataframe for viewing.
End of explanation
"""
# Using index positions is good for doing something quick, but in this case makes code easy to break.
# Selecting different surname dataset variables or re-ordering variables will result in errors.
print(d)
pcts = d[1][2:8]
print('\n\n',pcts)
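# A less brittle alternative (a sketch, not part of the original tutorial): look the
# percentage fields up by variable name in the raw response instead of by fixed positions,
# assuming d[0] lists the variable names in the same order as the values in d[1].
wanted = ['PCTWHITE', 'PCTAPI', 'PCT2PRACE', 'PCTAIAN', 'PCTBLACK', 'PCTHISPANIC']
by_name = dict(zip(d[0], d[1]))             # map variable name -> value for this surname
pcts_by_name = [by_name[w] for w in wanted]
print(pcts_by_name)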
# Create the labels and get the data for the pie chart.
# Note that we are using the downloaded source data, not the dataframe
# used for the table above.
labels = ['White', 'Asian', '2+ Races', 'Native American', 'Black', 'Hispanic']
pcts = d[1][2:8]
#print(pcts)
# Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html)
plt.pie(
# using data percentages
pcts,
# Use labels defined above
labels=labels,
# with no shadows
shadow=False,
# with the start angle at 90 degrees
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# Make the pie circular (equal aspect ratio)
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
"""
Explanation: 4. Create a name demographic pie-chart
End of explanation
"""
# First try - just replace string with a zero.
# Here, the for loop iterates through items in a list.
pcts2 = []
for p in pcts:
    if p != '(S)':
        pcts2.append(p)
    else:
        pcts2.append(0)
# Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html)
plt.pie(
# using data percentages
pcts2,
# Use labels defined above
labels=labels,
# with no shadows
shadow=False,
# with the start angle at 90 degrees
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# Make the pie circular (equal aspect ratio)
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
# Second try - exclude the value and its corresponding label if the source data for a given demographic == '(S)'
# This requires the list index of the data and the label.
# The for loop in this case iterates across a range of integers equal to the length of the list.
pcts3 = []
edit_labels = []
for i in range(len(pcts)):
    print(pcts[i])
    if pcts[i] != '(S)':
        pcts3.append(pcts[i])
        edit_labels.append(labels[i])
    else:
        pass
# Create a pie chart (https://matplotlib.org/2.0.2/examples/pie_and_polar_charts/pie_demo_features.html)
plt.pie(
# using data percentages
pcts3,
# Use labels defined above
labels=edit_labels,
# with no shadows
shadow=False,
# with the start angle at 90 degrees
startangle=90,
# with the percent listed as a fraction
autopct='%1.1f%%',
)
# Make the pie circular (equal aspect ratio)
plt.axis('equal')
# View the plot
plt.tight_layout()
plt.show()
"""
Explanation: 5. Fix the data type error
End of explanation
"""
|
SciTools/courses | course_content/iris_course/5.Cube_Plotting.ipynb | gpl-3.0 | import iris
"""
Explanation: Iris introduction course
5. Cube Plotting
Learning Outcome: by the end of this section, you will be able to visualise the data stored in Iris Cubes.
Duration: 30 mins
Overview:<br>
5.1 Plotting Data<br>
5.2 Maps with cartopy<br>
5.3 Exercise<br>
5.4 Summary of the Section
Setup
End of explanation
"""
import iris.plot as iplt
import iris.quickplot as qplt
import matplotlib.pyplot as plt
airtemps = iris.load_cube(iris.sample_data_path('A1B_north_america.nc'))
timeseries = airtemps[-1, 20, ...]
print(timeseries)
qplt.plot(timeseries)
plt.show()
"""
Explanation: 5.1 Plotting data<a id='plotting'></a>
Iris comes with two plotting modules called iris.plot and iris.quickplot that "wrap" some of the common matplotlib plotting functions, such that cubes can be passed as input rather than the usual NumPy arrays. The Iris plot routines will also pass on any other arguments and keywords to the underlying matplotlib methods.
The 'plot' and 'quickplot' modules are very similar, with the primary difference being that quickplot will add extra information to the axes, such as:
a colorbar,
labels for the x and y axes, and
a title where possible.
End of explanation
"""
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.1a
"""
Explanation: <div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>Compare the effects of <b><font face='courier'>iplt.plot</font></b> next to <b><font face='courier'>qplt.plot</font></b> for the above data.
<br>What is the visible difference?</p>
</div>
End of explanation
"""
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.1b
"""
Explanation: Notice that, although the result of qplt has axis labels and a title, everything else about the axes is identical.
The plotting functions in Iris have strict rules on the dimensionality of the inputted cubes. For example, a 2d cube will be needed in order to create a contour plot.
<div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>What happens if you try to apply the '<b><font face='courier'>qplt.contourf</font></b>' plot method to the 'airtemps' cube (i.e. the <i>whole</i> cube) ?</p>
</div>
End of explanation
"""
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.1c
"""
Explanation: <div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>How can you extract a 2-dimensional section of this data, to make a useful contour plot?</p>
</div>
End of explanation
"""
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.1d
"""
Explanation: A useful alternative to contouring is to make a colour 'blockplot', which colours in each datapoint rather than drawing contours. This works well where contours would be too dense and complicated, or if you need to look at every point in the data.
In matplotlib, the <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.pcolormesh.html"><b><font face="courier">plt.pcolormesh</font></b></a> method does this.
<div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>Plot the Iris equivalent of the colour blockplot method
<a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.pcolormesh.html">matplotlib.pyplot.pcolormesh</a> for the first timestep of the 'airtemps' data, i.e. <b><font face="courier" color="black">airtemps[0]</font></b>.
<br>Plot just a small region, so you can see the individual data points.</p>
</div>
End of explanation
"""
import cartopy.crs as ccrs
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
qplt.contourf(airtemps[0, ...], 25)
ax = plt.gca()
ax.coastlines()
ax = plt.subplot(1, 2, 2, projection=ccrs.RotatedPole(100, 37))
qplt.contourf(airtemps[0, ...], 25)
ax.coastlines()
plt.show()
"""
Explanation: Almost all the Iris plot methods have both iplt and qplt versions.
Also, most of these have the same names as similar methods in matplotlib.pyplot.
5.2 Maps with cartopy <a id='maps'></a>
When the result of a plot operation is a map, Iris will automatically create an appropriate cartopy axes if one doesn't already exist.
We can use matplotlib's gca() function to get hold of the automatically created cartopy axes:
End of explanation
"""
# space for user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.3a
"""
Explanation: 5.3 Section Review Exercise <a id='exercise'></a>
Use the above cube, with appropriate indexing, to produce the following:
1. a contourf map on a LambertConformal projection (with coastlines)
End of explanation
"""
# space for user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.3b
"""
Explanation: 2. a block plot (pcolormesh) map in its native projection (with coastlines)
End of explanation
"""
# space for user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_5.3c
"""
Explanation: 3. a scatter plot showing air_temperature vs longitude (hint: the inputs to scatter can be a combination of coordinates or 1D cubes)
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/sandbox-1/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
ivukotic/ML_platform_tests | tutorial/jupyter python numpy plotting/3_NumPy_Basics.ipynb | gpl-3.0 | import numpy as np
from __future__ import print_function
"""
Explanation: NumPy Basics
Numerical Python, or "NumPy" for short, is a foundational package on which many of the most common data science packages are built. Numpy provides us with high performance multi-dimensional arrays which we can use as vectors or matrices.
The key features of numpy are:
ndarrays: n-dimensional arrays of the same data type which are fast and space-efficient. There are a number of built-in methods for ndarrays which allow for rapid processing of data without using loops (e.g., compute the mean).
Broadcasting: a useful tool which defines implicit behavior between multi-dimensional arrays of different sizes.
Vectorization: enables numeric operations on ndarrays.
Input/Output: simplifies reading and writing of data from/to file.
Additional Recommended Resources:
- Numpy Documentation
In this brief tutorial, I will demonstrate some of the common NumPy operations you will see during the rest of the week.
End of explanation
"""
np.arange(-1.0, 1.0, 0.1)
print(np.random.randint(0, 5, size=10))
print(np.ones(10))
print(np.zeros(10))
rank1_array = np.array([3, 33, 333])
print(type(rank1_array))
print(rank1_array.shape)
print(rank1_array.size)
print(rank1_array.dtype)
print(rank1_array[0], rank1_array[1], rank1_array[2])
print(rank1_array[:], rank1_array[1:], rank1_array[:2])
"""
Explanation: A common habit is to import under the np namespace as you will often find yourself typing numpy a lot otherwise. Two letters is easier on your fingers and your computer.
Rank 1
End of explanation
"""
np.ones((10,2)) # 10 rows, 2 columns
np.zeros((2,10)) # 2 rows, 10 columns
np.eye(10,10)*3 # diagonal of 1s but multiplied by 3
rank2_array = np.array([[11,12,13],[21,22,23],[31,32,33]])
print(type(rank2_array))
print(rank2_array.shape)
print(rank2_array.size)
print(rank2_array.dtype)
print(rank2_array[0], rank2_array[1], rank2_array[2])
print(rank2_array[:]) # print everything in array
print(rank2_array[1:]) # slice from 2nd row and on
print(rank2_array[:,0]) # all rows, but 1st column
print(rank2_array[:,1]) # all rows, but 2nd column
print(rank2_array[:,2]) # all rows, but 3rd column
print(rank2_array[0,1]) # i=0, j=1 of the 3x3 matrix we just made
"""
Explanation: Rank 2
End of explanation
"""
np.random.randint(0, 5, (2,5,5)) # 2 x 5 x 5 [3D matrix!]
np.random.randint(0, 5, (2,5,5)).shape
"""
Explanation: Rank 3 and beyond!
End of explanation
"""
np.arange(72).reshape(3,24)
np.arange(72).reshape(24,3).T # transpose; this is not the same as above! beware
"""
Explanation: Reshaping and Slicing Arrays
Oftentimes, we would like to change up the dimensions a bit. One natural way to do this with NumPy is to reshape arrays. Let's start with a 1-dimensional array of 72 elements to help understand how things get re-ordered or changed around.
End of explanation
"""
np.arange(72).reshape(3, 2, -1) # -1 means to let NumPy figure out the size of the remaining dimension
np.arange(72).reshape(3, -1, 12) # -1 means to let NumPy figure out the size of the remaining dimension
np.arange(36).reshape(6, 6)
"""
Explanation: Note that the transpose is just ndarray().T. But remember, things are not always what they seem. The above two examples have the exact same dimensionality -- but the reshaping will slice up the vector in different ways! Be careful!
End of explanation
"""
np.arange(36).reshape(6,6)[2:4,:3]
"""
Explanation: We can even combine multiple indices with Python slicing!
End of explanation
"""
unfiltered_arr = np.arange(72).reshape(3, -1, 12)
unfiltered_arr
condition = unfiltered_arr % 3 == 0 # divisible by 3
condition # this is a bitmask!
unfiltered_arr[condition] # boolean indexing returns a copy of the matching elements (as a 1D array), not a view
unfiltered_arr[condition] = 0 # only change the values matching the condition
unfiltered_arr
unfiltered_arr.reshape(-1) # flatten it back!
"""
Explanation: Filtering
End of explanation
"""
|
david-hoffman/scripts | notebooks/mandelbrot_numbapro.ipynb | apache-2.0 | %pylab inline
import numpy as np
from timeit import default_timer as timer
"""
Explanation: A NumbaPro Mandelbrot Example
This notebook was written by Mark Harris based on code examples from Continuum Analytics that I modified somewhat. This is an example that demonstrates accelerating a Mandelbrot fractal computation using "CUDA Python" with NumbaPro.
Let's start with a basic Python Mandelbrot set. We use a numpy array for the image and display it using pylab imshow.
End of explanation
"""
def mandel(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Mandelbrot
    set given a fixed number of iterations.
    """
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z*z + c
        if (z.real*z.real + z.imag*z.imag) >= 4:
            return i
    return max_iters
"""
Explanation: The mandel function performs the Mandelbrot set calculation for a given (x,y) position on the imaginary plane. It returns the number of iterations before the computation "escapes".
End of explanation
"""
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
    height = image.shape[0]
    width = image.shape[1]
    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    for x in range(width):
        real = min_x + x * pixel_size_x
        for y in range(height):
            imag = min_y + y * pixel_size_y
            color = mandel(real, imag, iters)
            image[y, x] = color
"""
Explanation: create_fractal iterates over all the pixels in the image, computing the complex coordinates from the pixel coordinates, and calls the mandel function at each pixel. The return value of mandel is used to color the pixel.
End of explanation
"""
image = np.zeros((1024, 1536), dtype = np.uint8)
start = timer()
create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
dt = timer() - start
print("Mandelbrot created in %f s" % dt)
imshow(image)
"""
Explanation: Next we create a 1024x1536 pixel image as a numpy array of bytes. We then call create_fractal with appropriate coordinates to fit the whole mandelbrot set.
End of explanation
"""
create_fractal(-2.0, -1.7, -0.1, 0.1, image, 20)
imshow(image)
"""
Explanation: You can play with the coordinates to zoom in on different regions in the fractal.
End of explanation
"""
from numba import autojit
@autojit
def mandel(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Mandelbrot
    set given a fixed number of iterations.
    """
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z*z + c
        if (z.real*z.real + z.imag*z.imag) >= 4:
            return i
    return max_iters

@autojit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
    height = image.shape[0]
    width = image.shape[1]
    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    for x in range(width):
        real = min_x + x * pixel_size_x
        for y in range(height):
            imag = min_y + y * pixel_size_y
            color = mandel(real, imag, iters)
            image[y, x] = color
"""
Explanation: Faster Execution with Numba
Numba is a Numpy-aware dynamic Python compiler based on the popular LLVM compiler infrastructure.
Numba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code. It is aware of NumPy arrays as typed memory regions and so can speed up code using NumPy arrays, such as our Mandelbrot functions.
The simplest way to use Numba is to decorate the functions you want to compile with @autojit. Numba will compile them for the CPU (if it can resolve the types used).
End of explanation
"""
image = np.zeros((1024, 1536), dtype = np.uint8)
start = timer()
create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
dt = timer() - start
print("Mandelbrot created in %f s" % dt)
imshow(image)
"""
Explanation: Let's run the @autojit code and see if it is faster.
End of explanation
"""
from numbapro import cuda
from numba import *
mandel_gpu = cuda.jit(restype=uint32, argtypes=[f8, f8, uint32], device=True)(mandel)
"""
Explanation: On my desktop computer, the time to compute the 1024x1024 mandelbrot set dropped from 6.92s down to 0.06s. That's a speedup of 115x! The reason this is so much faster is that Numba uses Numpy type information to convert the dynamic Python code into statically compiled machine code, which is many times faster to execute than dynamically typed, interpreted Python code.
Even Bigger Speedups with CUDA Python
Anaconda, from Continuum Analytics, is a "completely free enterprise-ready Python distribution for large-scale data processing, predictive analytics, and scientific computing." Anaconda Accelerate is an add-on for Anaconda that includes the NumbaPro Python compiler.
NumbaPro is an enhanced Numba that targets multi-core CPUs and GPUs directly from simple Python syntax, providing the performance of compiled parallel code with the productivity of the Python language.
CUDA Python
In addition to various types of automatic vectorization and generalized Numpy Ufuncs, NumbaPro also enables developers to access the CUDA parallel programming model using Python syntax. With CUDA Python, you use parallelism explicitly just as in other CUDA languages such as CUDA C and CUDA Fortran.
Let's write a CUDA version of our Python Mandelbrot set. We need to import cuda from the numbapro module. Then, we need to create a version of the mandel function compiled for the GPU. We can do this without any code duplication by calling cuda.jit on the function, providing it with the return type and the argument types, and specifying device=True to indicate that this is a function that will run on the GPU device.
End of explanation
"""
@cuda.jit(argtypes=[f8, f8, f8, f8, uint8[:,:], uint32])
def mandel_kernel(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
startX, startY = cuda.grid(2)
gridX = cuda.gridDim.x * cuda.blockDim.x;
gridY = cuda.gridDim.y * cuda.blockDim.y;
for x in range(startX, width, gridX):
real = min_x + x * pixel_size_x
for y in range(startY, height, gridY):
imag = min_y + y * pixel_size_y
image[y, x] = mandel_gpu(real, imag, iters)
"""
Explanation: In CUDA, a kernel is a function that runs in parallel using many threads on the device. We can write a kernel version of our mandelbrot function by simply assuming that it will be run by a grid of threads. NumbaPro provides the familiar CUDA threadIdx, blockIdx, blockDim and gridDim intrinsics, as well as a grid() convenience function which evaluates to blockDim * blockIdx + threadIdx.
Our example just needs a minor modification to compute a grid-size stride for the x and y ranges, since we will have many threads running in parallel. We add these three lines:
startX, startY = cuda.grid(2)
gridX = cuda.gridDim.x * cuda.blockDim.x;
gridY = cuda.gridDim.y * cuda.blockDim.y;
And we modify the range in the x loop to use range(startX, width, gridX) (and likewise for the y loop).
We decorate the function with @cuda.jit, passing it the type signature of the function. Since kernels cannot have a return value, we do not need the restype argument.
End of explanation
"""
gimage = np.zeros((1024, 1536), dtype = np.uint8)
blockdim = (32, 8)
griddim = (32,16)
start = timer()
d_image = cuda.to_device(gimage)
mandel_kernel[griddim, blockdim](-2.0, 1.0, -1.0, 1.0, d_image, 20)
d_image.to_host()
dt = timer() - start
print("Mandelbrot created on GPU in %f s" % dt)
imshow(gimage)
"""
Explanation: Device Memory
CUDA kernels must operate on data allocated on the device. NumbaPro provides the cuda.to_device() function to copy a Numpy array to the GPU.
d_image = cuda.to_device(image)
The return value (d_image) is of type DeviceNDArray, which is a subclass of numpy.ndarray, and provides the to_host() function to copy the array back from GPU to CPU memory
d_image.to_host()
Launching Kernels
To launch a kernel on the GPU, we must configure it, specifying the size of the grid in blocks, and the size of each thread block. For a 2D image calculation like the Mandelbrot set, we use a 2D grid of 2D blocks. We'll use blocks of 32x8 threads, and launch 32x16 of them in a 2D grid so that we have plenty of blocks to occupy all of the multiprocessors on the GPU.
Putting this all together, we launch the kernel like this.
End of explanation
"""
|
flsantos/startup_acquisition_forecast | exploratory_code/2_dataset_preparation.ipynb | mit | import pandas as pd
startups = pd.read_csv('data/startups_1_1.csv', index_col=0)
startups[:3]
"""
Explanation: Dataset Preparation
Here we'll remove NaNs, normalize numerical features, convert date features to normalized numerical values, and so on...
Importing the dataset
End of explanation
"""
#drop features
startups_dropped_features = startups.drop(['name','homepage_url', 'category_list', 'region', 'city', 'country_code'], 1)
#move status to the end
cols = list(startups_dropped_features)
cols.append(cols.pop(cols.index('status')))
startups_dropped_features = startups_dropped_features.loc[:, cols]
startups_dropped_features[:3]
"""
Explanation: Dropping some features
We'll drop name, homepage_url, category_list, region, city and country_code
We'll also move status to the end of the dataframe
End of explanation
"""
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
startups_normalized = startups_dropped_features.copy()
#Convert '-' to zeros in funding_total_usd
startups_normalized['funding_total_usd'] = startups_normalized['funding_total_usd'].replace('-', 0)
columns_to_scale = list(startups_normalized.filter(regex=(".*(funding_rounds|funding_total_usd)|(number_of|avg_).*")).columns)
startups_normalized[columns_to_scale] = min_max_scaler.fit_transform(startups_normalized[columns_to_scale])
startups_normalized[:3]
"""
Explanation: Normalizing numeric variables
End of explanation
"""
from datetime import datetime
from dateutil import relativedelta
def date_to_age_in_months(date):
if date != date or date == 0: #is NaN
return 0
date1 = datetime.strptime(date, '%Y-%m-%d')
date2 = datetime.strptime('2017-01-01', '%Y-%m-%d') #get age until 01/01/2017
delta = relativedelta.relativedelta(date2, date1)
return delta.years * 12 + delta.months
startups_dates_normalized = startups_normalized.copy()
startups_dates_normalized['founded_at'] = startups_dates_normalized['founded_at'].map(date_to_age_in_months)
startups_dates_normalized['first_funding_at'] = startups_dates_normalized['first_funding_at'].map(date_to_age_in_months)
startups_dates_normalized['last_funding_at'] = startups_dates_normalized['last_funding_at'].map(date_to_age_in_months)
startups_dates_normalized[:3]
startups_dates_normalized[['founded_at', 'first_funding_at', 'last_funding_at']] = min_max_scaler.fit_transform(startups_dates_normalized[['founded_at', 'first_funding_at', 'last_funding_at']])
startups_dates_normalized[:3]
"""
Explanation: Normalizing date variables
End of explanation
"""
startups_dates_normalized['status'].unique()
startups_dates_normalized['status'].value_counts()
"""
Explanation: Analyzing types of status
End of explanation
"""
startups_dates_normalized.to_csv('data/startups_2.csv')
"""
Explanation: Saving the prepared dataset to CSV
End of explanation
"""
|
rbharath/deepchem | examples/broken/protein_ligand_complex_notebook.ipynb | mit | %load_ext autoreload
%autoreload 2
%pdb off
# set DISPLAY = True when running tutorial
DISPLAY = False
# set PARALLELIZE to true if you want to use ipyparallel
PARALLELIZE = False
import warnings
warnings.filterwarnings('ignore')
dataset_file= "../datasets/pdbbind_core_df.pkl.gz"
from deepchem.utils.save import load_from_disk
dataset = load_from_disk(dataset_file)
"""
Explanation: deepchem: Machine Learning models for Drug Discovery
Tutorial 1: Basic Protein-Ligand Complex Featurized Models
Written by Evan Feinberg and Bharath Ramsundar
Copyright 2016, Stanford University
Welcome to the deepchem tutorial. In this iPython Notebook, one can follow along with the code below to learn how to fit machine learning models with rich predictive power on chemical datasets.
Overview:
In this tutorial, you will trace an arc from loading a raw dataset to fitting a cutting edge ML technique for predicting binding affinities. This will be accomplished by writing simple commands to access the deepchem Python API, encompassing the following broad steps:
Loading a chemical dataset, consisting of a series of protein-ligand complexes.
Featurizing each protein-ligand complexes with various featurization schemes.
Fitting a series of models with these featurized protein-ligand complexes.
Visualizing the results.
First, let's point to a "dataset" file. This can come in the format of a CSV file or Pandas DataFrame. Regardless
of file format, it must be columnar data, where each row is a molecular system, and each column represents
a different piece of information about that system. For instance, in this example, every row reflects a
protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string
of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines
in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone.
This should become clearer with the example. (Make sure to set DISPLAY = True)
End of explanation
"""
print("Type of dataset is: %s" % str(type(dataset)))
print(dataset[:5])
print("Shape of dataset is: %s" % str(dataset.shape))
"""
Explanation: Let's see what dataset looks like:
End of explanation
"""
import nglview
import tempfile
import os
import mdtraj as md
import numpy as np
import deepchem.utils.visualization
from deepchem.utils.visualization import combine_mdtraj, visualize_complex, convert_lines_to_mdtraj
first_protein, first_ligand = dataset.iloc[0]["protein_pdb"], dataset.iloc[0]["ligand_pdb"]
protein_mdtraj = convert_lines_to_mdtraj(first_protein)
ligand_mdtraj = convert_lines_to_mdtraj(first_ligand)
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
"""
Explanation: One of the missions of deepchem is to form a synapse between the chemical and the algorithmic worlds: to be able to leverage the powerful and diverse array of tools available in Python to analyze molecules. This ethos applies to visual as much as quantitative examination:
End of explanation
"""
from deepchem.featurizers.fingerprints import CircularFingerprint
from deepchem.featurizers.basic import RDKitDescriptors
from deepchem.featurizers.nnscore import NNScoreComplexFeaturizer
from deepchem.featurizers.grid_featurizer import GridFeaturizer
grid_featurizer = GridFeaturizer(voxel_width=16.0, feature_types="voxel_combined", voxel_feature_types=["ecfp",
"splif", "hbond", "pi_stack", "cation_pi", "salt_bridge"], ecfp_power=5, splif_power=5,
parallel=True, flatten=True)
compound_featurizers = [CircularFingerprint(size=128)]
# TODO(rbharath, enf): The grid featurizer breaks. Need to debug before code release
complex_featurizers = []
#complex_featurizers = [grid_featurizer]
"""
Explanation: Now that we're oriented, let's use ML to do some chemistry.
So, step (2) will entail featurizing the dataset.
The available featurizations that come standard with deepchem are ECFP4 fingerprints, RDKit descriptors, NNScore-style descriptors, and hybrid binding pocket descriptors. Details can be found on deepchem.io.
End of explanation
"""
#Make a directory in which to store the featurized complexes.
import tempfile, shutil
base_dir = "./tutorial_output"
if not os.path.exists(base_dir):
os.makedirs(base_dir)
data_dir = os.path.join(base_dir, "data")
if not os.path.exists(data_dir):
os.makedirs(data_dir)
featurized_samples_file = os.path.join(data_dir, "featurized_samples.joblib")
feature_dir = os.path.join(base_dir, "features")
if not os.path.exists(feature_dir):
os.makedirs(feature_dir)
samples_dir = os.path.join(base_dir, "samples")
if not os.path.exists(samples_dir):
os.makedirs(samples_dir)
train_dir = os.path.join(base_dir, "train")
if not os.path.exists(train_dir):
os.makedirs(train_dir)
valid_dir = os.path.join(base_dir, "valid")
if not os.path.exists(valid_dir):
os.makedirs(valid_dir)
test_dir = os.path.join(base_dir, "test")
if not os.path.exists(test_dir):
os.makedirs(test_dir)
model_dir = os.path.join(base_dir, "model")
if not os.path.exists(model_dir):
os.makedirs(model_dir)
import deepchem.featurizers.featurize
from deepchem.featurizers.featurize import DataFeaturizer
featurizers = compound_featurizers + complex_featurizers
featurizer = DataFeaturizer(tasks=["label"],
smiles_field="smiles",
protein_pdb_field="protein_pdb",
ligand_pdb_field="ligand_pdb",
compound_featurizers=compound_featurizers,
complex_featurizers=complex_featurizers,
id_field="complex_id",
verbose=False)
if PARALLELIZE:
from ipyparallel import Client
c = Client()
dview = c[:]
else:
dview = None
featurized_samples = featurizer.featurize(dataset_file, feature_dir, samples_dir,
worker_pool=dview, shard_size=32)
from deepchem.utils.save import save_to_disk, load_from_disk
save_to_disk(featurized_samples, featurized_samples_file)
featurized_samples = load_from_disk(featurized_samples_file)
"""
Explanation: Note how we separate our featurizers into those that featurize individual chemical compounds, compound_featurizers, and those that featurize molecular complexes, complex_featurizers.
Now, let's perform the actual featurization. Calling featurizer.featurize() will return an instance of class FeaturizedSamples. Internally, featurizer.featurize() (a) computes the user-specified features on the data, (b) transforms the inputs into X and y NumPy arrays suitable for ML algorithms, and (c) constructs a FeaturizedSamples() instance that has useful methods, such as an iterator, over the featurized data.
End of explanation
"""
splittype = "random"
train_samples, test_samples = featurized_samples.train_test_split(
splittype, train_dir, test_dir, seed=2016)
"""
Explanation: Now, we conduct a train-test split. If you'd like, you can choose splittype="scaffold" instead to perform a train-test split based on Bemis-Murcko scaffolds.
End of explanation
"""
from deepchem.utils.dataset import Dataset
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=compound_featurizers, tasks=["label"])
test_dataset = Dataset(data_dir=test_dir, samples=test_samples,
featurizers=compound_featurizers, tasks=["label"])
"""
Explanation: We generate separate instances of the Dataset() object to hermetically seal the train dataset from the test dataset. This style lends itself easily to validation-set type hyperparameter searches, which we will illustrate in a separate section of this tutorial.
End of explanation
"""
from deepchem.transformers import NormalizationTransformer
from deepchem.transformers import ClippingTransformer
input_transformers = [NormalizationTransformer(transform_X=True, dataset=train_dataset),
ClippingTransformer(transform_X=True, dataset=train_dataset)]
output_transformers = [NormalizationTransformer(transform_y=True, dataset=train_dataset)]
transformers = input_transformers + output_transformers
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
transformer.transform(test_dataset)
"""
Explanation: The performance of many ML algorithms hinges greatly on careful data preprocessing. Deepchem comes standard with a few options for such preprocessing.
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
from deepchem.models.standard import SklearnModel
task_types = {"label": "regression"}
model_params = {"data_shape": train_dataset.get_data_shape()}
model = SklearnModel(task_types, model_params, model_instance=RandomForestRegressor())
model.fit(train_dataset)
model_dir = tempfile.mkdtemp()
model.save(model_dir)
from deepchem.utils.evaluate import Evaluator
import pandas as pd
evaluator = Evaluator(model, train_dataset, output_transformers, verbose=True)
with tempfile.NamedTemporaryFile() as train_csv_out:
with tempfile.NamedTemporaryFile() as train_stats_out:
_, train_r2score = evaluator.compute_model_performance(
train_csv_out, train_stats_out)
evaluator = Evaluator(model, test_dataset, output_transformers, verbose=True)
test_csv_out = tempfile.NamedTemporaryFile()
with tempfile.NamedTemporaryFile() as test_stats_out:
_, test_r2score = evaluator.compute_model_performance(
test_csv_out, test_stats_out)
print(test_csv_out.name)
train_test_performance = pd.concat([train_r2score, test_r2score])
train_test_performance["split"] = ["train", "test"]
train_test_performance
"""
Explanation: Now, we're ready to do some learning! To set up a model, we will need: (a) a dictionary task_types that maps a task, in this case label, i.e. the Ki, to the type of the task, in this case regression. For the multitask use case, one will have a series of keys, each of which is a different task (Ki, solubility, renal half-life, etc.) that maps to a different task type (regression or classification).
To fit a deepchem model, first we instantiate one of the provided (or user-written) model classes. In this case, we have a created a convenience class to wrap around any ML model available in Sci-Kit Learn that can in turn be used to interoperate with deepchem. To instantiate an SklearnModel, you will need (a) task_types, (b) model_params, another dict as illustrated below, and (c) a model_instance defining the type of model you would like to fit, in this case a RandomForestRegressor.
End of explanation
"""
predictions = pd.read_csv(test_csv_out.name)
predictions = predictions.sort_values('label', ascending=False)
from deepchem.utils.visualization import visualize_ligand
top_ligand = predictions.iloc[0]['ids']
ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==top_ligand]['ligand_pdb'].values[0])
if DISPLAY:
ngltraj = visualize_ligand(ligand1)
ngltraj
worst_ligand = predictions.iloc[predictions.shape[0]-2]['ids']
ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==worst_ligand]['ligand_pdb'].values[0])
if DISPLAY:
ngltraj = visualize_ligand(ligand1)
ngltraj
"""
Explanation: In this simple example, in few yet intuitive lines of code, we traced the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
Here, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands "drug-like."
End of explanation
"""
import deepchem.models.standard
from deepchem.models.standard import SklearnModel
from deepchem.utils.dataset import Dataset
from deepchem.utils.evaluate import Evaluator
from deepchem.hyperparameters import HyperparamOpt
train_dir, validation_dir, test_dir = tempfile.mkdtemp(), tempfile.mkdtemp(), tempfile.mkdtemp()
splittype="random"
train_samples, validation_samples, test_samples = featurized_samples.train_valid_test_split(
splittype, train_dir, validation_dir, test_dir, seed=2016)
task_types = {"label": "regression"}
performance = pd.DataFrame()
def model_builder(task_types, params_dict, verbosity):
n_estimators = params_dict["n_estimators"]
return SklearnModel(
task_types, params_dict,
model_instance=RandomForestRegressor(n_estimators=n_estimators))
params_dict = {
"n_estimators": [10, 20, 40, 80, 160],
"data_shape": [train_dataset.get_data_shape()],
}
optimizer = HyperparamOpt(model_builder, task_types)
for feature_type in (complex_featurizers + compound_featurizers):
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=[feature_type], tasks=["label"])
validation_dataset = Dataset(data_dir=validation_dir, samples=validation_samples,
featurizers=[feature_type], tasks=["label"])
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
transformer.transform(test_dataset)
best_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search(
params_dict, train_dataset, test_dataset, output_transformers, metric="r2_score")
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# TODO(rbharath, enf): Need to fix this to work with new hyperparam-opt framework.
#df = pd.DataFrame(performance[['r2_score','split','featurizer']].values, index=performance['n_trees'].values, columns=['r2_score', 'split', 'featurizer'])
#df = df.loc[df['split']=="validation"]
#df = df.drop('split', 1)
#fingerprint_df = df[df['featurizer'].str.contains('fingerprint')].drop('featurizer', 1)
#print fingerprint_df
#fingerprint_df.columns = ['ligand fingerprints']
#grid_df = df[df['featurizer'].str.contains('grid')].drop('featurizer', 1)
#grid_df.columns = ['complex features']
#df = pd.concat([fingerprint_df, grid_df], axis=1)
#print(df)
#plt.clf()
#df.plot()
#plt.ylabel("$R^2$")
#plt.xlabel("Number of trees")
train_dir, validation_dir, test_dir = tempfile.mkdtemp(), tempfile.mkdtemp(), tempfile.mkdtemp()
splittype="random"
train_samples, validation_samples, test_samples = featurized_samples.train_valid_test_split(
splittype, train_dir, validation_dir, test_dir, seed=2016)
feature_type = complex_featurizers
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=feature_type, tasks=["label"])
validation_dataset = Dataset(data_dir=validation_dir, samples=validation_samples,
featurizers=feature_type, tasks=["label"])
test_dataset = Dataset(data_dir=test_dir, samples=test_samples,
featurizers=feature_type, tasks=["label"])
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
    transformer.transform(validation_dataset)
for transformer in transformers:
transformer.transform(test_dataset)
model_params = {"data_shape": train_dataset.get_data_shape()}
n_trees = 20
rf_model = SklearnModel(task_types, model_params, model_instance=RandomForestRegressor(n_estimators=n_trees))
rf_model.fit(train_dataset)
model_dir = tempfile.mkdtemp()
rf_model.save(model_dir)
evaluator = Evaluator(rf_model, train_dataset, output_transformers, verbose=True)
with tempfile.NamedTemporaryFile() as train_csv_out:
with tempfile.NamedTemporaryFile() as train_stats_out:
_, train_r2score = evaluator.compute_model_performance(
train_csv_out, train_stats_out)
evaluator = Evaluator(rf_model, test_dataset, output_transformers, verbose=True)
test_csv_out = tempfile.NamedTemporaryFile()
with tempfile.NamedTemporaryFile() as test_stats_out:
predictions, test_r2score = evaluator.compute_model_performance(
test_csv_out, test_stats_out)
train_test_performance = pd.concat([train_r2score, test_r2score])
train_test_performance["split"] = ["train", "test"]
train_test_performance["featurizer"] = [str(feature_type.__class__), str(feature_type.__class__)]
train_test_performance["n_trees"] = [n_trees, n_trees]
print(train_test_performance)
import deepchem.models.deep
from deepchem.models.deep import SingleTaskDNN
import numpy.random
from operator import mul
import itertools
params_dict = {"activation": ["relu"],
"momentum": [.9],
"batch_size": [50],
"init": ["glorot_uniform"],
"data_shape": [train_dataset.get_data_shape()],
"learning_rate": np.power(10., np.random.uniform(-5, -2, size=5)),
"decay": np.power(10., np.random.uniform(-6, -4, size=5)),
"nb_hidden": [1000],
"nb_epoch": [40],
"nesterov": [False],
"dropout": [.5],
"nb_layers": [1],
"batchnorm": [False],
}
optimizer = HyperparamOpt(SingleTaskDNN, task_types)
best_dnn, best_hyperparams, all_results = optimizer.hyperparam_search(
    params_dict, train_dataset, validation_dataset, output_transformers, metric="r2_score", verbosity=None)
dnn_test_csv_out = tempfile.NamedTemporaryFile()
dnn_test_stats_out = tempfile.NamedTemporaryFile()
dnn_test_evaluator = Evaluator(best_dnn, test_dataset)
dnn_test_df, dnn_test_r2score = dnn_test_evaluator.compute_model_performance(
dnn_test_csv_out, dnn_test_stats_out)
dnn_test_r2_score = dnn_test_r2score.iloc[0]["r2_score"]
print("DNN Test set R^2 %f" % (dnn_test_r2_score))
task = "label"
dnn_predicted_test = np.array(dnn_test_df[task + "_pred"])
dnn_true_test = np.array(dnn_test_df[task])
plt.clf()
plt.scatter(dnn_true_test, dnn_predicted_test)
plt.xlabel('True Ki')
plt.ylabel('Predicted Ki')
plt.title(r'DNN predicted vs. true Ki')
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.plot([-3, 3], [-3, 3], marker=".", color='k')
rf_test_csv_out = tempfile.NamedTemporaryFile()
rf_test_stats_out = tempfile.NamedTemporaryFile()
rf_test_evaluator = Evaluator(rf_model, test_dataset)
rf_test_df, rf_test_r2score = rf_test_evaluator.compute_model_performance(
rf_test_csv_out, rf_test_stats_out)
rf_test_r2_score = rf_test_r2score.iloc[0]["r2_score"]
print("RF Test set R^2 %f" % (rf_test_r2_score))
plt.show()
task = "label"
rf_predicted_test = np.array(rf_test_df[task + "_pred"])
rf_true_test = np.array(rf_test_df[task])
plt.scatter(rf_true_test, rf_predicted_test)
plt.xlabel('True Ki')
plt.ylabel('Predicted Ki')
plt.title(r'RF predicted vs. true Ki')
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.plot([-3, 3], [-3, 3], marker=".", color='k')
plt.show()
predictions = dnn_test_df.sort_values('label', ascending=False)
top_complex = predictions.iloc[0]['ids']
best_complex = dataset.loc[dataset['complex_id']==top_complex]
protein_mdtraj = convert_lines_to_mdtraj(best_complex["protein_pdb"].values[0])
ligand_mdtraj = convert_lines_to_mdtraj(best_complex["ligand_pdb"].values[0])
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
top_complex = predictions.iloc[1]['ids']
best_complex = dataset.loc[dataset['complex_id']==top_complex]
protein_mdtraj = convert_lines_to_mdtraj(best_complex["protein_pdb"].values[0])
ligand_mdtraj = convert_lines_to_mdtraj(best_complex["ligand_pdb"].values[0])
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
top_complex = predictions.iloc[predictions.shape[0]-1]['ids']
best_complex = dataset.loc[dataset['complex_id']==top_complex]
protein_mdtraj = convert_lines_to_mdtraj(best_complex["protein_pdb"].values[0])
ligand_mdtraj = convert_lines_to_mdtraj(best_complex["ligand_pdb"].values[0])
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
"""
Explanation: The protein-ligand complex view.
The preceding simple example, in few yet intuitive lines of code, traces the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
In this next section, we illustrate deepchem's modularity, and thereby the ease with which one can explore different featurization schemes, different models, and combinations thereof, to achieve the best performance on a given dataset. We will demonstrate this by examining protein-ligand interactions.
In the previous section, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands "drug-like." However, the affinity of a drug for a target is determined not only by the drug itself, of course, but the way in which it interacts with a protein.
End of explanation
"""
|
Unidata/unidata-python-workshop | notebooks/XArray/XArray and CF.ipynb | mit | # Convention for import to get shortened namespace
import numpy as np
import xarray as xr
# Create some sample "temperature" data
data = 283 + 5 * np.random.randn(5, 3, 4)
data
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>XArray & CF Introduction</h1>
<h3>Unidata Sustainable Science Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png" alt="NumPy Logo" style="height: 250px;"></div>
Overview:
Teaching: 25 minutes
Exercises: 20 minutes
Questions
What is XArray?
How does XArray fit in with Numpy and Pandas?
What is the CF convention and how do we use it with Xarray?
Objectives
Create a DataArray.
Open netCDF data using XArray
Subset the data.
Write a CF-compliant netCDF file
XArray
XArray expands on the capabilities on NumPy arrays, providing a lot of streamlined data manipulation. It is similar in that respect to Pandas, but whereas Pandas excels at working with tabular data, XArray is focused on N-dimensional arrays of data (i.e. grids). Its interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM).
DataArray
The DataArray is one of the basic building blocks of XArray. It provides a NumPy ndarray-like object that expands to provide two critical pieces of functionality:
Coordinate names and values are stored with the data, making slicing and indexing much more powerful
It has a built-in container for attributes
End of explanation
"""
temp = xr.DataArray(data)
temp
"""
Explanation: Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.
End of explanation
"""
temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])
temp
"""
Explanation: We can also pass in our own dimension names:
End of explanation
"""
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2018-01-01', periods=5)
times
# Sample lon/lats
lons = np.linspace(-120, -60, 4)
lats = np.linspace(25, 55, 3)
"""
Explanation: This is already improved upon from a numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.
End of explanation
"""
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])
temp
"""
Explanation: When we create the DataArray instance, we pass in the arrays we just created:
End of explanation
"""
temp.attrs['units'] = 'kelvin'
temp.attrs['standard_name'] = 'air_temperature'
temp
"""
Explanation: ...and we can also set some attribute metadata:
End of explanation
"""
# For example, convert Kelvin to Celsius
temp - 273.15
"""
Explanation: Notice what happens if we perform a mathematical operation with the DataArray: the coordinate values persist, but the attributes are lost. This is done because it is very challenging to know if the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.
End of explanation
"""
temp.sel(time='2018-01-02')
"""
Explanation: Selection
We can use the .sel method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).
End of explanation
"""
from datetime import timedelta
temp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))
"""
Explanation: .sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise
.interp() works similarly to .sel(). Using .interp(), get an interpolated time series "forecast" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for <a href="http://xarray.pydata.org/en/stable/interpolation.html">interp</a>).
End of explanation
"""
# %load solutions/interp_solution.py
"""
Explanation: Solution
End of explanation
"""
temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))
"""
Explanation: Slicing with Selection
End of explanation
"""
# As done above
temp.loc['2018-01-02']
temp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]
# This *doesn't* work however:
#temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']
"""
Explanation: .loc
All of these operations can also be done within square brackets on the .loc attribute of the DataArray. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.
End of explanation
"""
# Open sample North American Reanalysis data in netCDF format
ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds
"""
Explanation: Opening netCDF data
With its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).
End of explanation
"""
ds.isobaric1
"""
Explanation: This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields:
End of explanation
"""
ds['isobaric1']
"""
Explanation: or
End of explanation
"""
ds_1000 = ds.sel(isobaric1=1000.0)
ds_1000
ds_1000.Temperature_isobaric
"""
Explanation: Datasets also support much of the same subsetting operations as DataArray, but will perform the operation on all data:
End of explanation
"""
u_winds = ds['u-component_of_wind_isobaric']
u_winds.std(dim=['x', 'y'])
"""
Explanation: Aggregation operations
Not only can you use the named dimensions for manual slicing and indexing of data, but you can also use it to control aggregation operations, like sum:
End of explanation
"""
# %load solutions/mean_profile.py
"""
Explanation: Exercise
Using the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:
* x: -182km to 424km
* y: -1450km to -990km
(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal projection coordinates)
Solution
End of explanation
"""
# Import some useful Python tools
from datetime import datetime
# Twelve hours of hourly output starting at 22Z today
start = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)
times = np.array([start + timedelta(hours=h) for h in range(13)])
# 3km spacing in x and y
x = np.arange(-150, 153, 3)
y = np.arange(-100, 100, 3)
# Standard pressure levels in hPa
press = np.array([1000, 925, 850, 700, 500, 300, 250])
temps = np.random.randn(times.size, press.size, y.size, x.size)
"""
Explanation: Resources
There is much more in the XArray library. To learn more, visit the XArray Documentation
Introduction to Climate and Forecasting Metadata Conventions
In order to better enable reproducible data and research, the Climate and Forecasting (CF) metadata convention was created to have proper metadata in atmospheric data files. In the remainder of this notebook, we will introduce the CF data model and discuss some netCDF implementation details to consider when deciding how to write data with CF and netCDF. We will cover gridded data in this notebook, with more in depth examples provided in the full CF notebook. Xarray makes the creation of netCDFs with proper metadata simple and straightforward, so we will use that, instead of the netCDF-Python library.
This assumes a basic understanding of netCDF.
<a name="gridded"></a>
Gridded Data
Let's say we're working with some numerical weather forecast model output. Let's walk through the steps necessary to store this data in netCDF, using the Climate and Forecasting metadata conventions to ensure that our data are available to as many tools as possible.
To start, let's assume the following about our data:
* It corresponds to forecast three dimensional temperature at several times
* The native coordinate system of the model is on a regular grid that represents the Earth on a Lambert conformal projection.
We'll also go ahead and generate some arrays of data below to get started:
End of explanation
"""
from cftime import date2num
time_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])
time_vals = date2num(times, time_units)
time_vals
"""
Explanation: Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12:00:00.00'. 'seconds', 'minutes', 'hours', and 'days' are the most commonly used units for time. Due to the variable length of months and years, they are not recommended.
Before we can write data, we first need to convert our list of Python datetime instances to numeric values. We can use the cftime library to do this conversion easily, using the unit string defined above.
End of explanation
"""
ds = xr.Dataset({'temperature': (['time', 'z', 'y', 'x'], temps, {'units':'Kelvin'})},
                coords={'x': (['x'], x, {'units':'km'}),
                        'y': (['y'], y, {'units':'km'}),
'pressure': (['z'], press, {'units':'hPa'}),
'forecast_time': (['time'], times)
})
ds
"""
Explanation: Now we can bundle the temperature array and its coordinate variables, including the forecast_time coordinate, into an xarray Dataset.
Convert arrays into Xarray Dataset
End of explanation
"""
ds.forecast_time.encoding['units'] = time_units
"""
Explanation: Due to how xarray handles time units, we need to encode the units in the forecast_time coordinate.
End of explanation
"""
ds.temperature
"""
Explanation: If we look at our data variable, we can see the units printed out, so they were attached properly!
End of explanation
"""
ds.attrs['Conventions'] = 'CF-1.7'
ds.attrs['title'] = 'Forecast model run'
ds.attrs['institution'] = 'Unidata'
ds.attrs['source'] = 'WRF-1.5'
ds.attrs['history'] = str(datetime.utcnow()) + ' Python'
ds.attrs['references'] = ''
ds.attrs['comment'] = ''
ds
"""
Explanation: We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
End of explanation
"""
ds.temperature.attrs['standard_name'] = 'air_temperature'
ds.temperature.attrs['long_name'] = 'Forecast air temperature'
ds.temperature.attrs['missing_value'] = -9999
ds.temperature
"""
Explanation: We can also add attributes to this variable to define metadata. The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we have already set it to a value of 'Kelvin'. We also set the standard (optional) attributes of long_name and standard_name. The former contains a longer description of the variable, while the latter comes from a controlled vocabulary in the CF conventions. This allows users of data to understand, in a standard fashion, what a variable represents. If we had missing values, we could also set the missing_value attribute to an appropriate value.
NASA Dataset Interoperability Recommendations:
Section 2.2 - Include Basic CF Attributes
Include where applicable: units, long_name, standard_name, valid_min / valid_max, scale_factor / add_offset and others.
End of explanation
"""
ds.x.attrs['axis'] = 'X' # Optional
ds.x.attrs['standard_name'] = 'projection_x_coordinate'
ds.x.attrs['long_name'] = 'x-coordinate in projected coordinate system'
ds.y.attrs['axis'] = 'Y' # Optional
ds.y.attrs['standard_name'] = 'projection_y_coordinate'
ds.y.attrs['long_name'] = 'y-coordinate in projected coordinate system'
"""
Explanation: Coordinate variables
To properly orient our data in time and space, we need to go beyond dimensions (which define common sizes and alignment) and include values along these dimensions, which are called "Coordinate Variables". Generally, these are defined by creating a one dimensional variable with the same name as the respective dimension.
To start, we define variables which define our x and y coordinate values. These variables include standard_names which allow associating them with projections (more on this later) as well as an optional axis attribute to make clear what standard direction this coordinate refers to.
End of explanation
"""
ds.pressure.attrs['axis'] = 'Z' # Optional
ds.pressure.attrs['standard_name'] = 'air_pressure'
ds.pressure.attrs['positive'] = 'down' # Optional
ds.forecast_time.attrs['axis'] = 'T' # Optional
ds.forecast_time.attrs['standard_name'] = 'time' # Optional
ds.forecast_time.attrs['long_name'] = 'time'
"""
Explanation: We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.
End of explanation
"""
from pyproj import Proj
X, Y = np.meshgrid(x, y)
lcc = Proj({'proj':'lcc', 'lon_0':-105, 'lat_0':40, 'a':6371000.,
'lat_1':25})
lon, lat = lcc(X * 1000, Y * 1000, inverse=True)
"""
Explanation: Auxilliary Coordinates
Our data are still not CF-compliant because they do not contain latitude and longitude information, which is needed to properly locate the data. To solve this, we need to add variables with latitude and longitude. These are called "auxiliary coordinate variables", not because they are extra, but because they are not simple one dimensional variables.
Below, we first generate longitude and latitude values from our projected coordinates using the pyproj library.
End of explanation
"""
ds = ds.assign_coords(lon = (['y', 'x'], lon))
ds = ds.assign_coords(lat = (['y', 'x'], lat))
ds
ds.lon.attrs['units'] = 'degrees_east'
ds.lon.attrs['standard_name'] = 'longitude' # Optional
ds.lon.attrs['long_name'] = 'longitude'
ds.lat.attrs['units'] = 'degrees_north'
ds.lat.attrs['standard_name'] = 'latitude' # Optional
ds.lat.attrs['long_name'] = 'latitude'
"""
Explanation: Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.
End of explanation
"""
ds
"""
Explanation: With the variables created, we identify these variables as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxilliary coordinate variables:
End of explanation
"""
ds['lambert_projection'] = int()
ds.lambert_projection.attrs['grid_mapping_name'] = 'lambert_conformal_conic'
ds.lambert_projection.attrs['standard_parallel'] = 25.
ds.lambert_projection.attrs['latitude_of_projection_origin'] = 40.
ds.lambert_projection.attrs['longitude_of_central_meridian'] = -105.
ds.lambert_projection.attrs['semi_major_axis'] = 6371000.0
ds.lambert_projection
"""
Explanation: Coordinate System Information
With our data specified on a Lambert conformal projected grid, it would be good to include this information in our metadata. We can do this using a "grid mapping" variable. This uses a dummy scalar variable as a namespace for holding all of the required information. Relevant variables then reference the dummy variable with their grid_mapping attribute.
Below we create a variable and set it up for a Lambert conformal conic projection on a spherical earth. The grid_mapping_name attribute describes which of the CF-supported grid mappings we are specifying. The names of additional attributes vary between the mappings.
End of explanation
"""
ds.temperature.attrs['grid_mapping'] = 'lambert_projection' # or proj_var.name
ds
"""
Explanation: Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable:
End of explanation
"""
ds.to_netcdf('test_netcdf.nc', format='NETCDF4')
!ncdump test_netcdf.nc
"""
Explanation: Write to NetCDF
Xarray has built-in support for a few flavors of netCDF. Here we'll write a netCDF4 file from our Dataset.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.22/_downloads/1af5a35cbb809b9480120842884536c5/plot_brainstorm_auditory.ipynb | bsd-3-clause | # Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
"""
Explanation: Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1] and the associated
brainstorm site <https://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker
<http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300>__.
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
"""
use_precomputed = True
"""
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
use_precomputed = False running time of this script can be several
minutes even on a fast computer.
End of explanation
"""
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
"""
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
"""
raw = read_raw_ctf(raw_fname1)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2)])
raw_erm = read_raw_ctf(erm_fname)
"""
Explanation: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in the memory.
End of explanation
"""
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick(['meg', 'stim', 'misc', 'eog', 'ecg']).load_data()
"""
Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
"""
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.set_annotations(annotations)
del onsets, durations, descriptions
"""
Explanation: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
"""
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
baseline=(None, None),
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
"""
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
"""
raw.plot(block=True)
"""
Explanation: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
"""
if not use_precomputed:
raw.plot_psd(tmax=np.inf, picks='meg')
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks='meg')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
End of explanation
"""
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
"""
Explanation: We also lowpass filter the data at 100 Hz to remove the hf components.
End of explanation
"""
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
"""
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
"""
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
"""
Explanation: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
End of explanation
"""
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
"""
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=['meg', 'eog'],
baseline=(None, 0), reject=reject, preload=False,
proj=True)
"""
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
"""
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs
"""
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer the same as in the original
epochs collection. Investigation of the event timings reveals that the first
epoch from the second run corresponds to index 182.
End of explanation
"""
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
"""
Explanation: The averages for each conditions are computed.
End of explanation
"""
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
"""
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all
line artifacts (and high frequency information). Normally this would be done
to raw data (with :func:mne.io.Raw.filter), but to reduce memory
consumption of this tutorial, we do it at evoked stage. (At the raw stage,
you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
"""
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
"""
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
End of explanation
"""
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
"""
Explanation: Show activations as topography figures.
End of explanation
"""
evoked_difference = combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
"""
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
"""
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
"""
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
"""
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
"""
Explanation: The transformation is read from a file:
End of explanation
"""
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
"""
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
bem-model, :func:mne.bem.make_watershed_bem.
End of explanation
"""
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
"""
Explanation: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
"""
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
"""
Explanation: Deviant condition.
End of explanation
"""
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
"""
Explanation: Difference.
End of explanation
"""
|
elenduuche/deep-learning | autoencoder/Simple_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
GuillaumeDec/machine-learning | deep-multivariate-lstm-tensorflow/tensorflow deep multivariate lstm .ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import tensorflow as tf
import utils as utl
from collections import Counter
"""
Explanation: Modeling Stock Market Sentiment with LSTMs and TensorFlow
In this tutorial, we will build a Long Short Term Memory (LSTM) Network to predict the stock market sentiment based on a comment about the market.
Setup
We will use the following libraries for our analysis:
numpy - numerical computing library used to work with our data
pandas - data analysis library used to read in our data from csv
tensorflow - deep learning framework used for modeling
We will also be using the Python Counter object for counting our vocabulary items, and we have a utils module that abstracts away a lot of the details of our data processing. Please read through utils.py to get a better understanding of how to preprocess the data for analysis.
End of explanation
"""
# read data from csv file
data = pd.read_csv("data/StockTwits_SPY_Sentiment_2017.gz",
encoding="utf-8",
compression="gzip",
index_col=0)
# get messages and sentiment labels
messages = data.message.values
labels = data.sentiment.values
# View sample of messages with sentiment
for i in range(10):
print("Messages: {}...".format(messages[i]),
"Sentiment: {}".format(labels[i]))
"""
Explanation: Processing Data
We will train the model using messages tagged with SPY, the S&P 500 index fund, from StockTwits.com. StockTwits is a social media network for traders and investors to share their views about the stock market. When a user posts a message, they tag the relevant stock ticker ($SPY in our case) and have the option to tag the messages with their sentiment – “bullish” if they believe the stock will go up and “bearish” if they believe the stock will go down.
Our dataset consists of approximately 100,000 messages posted in 2017 that are tagged with $SPY where the user indicated their sentiment. Before we get to our LSTM Network we have to perform some processing on our data to get it ready for modeling.
Read and View Data
First we simply read in our data using pandas, pull out our message and sentiment data into numpy arrays. Let's also take a look at a few samples to get familiar with the data set.
End of explanation
"""
messages = np.array([utl.preprocess_ST_message(message) for message in messages])
"""
Explanation: Preprocess Messages
Working with raw text data often requires preprocessing the text in some fashion to normalize for context. In our case we want to normalize for known unique "entities" that appear within messages that carry a similar contextual meaning when analyzing sentiment. This means we want to replace references to specific stock tickers, user names, url links or numbers with a special token identifying the "entity". Here we will also make everything lower case and remove punctuation.
End of explanation
"""
messages[0]
full_lexicon = " ".join(messages).split()
vocab_to_int, int_to_vocab = utl.create_lookup_tables(full_lexicon)
"""
Explanation: Generate Vocab to Index Mapping
To work with raw text we need some encoding from words to numbers for our algorithm to work with the inputs. The first step of doing this is keeping a collection of our full vocabulary and creating a mapping of each word to a unique index. We will use this word-to-index mapping in a little bit to prep our messages for analysis.
Note that in practice we may want to only include the vocabulary from our training set here, to account for the fact that we will likely see new words when our model is out in the wild and when we are assessing the results on our validation and test sets. Here, for simplicity and demonstration purposes, we will use our entire data set.
End of explanation
"""
messages_lens = Counter([len(x) for x in messages])
print("Zero-length messages: {}".format(messages_lens[0]))
print("Maximum message length: {}".format(max(messages_lens)))
print("Average message length: {}".format(np.mean([len(x) for x in messages])))
messages, labels = utl.drop_empty_messages(messages, labels)
"""
Explanation: Check Message Lengths
We will also want to get a sense of the distribution of the length of our inputs. We check for the longest and average messages. We will need to make our input length uniform to feed the data into our model so later we will have some decisions to make about possibly truncating some of the longer messages if they are too long. We also notice that one message has no content remaining after we preprocessed the data, so we will remove this message from our data set.
End of explanation
"""
messages = utl.encode_ST_messages(messages, vocab_to_int)
labels = utl.encode_ST_labels(labels)
"""
Explanation: Encode Messages and Labels
Earlier we mentioned that we need to "translate" our text to numbers for our algorithm to take in as inputs. We call this translation an encoding. We encode our messages to sequences of numbers where each number is the word index from the mapping we made earlier. The phrase "I am bullish" would now look something like [1, 234, 5345] where each number is the index for the respective word in the message. For our sentiment labels we will simply encode "bearish" as 0 and "bullish" as 1.
End of explanation
"""
messages2 = utl.zero_pad_messages(messages, seq_len=244)
mess = [i[-6:-1] for i in messages2]
labels = [i[-1] for i in messages2]
BIG_N = 1600
X = [[i for i in zip(mess[j], np.sqrt(mess[j]))] for j in range(0, BIG_N)]
labels = [labels[j] for j in range(0, BIG_N)]
some_2d_sequences = np.array([*X]).astype(float)
some_2d_labels = np.array(labels).astype(int)
# X
print('shape: n_sequences, len_sequence, dim_input', some_2d_sequences.shape)
print('shape labels: n_sequences, len_labels, dim_input', some_2d_labels.shape)
some_2d_labels[100]
"""
Explanation: Pad Messages
The last thing we need to do is make our message inputs the same length. In our case, the longest message is 244 words. LSTMs can usually handle sequence inputs up to 500 items in length so we won't truncate any of the messages here. We need to Zero Pad the rest of the messages that are shorter. We will use a left padding that will pad all of the messages that are shorter than 244 words with 0s at the beginning. So our encoded "I am bullish" messages goes from [1, 234, 5345] (length 3) to [0, 0, 0, 0, 0, 0, ... , 0, 0, 1, 234, 5345] (length 244).
End of explanation
"""
train_x, val_x, test_x, train_y, val_y, test_y = utl.train_val_test_split(some_2d_sequences, some_2d_labels, split_frac=0.80)
print("Data Set Size")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
###### JUST SOME CHALKBOARD STUFF
# first, create a TensorFlow constant
# const = tf.constant(2.0, name="const")
foo = np.array([[1,2,3], [4,4,5]])
inputs_ = tf.constant([], name="train_x")
embedding = tf.Variable(tf.random_uniform((7, 4), -1, 1))
embed = tf.nn.embedding_lookup(embedding, foo)
# create TensorFlow variables
# b = tf.Variable(2.0, name='b')
# c = tf.Variable(1.0, name='c')
# d = tf.add(b, c, name='d')
# e = tf.add(c, const, name='e')
a = tf.multiply(inputs_, 1, name='a')
# setup the variable initialisation
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
# initialise the variables
sess.run(init_op)
# compute the output of the graph
a_out = sess.run(embed)
print("train_x is: ", train_x[0:2], train_x[0:2].shape)
print("foo is: ", foo, foo.shape)
print("SHAPE a is {}".format(a_out.shape))
print("Variable a is {}".format(a_out))
# this proves that embedding_lookup DOES IN FACT TURN your sequence of words into an n-d series, with n = embedding_size
# so this should work for n-d time series
"""
Explanation: Train, Test, Validation Split
The last thing we do is split our data into training, validation and test sets and observe the size of each set.
End of explanation
"""
n_dims = 2 # here, we test for 2 dimensions!
def model_inputs():
"""
Create the model inputs
"""
inputs_ = tf.placeholder(tf.float32, [None, None, n_dims], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob_ = tf.placeholder(tf.float32, name='keep_prob')
return inputs_, labels_, keep_prob_
"""
Explanation: Building and Training our LSTM Network
In this section we will define a number of functions that will construct the items in our network. We will then use these functions to build and train our network.
Model Inputs
Here we simply define a function to build TensorFlow Placeholders for our message sequences, our labels and a variable called keep probability associated with drop out (we will talk more about this later).
End of explanation
"""
def build_embedding_layer(inputs_):
"""
Create the embedding layer
"""
# embedding = tf.Variable(tf.random_uniform((vocab_size, embed_size), -1, 1))
# embed = tf.nn.embedding_lookup(embedding, inputs_)
# foo = inputs_.astype(float)
return inputs_
"""
Explanation: Embedding Layer
In TensorFlow, word embeddings are represented as a vocabulary size x embedding size matrix, and the network learns these weights during training. The embedding lookup is then just a simple lookup in our embedding matrix based on the index of the current word. In this multivariate variant, however, the inputs are already real-valued n-dimensional sequences, so the embedding lookup is left commented out and the layer simply passes the inputs through unchanged.
End of explanation
"""
def build_lstm_layers(lstm_sizes, embed, keep_prob_, batch_size):
"""
Create the LSTM layers
"""
lstms = [tf.contrib.rnn.BasicLSTMCell(size) for size in lstm_sizes]
# Add dropout to the cell
drops = [tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob_) for lstm in lstms]
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell(drops)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
lstm_outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
return initial_state, lstm_outputs, cell, final_state
"""
Explanation: LSTM Layers
TensorFlow makes it extremely easy to build LSTM Layers and stack them on top of each other. We represent each LSTM layer as a BasicLSTMCell and keep these in a list to stack them together later. Here we will define a list with our LSTM layer sizes and the number of layers.
We then take each of these LSTM layers and wrap them in a Dropout Layer. Dropout is a regularization technique used in neural networks in which any individual node has a probability of “dropping out” of the network during a given iteration of learning. This makes the model more generalizable by ensuring that it is not too dependent on any given nodes.
Finally, we stack these layers using a MultiRNNCell, generate a zero initial state and connect our stacked LSTM layer to our word embedding inputs using dynamic_rnn. Here we track the output and the final state of the LSTM cell, which we will need to pass between mini-batches during training.
End of explanation
"""
def build_cost_fn_and_opt(lstm_outputs, labels_, learning_rate):
"""
Create the Loss function and Optimizer
"""
predictions = tf.contrib.layers.fully_connected(lstm_outputs[:, -1], 1, activation_fn=tf.sigmoid)
loss = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdadeltaOptimizer(learning_rate).minimize(loss)
return predictions, loss, optimizer
"""
Explanation: Loss Function and Optimizer
First, we get our predictions by passing the final output of the LSTM layers to a sigmoid activation function via a TensorFlow fully connected layer. We only care about the final output for making predictions, so we pull it out using the [:, -1] indexing on our LSTM outputs and pass it through the sigmoid activation. We then pass these predictions to our mean squared error loss function and use the Adadelta Optimizer to minimize the loss.
End of explanation
"""
def build_accuracy(predictions, labels_):
"""
Create accuracy
"""
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
return accuracy
"""
Explanation: Accuracy
Finally, we define our accuracy metric for assessing the model performance across our training, validation and test sets. Even though accuracy is just a calculation based on results, everything in TensorFlow is part of a Computation Graph. Therefore, we need to define our loss and accuracy nodes in the context of the rest of our network graph.
End of explanation
"""
def build_and_train_network(lstm_sizes, epochs, batch_size,
learning_rate, keep_prob, train_x, val_x, train_y, val_y):
inputs_, labels_, keep_prob_ = model_inputs()
embed = build_embedding_layer(inputs_)
initial_state, lstm_outputs, lstm_cell, final_state = build_lstm_layers(lstm_sizes, embed, keep_prob_, batch_size)
predictions, loss, optimizer = build_cost_fn_and_opt(lstm_outputs, labels_, learning_rate)
accuracy = build_accuracy(predictions, labels_)
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
n_batches = len(train_x)//batch_size
for e in range(epochs):
state = sess.run(initial_state)
train_acc = []
for ii, (x, y) in enumerate(utl.get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob_: keep_prob,
initial_state: state}
loss_, state, _, batch_acc = sess.run([loss, final_state, optimizer, accuracy], feed_dict=feed)
train_acc.append(batch_acc)
if (ii + 1) % n_batches == 0:
val_acc = []
val_state = sess.run(lstm_cell.zero_state(batch_size, tf.float32))
for xx, yy in utl.get_batches(val_x, val_y, batch_size):
feed = {inputs_: xx,
labels_: yy[:, None],
keep_prob_: 1,
initial_state: val_state}
val_batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(val_batch_acc)
print("Epoch: {}/{}...".format(e+1, epochs),
"Batch: {}/{}...".format(ii+1, n_batches),
"Train Loss: {:.3f}...".format(loss_),
"Train Accruacy: {:.3f}...".format(np.mean(train_acc)),
"Val Accuracy: {:.3f}".format(np.mean(val_acc)))
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
We are finally ready to build and train our LSTM Network! First, we call each of the functions we have defined to construct the network. Then we define a Saver to be able to write our model to disk to load for future use. Finally, we call a TensorFlow Session to train the model over a predefined number of epochs using mini-batches. At the end of each epoch we will print the loss, training accuracy and validation accuracy to monitor the results as we train.
End of explanation
"""
# Define Inputs and Hyperparameters
lstm_sizes = [8, 4]
# vocab_size = len(vocab_to_int) + 1 #add one for padding
# embed_size = 30
epochs = 4
batch_size = 16
learning_rate = 0.1
keep_prob = 0.5
"""
Explanation: Next we define our model hyperparameters. We will build a 2-layer LSTM network with hidden layer sizes of 8 and 4 respectively; the embedding size is no longer needed since the inputs are already numeric. We will train over 4 epochs with mini-batches of size 16, using an initial learning rate of 0.1, though our Adadelta Optimizer will adapt this over time, and a keep probability of 0.5.
End of explanation
"""
with tf.Graph().as_default():
build_and_train_network(lstm_sizes, epochs, batch_size,
learning_rate, keep_prob, train_x, val_x, train_y, val_y)
"""
Explanation: and now we train!
End of explanation
"""
def test_network(model_dir, batch_size, test_x, test_y):
inputs_, labels_, keep_prob_ = model_inputs()
embed = build_embedding_layer(inputs_)  # vocab_size and embed_size are no longer used here
initial_state, lstm_outputs, lstm_cell, final_state = build_lstm_layers(lstm_sizes, embed, keep_prob_, batch_size)
predictions, loss, optimizer = build_cost_fn_and_opt(lstm_outputs, labels_, learning_rate)
accuracy = build_accuracy(predictions, labels_)
saver = tf.train.Saver()
test_acc = []
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint(model_dir))
test_state = sess.run(lstm_cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(utl.get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob_: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test Accuracy: {:.3f}".format(np.mean(test_acc)))
with tf.Graph().as_default():
test_network('checkpoints', batch_size, test_x, test_y)
"""
Explanation: Testing our Network
The last thing we want to do is check the model accuracy on our testing data to make sure it is in line with expectations. We build the computational graph just like we did before; however, now, instead of training, we restore our saved model from our checkpoint directory and then run our test data through the model.
End of explanation
"""
|
karhohs/boardgame-bookie | boardgames/seafall/game_engine/Logic Test.ipynb | bsd-3-clause | %matplotlib inline
import numpy
import matplotlib
from matplotlib.patches import Circle, Wedge, Polygon
from matplotlib.collections import PatchCollection
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
import matplotlib.path as mpath
import numpy as np
import seaborn as sns
import networkx as nx
import pandas as pd
import SeaFallLogic
"""
Explanation: Logic Test
A notebook to test the classes and methods within SeaFallLogic.py.
End of explanation
"""
class Ship():
# Rules, pg 8, "Province Boards" also include information about ships
def __init__(self):
self.damage = []
# hold, a list of objects with max length hold
self.hold = []
# upgrades, a list of upgrade objects of max length 2
self.upgrades = []
# values (explore, hold, raid, sail)
self._values = (1, 1, 1, 1)
# vmax is the maximum number values can reach for (explore, hold, raid,
# sail)
self._vmax = (5, 5, 5, 5)
@property
def values(self):
return self._values
@values.setter
def values(self, values):
if not isinstance(values, tuple):
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
elif len(values) != 4:
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
for val, vmax in zip(values, self.vmax):
if val > vmax:
raise ValueError("A ship value exceeds its max.")
self._values = values
@property
def vmax(self):
return self._vmax
@vmax.setter
def vmax(self, vmax_tuple):
if not isinstance(vmax_tuple, tuple):
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
elif len(vmax_tuple) != 4:
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
for val, vmax in zip((5, 5, 5, 5), vmax_tuple):
if val > vmax:
raise ValueError("The maximum ship values are never less than (5, 5, 5, 5).")
self._vmax = vmax
ship = Ship()
ship.values
"""
Explanation: Ship test
Create a ship object and change its values.
End of explanation
"""
class Site():
def __init__(self, dangerous=False, defense=0):
# Rules, pg 10, "Dangerous Sites"
self.dangerous = dangerous
# Rules, pg 10, "Starting an Endeavor"
# Rules, pg 7, "Defense"
self.defense = defense
class IslandSiteMine(Site):
def __init__(self, dangerous=False, defense=0, gold=0):
super().__init__(dangerous=dangerous, defense=defense)
self.gold = gold
mine = IslandSiteMine()
mine.dangerous
mine.defense
mine2 = IslandSiteMine(dangerous=True, defense=10, gold=6)
mine2.gold
class Goods():
valid_goods = {
"iron",
"linen",
"spice",
"wood"
}
def __init__(self):
pass
Goods.valid_goods
"iron" in Goods.valid_goods
"""
Explanation: Island Test
See how class inheritance works on an island site.
End of explanation
"""
|
abatula/MachineLearningIntro | KNN_Tutorial.ipynb | gpl-2.0 | # Print figures in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets # Import the nearest neighbor function and dataset from scikit-learn
from sklearn.model_selection import train_test_split, KFold
# Import patch for drawing rectangles in the legend
from matplotlib.patches import Rectangle
# Create color maps
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
# Create a legend for the colors, using rectangles for the corresponding colormap colors
labelList = []
for color in cmap_bold.colors:
labelList.append(Rectangle((0, 0), 1, 1, fc=color))
"""
Explanation: Nearest Neighbor Tutorial
K-nearest neighbors, or K-NN, is a simple form of supervised learning. It assigns an output label to a new input example x based on its closest neighboring datapoints. The number K is the number of neighboring data points to consider. For K=1, x is assigned the label of the closest neighbor. If K>1, the majority vote among the K nearest neighbors is used to label x.
The code in this tutorial is slightly modified from the scikit-learn K-NN example. There is also information on the K-NN classifier function KNeighborsClassifier.
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), matplotlib.pyplot (for plotting figures), ListedColormap (for plotting colors), neighbors (for the scikit-learn nearest-neighbor algorithm), and datasets (to download the iris dataset from scikit-learn).
Also create the color maps to use to color the plotted data, and "labelList", which is a list of colored rectangles to use in plotted legends
End of explanation
"""
# Import some data to play with
iris = datasets.load_iris()
# Store the labels (y), label names, features (X), and feature names
y = iris.target # Labels are stored in y as numbers
labelNames = iris.target_names # Species names corresponding to labels 0, 1, and 2
X = iris.data
featureNames = iris.feature_names
"""
Explanation: Import the dataset
Import the dataset and store it to a variable called iris. Scikit-learn's explanation of the dataset is here. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target_names', 'target', 'data', 'feature_names']
The data features are stored in iris.data, where each row is an example from a single flower, and each column is a single feature. The feature names are stored in iris.feature_names. Labels are stored as the numbers 0, 1, or 2 in iris.target, and the names of these labels are in iris.target_names.
The dataset consists of measurements made on 50 examples from each of three different species of iris flowers (Setosa, Versicolour, and Virginica). Each example has four features (or measurements): sepal length, sepal width, petal length, and petal width. All measurements are in cm.
Below, we load the labels into y, the corresponding label names into labelNames, the data into X, and the names of the features into featureNames.
End of explanation
"""
# Plot the data
# Sepal length and width
X_small = X[:,:2]
# Get the minimum and maximum values with an additional 0.5 border
x_min, x_max = X_small[:, 0].min() - .5, X_small[:, 0].max() + .5
y_min, y_max = X_small[:, 1].min() - .5, X_small[:, 1].max() + .5
plt.figure(figsize=(8, 6))
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
plt.title('Sepal width vs length')
# Set the plot limits
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
"""
Explanation: Below, we plot the first two features from the dataset (sepal length and width). Normally we would try to use all useful features, but sticking with two allows us to visualize the data more easily.
Then we plot the data to get a look at what we're dealing with. The colormap is used to determine what colors are used for each class when plotting.
End of explanation
"""
# Choose your number of neighbors
n_neighbors = 15
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X_small, y)
"""
Explanation: Nearest neighbors: training
Next, we train a nearest neighbor classifier on our data.
The first section chooses the number of neighbors to use, and stores it in the variable n_neighbors (another, more intuitive, name for the K variable mentioned previously).
The last two lines create and train the classifier. Line 1 creates a classifier (clf) using the KNeighborsClassifier() function, and tells it to use the number of neighbors stored in n_neighbors. Line 2 uses the fit() method to train the classifier on the features in X_small, using the labels in y.
End of explanation
"""
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i)"
% (n_neighbors))
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
"""
Explanation: Plot the classification boundaries
Now that we have our classifier, let's visualize what it's doing.
First we plot the decision boundaries, or the lines dividing areas assigned to the different labels (species of iris). Then we plot our examples onto the space, showing where each point lies and the corresponding decision boundary.
The colored background shows the areas that are considered to belong to a certain label. If we took sepal measurements from a new flower, we could plot it in this space and use the color to determine which type of iris our classifier believes it to be.
End of explanation
"""
# Choose your number of neighbors
n_neighbors = 5 # Choose a new number of neighbors (change this value and re-run, e.g. 1, 5, or 50)
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X_small, y)
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i)"
% (n_neighbors))
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
"""
Explanation: Changing the number of neighbors
Change the number of neighbors (n_neighbors) below, and see how the plot boundaries change. Try as many or as few as you'd like, but remember there are only 150 examples in the dataset, so selecting all 150 wouldn't be very useful!
End of explanation
"""
# Add our new data examples
examples = [[4.3, 2.5], # Plant A
[6.3, 2.1]] # Plant B
# Reset our number of neighbors
n_neighbors = 15
# Create an instance of Neighbours Classifier and fit the original data.
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X_small, y)
# Predict the labels for our new examples
labels = clf.predict(examples)
# Print the predicted species names
print('A: ' + labelNames[labels[0]])
print('B: ' + labelNames[labels[1]])
"""
Explanation: Making predictions
Now, let's say we go out and measure the sepals of two new iris plants, and want to know what species they are. We're going to use our classifier to predict the flowers with the following measurements:
Plant | Sepal length | Sepal width
------|--------------|------------
A |4.3 |2.5
B |6.3 |2.1
We can use our classifier's predict() function to predict the label for our input features. We pass the variable examples to the predict() function; examples is a list, and each element is another list containing the features (measurements) for a particular example. The output is a list of labels corresponding to the input examples.
End of explanation
"""
# Now plot the results
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i)"
% (n_neighbors))
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Display the new examples as labeled text on the graph
plt.text(examples[0][0], examples[0][1],'A', fontsize=14)
plt.text(examples[1][0], examples[1][1],'B', fontsize=14)
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
"""
Explanation: Plotting our predictions
Now let's plot our predictions to see why they were classified that way.
End of explanation
"""
# Put your code here!
# Feel free to add as many code cells as you need
"""
Explanation: What about our other features?
You may remember that our original dataset contains two additional features, the length and width of the petals.
What does the plot look like when you train on the petal length and width? How does it change when you change the number of neighbors?
How would you plot our two new plants, A and B, on these new plots? Assume we have all four measurements for each plant, as shown below.
Plant | Sepal length | Sepal width| Petal length | Petal width
------|--------------|------------|--------------|------------
A |4.3 |2.5 | 1.5 | 0.5
B |6.3 |2.1 | 4.8 | 1.5
End of explanation
"""
# Choose your number of neighbors
n_neighbors_distance = 15
# we create an instance of Neighbours Classifier and fit the data.
clf_distance = neighbors.KNeighborsClassifier(n_neighbors_distance,
weights='distance')
clf_distance.fit(X_small, y)
# Predict the labels of the new examples
labels = clf_distance.predict(examples)
# Print the predicted species names
print('A: ' + labelNames[labels[0]])
print('B: ' + labelNames[labels[1]])
# Plot the results
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf_distance.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i)"
% (n_neighbors))
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Display the new examples as labeled text on the graph
plt.text(examples[0][0], examples[0][1],'A', fontsize=14)
plt.text(examples[1][0], examples[1][1],'B', fontsize=14)
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
"""
Explanation: Changing neighbor weights
Looking at our original plot of sepal dimensions, we can see that plant A is classified as Setosa (red) and B is classified as Virginica (blue). While A seems to be clearly correct, B is much closer to two examples from Versicolour (green). Maybe we should give more importance to labels that are closer to our example?
In the previous examples, all the neighbors were considered equally important when deciding what label to give our input. But what if we want to give more importance (or a higher weight) to neighbors that are closer to our new example? The K-NN algorithm allows you to change from uniform weights, where all included neighbors have the same importance, to distance-based weights, where closer neighbors are given more consideration.
Below, we create a new classifier using distance-based weights and plot the results. The only difference in the code is that we call KNeighborsClassifier() with the argument weights='distance'.
Look at how it's different from the original plot, and see how the classifications of plant B change. Try it with different combinations of neighbors, and compare it to the previous plots.
End of explanation
"""
# Put your code here!
"""
Explanation: Now, see how this affects your plots using other features.
End of explanation
"""
# Add our new data examples
examples = [[4.3, 2.5, 1.5, 0.5], # Plant A
[6.3, 2.1, 4.8, 1.5]] # Plant B
# Choose your number of neighbors
n_neighbors_distance = 15
# we create an instance of Neighbours Classifier and fit the data.
clf_distance = neighbors.KNeighborsClassifier(n_neighbors_distance, weights='distance')
clf_distance.fit(X, y)
# Predict the labels of the new examples
labels = clf_distance.predict(examples)
# Print the predicted species names
print('A: ' + labelNames[labels[0]])
print('B: ' + labelNames[labels[1]])
"""
Explanation: Using more than two features
Using two features is great for visualizing, but it's often not good for training a good classifier. Below, we will train a classifier using the 'distance' weighting method and all four features, and use that to predict plants A and B.
How do the predictions compare to our predictions using only two features?
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# Choose your number of neighbors
n_neighbors_distance = 15
# we create an instance of Neighbours Classifier and fit the data.
clf_distance = neighbors.KNeighborsClassifier(n_neighbors_distance, weights='distance')
clf_distance.fit(X_train, y_train)
# Predict the labels of the test data
predictions = clf_distance.predict(X_test)
"""
Explanation: Evaluating The Classifier
In order to evaluate a classifier, we need to split our dataset into training data, which we'll show to the classifier so it can learn, and testing data, which we will hold back from training and use to test its predictions.
Below, we create the training and testing datasets, using all four features. We then train our classifier on the training data, and get the predictions for the test data.
End of explanation
"""
accuracy = np.mean(predictions == y_test )*100
print('The accuracy is ' + '%.2f' % accuracy + '%')
"""
Explanation: Next, we evaluate how well the classifier did. The easiest way to do this is to compute the fraction of correct predictions (shown here as a percentage), usually referred to as the accuracy
End of explanation
"""
# Choose our k values
kvals = [1,3,5,10,20,40]
# Create a dictionary of arrays to store accuracies
accuracies = {}
for k in kvals:
accuracies[k] = []
# Loop through 5 folds
kf = KFold(n_splits=5)
for trainInd, valInd in kf.split(X_train):
X_tr = X_train[trainInd,:]
y_tr = y_train[trainInd]
X_val = X_train[valInd,:]
y_val = y_train[valInd]
# Loop through each value of k
for k in kvals:
# Create the classifier
clf = neighbors.KNeighborsClassifier(k, weights='distance')
# Train the classifier
clf.fit(X_tr, y_tr)
# Make our predictions
pred = clf.predict(X_val)
# Calculate the accuracy
accuracies[k].append(np.mean(pred == y_val))
"""
Explanation: Comparing Models with Crossvalidation
To select the best number of neighbors to use in our model, we need to use crossvalidation. We can then get our final result using our test data.
First we choose the number of neighbors we want to investigate, then divide our training data into folds. We loop through the sets of training and validation folds. Each time, we train each model on the training data and evaluate on the validation data. We store the accuracy of each classifier on each fold so we can look at them later.
End of explanation
"""
for k in kvals:
print('k=%i: %.2f' % (k, np.mean(accuracies[k])))
"""
Explanation: Select a Model
To select a model, we look at the average accuracy across all folds.
End of explanation
"""
clf = neighbors.KNeighborsClassifier(3, weights='distance')
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
accuracy = np.mean(predictions == y_test)*100
print('The final accuracy is %.2f' % accuracy + '%')
"""
Explanation: Final Evaluation
K=3 gives us the highest accuracy, so we select it as our best model. Now we retrain it on the full training set, evaluate it on our test set, and get our final accuracy rating.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
"""
Explanation: Sidenote: Randomness and Results
Every time you run this notebook, you will get slightly different results. Why? Because data is randomly divided among the training/testing/validation data sets. Running the code again will create a different division of the data, and will make the results slightly different. However, the overall outcome should remain consistent and have approximately the same values. If you have drastically different results when running an analysis multiple times, it suggests a problem with your model or that you need more data.
If it's important that you get the exact same results every time you run the code, you can specify a random state in the random_state argument of train_test_split() and KFold.
End of explanation
"""
|
gully/starfish-demo | demo6/lnprior.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
% config InlineBackend.figure_format = 'retina'
"""
Explanation: How to set priors on stellar parameters.
gully
https://github.com/iancze/Starfish/issues/32
The strategy here is to define a lnprior and add it to the lnprob.
We have to find the right destination in the code.
Preamble
Highlighting lines of code in markdown.
The Jupyter Notebook does not have a way to highlight lines of code in markdown. Sphinx and reST have a way of doing this, but not here. Too bad. So to draw attention to a specific line in large code blocks I will use three arrows:
```python
```
Cool? Cool.
Nomenclature: Log- Likelihood, Prior, Probability
Throughout the code lnprob denotes the natural log of the posterior probability density distribution. This practice is completely accurate, since the priors are (tacitly) flat. However now I am trying to add in explicit prior functionality. So I am faced with the challenge of renaming lnprob everywhere to lnlike, or just reassigning in place: lnprob += lnprior. I will do the latter. But note that it could be confusing whether you're looking at a lnprob value that includes the (new, explicit) prior or not.
Attempt 1: Put the prior in star.py
At first I thought the right place to make this change is in the script:
star.py's lnprob()
```python
These functions store the variables pconns, cconns, ps.
def lnprob(p):
pars = ThetaParam(grid=p[0:3], vz=p[3], vsini=p[4], logOmega=p[5])
#Distribute the calculation to each process
for ((spectrum_id, order_id), pconn) in pconns.items():
pconn.send(("LNPROB", pars))
#Collect the answer from each process
lnps = np.empty((len(Starfish.data["orders"]),))
for i, pconn in enumerate(pconns.values()):
lnps[i] = pconn.recv()
result = np.sum(lnps) # + lnprior???
print("proposed:", p, result)
return result
```
star.py is not the right place to put the prior*.
*I think. I'm not 100% certain because this part of the code is meta.
query_lnprob() does not have access to the p.grid attribute, so it can't compute a prior.
Otherwise we'd be conflating ln(likelihood) with ln(prob). We can't do it this way.
```python
def query_lnprob():
for ((spectrum_id, order_id), pconn) in pconns.items():
pconn.send(("GET_LNPROB", None))
#Collect the answer from each process
lnps = np.empty((len(Starfish.data["orders"]),))
for i, pconn in enumerate(pconns.values()):
lnps[i] = pconn.recv()
result = np.sum(lnps) # Can't put prior here. Don't know p!
print("queried:", result)
return result
```
Attempt 2: Put the prior in parallel.py
This is what I'm doing right now and it seems to work.
```python
def lnprob_Theta(self, p):
'''
Update the model to the Theta parameters and then evaluate the lnprob.
Intended to be called from the master process via the command "LNPROB".
NOTE that setting the prior this way means:
The prior will only be effective when called from `update_Theta`
This could cause some unanticipated behavior...
'''
try:
self.update_Theta(p)
lnlike = self.evaluate() # Also sets self.lnprob to new value
lnp = lnlike + self.lnprior_fn(p) # Here is the prior!!
self.lnprob = lnp
return lnp
except C.ModelError:
self.logger.debug("ModelError in stellar parameters, sending back -np.inf {}".format(p))
return -np.inf
```
This seems to work fine, but the problem is that lnlike is defined in 3 or 4 different places. (double check this)
OptimizeCheb
SampleThetaCheb
SampleThetaPhi
SampleThetaPhiLines
So the fix above is only affecting one of those. The better solution would be to put the prior directly in evaluate, which is shared among the above 4. But I'd have to use self.p, which I'm not certain is actually defined. It's also hard to debug the parallel.py, especially since it is wrapped in an argparser.
Anyways, here is the lnprior_fn, which is very stupidly hardcoded at the moment.
```python
def lnprior_fn(self, p):
'''
Return the lnprior for input stellar parameters.
Intended to be called from lnprob_Theta
'''
#For now just hardcode the location and scale parameters.
# log-g
loc = 3.7
scl = 0.02
lnprior_logg = norm.logpdf(p.grid[1], loc=loc, scale=scl)
#Everything else will have a flat prior over the grid.
lnprior_allelse = 0.0
lnprior_out = lnprior_logg + lnprior_allelse
return lnprior_out
```
The leads me to the other big question:
How to actually get the prior parameters into the right place?
The config.yaml probably, but the details need to be worked out.
What does this prior look like?
End of explanation
"""
from scipy import stats
x = np.linspace(3.5, 4.0, 100)
loc = 3.7
scl = 0.02
y = stats.norm.pdf(x, loc=loc, scale=scl)
yalt = stats.norm.logpdf(x, loc=loc, scale=scl)
np.trapz(y, x)
"""
Explanation: We want a continuous prior:
End of explanation
"""
plt.plot(x, np.log(y))
plt.xlabel('$\log{g}$')
plt.ylabel('$\ln{p}$')
plt.ylim(ymin=-20)
def lnprior_fn(self, p):
#For now just hardcode the location and scale parameters.
# log-g
loc = 3.7
scl = 0.1
lnprior_logg = stats.norm.logpdf(p.grid[1], loc=loc, scale=scl)
#Everything else will have a flat prior over the grid.
lnprior_allelse = 0.0
lnprior_out = lnprior_logg + lnprior_allelse
return lnprior_out
"""
Explanation: The normalization doesn't matter, but it's nice to know that it's close to normalized.
End of explanation
"""
import h5py
!cp /Users/gully/GitHub/welter/sf/m086/output/LkCa4_sm086/run03/mc.hdf5 .
f = h5py.File('mc.hdf5', mode='r')
list(f.keys())
d = f['samples']
list(d.attrs)
d.attrs['acceptance']
"""
Explanation: What do the chains look like when you use a prior?
End of explanation
"""
f.close()
"""
Explanation: Too small! Here's why: Our starting guess for log-g was 3.6. But the prior was a narrow Gaussian at 3.7.
But it still should have done better than 0.002. Whatever.
I am pretty sure we should include the prior directly in the evaluate step.
Otherwise initial conditions could cause strange lnprob values. (It's hard to tell which call to lnprob is initialized first-- with or without lnprior).
End of explanation
"""
|
VUInformationRetrieval/IR2015_2016 | 04_analysis.ipynb | gpl-2.0 | import pickle, bz2
from collections import *
import numpy as np
import matplotlib.pyplot as plt
# show plots inline within the notebook
%matplotlib inline
# set plots' resolution
plt.rcParams['savefig.dpi'] = 100
from IPython.display import display, HTML
Ids_file = 'data/air__Ids.pkl.bz2'
Summaries_file = 'data/air__Summaries.pkl.bz2'
Citations_file = 'data/air__Citations.pkl.bz2'
Abstracts_file = 'data/air__Abstracts.pkl.bz2'
Ids = pickle.load( bz2.BZ2File( Ids_file, 'rb' ) )
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
Citations = pickle.load( bz2.BZ2File( Citations_file, 'rb' ) )
def display_summary( id, extra_text='' ):
"""
Function for printing a paper's summary through IPython's Rich Display System.
Trims long titles or author lists, and links to the paper's DOI (when available).
"""
s = Summaries[ id ]
title = ( s.title if s.title[-1]!='.' else s.title[:-1] )
title = title[:150].rstrip() + ('' if len(title)<=150 else '...')
if s.doi!='':
title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title)
authors = ', '.join( s.authors[:5] ) + ('' if len(s.authors)<=5 else ', ...')
lines = [
title,
authors,
str(s.year),
'<small>id: %d%s</small>' % (id, extra_text)
]
display( HTML( '<blockquote>%s</blockquote>' % '<br>'.join(lines) ) )
from math import log10
def tokenize(text):
return text.split(' ')
def preprocess(tokens):
result = []
for token in tokens:
result.append(token.lower())
return result
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
inverted_index = defaultdict(set)
for (id, abstract) in Abstracts.items():
for term in preprocess(tokenize(abstract)):
inverted_index[term].add(id)
tf_matrix = defaultdict(Counter)
for (id, abstract) in Abstracts.items():
tf_matrix[id] = Counter(preprocess(tokenize(abstract)))
def tf(t,d):
return float(tf_matrix[d][t])
def df(t):
return float(len(inverted_index[t]))
numdocs = float(len(Abstracts))
def num_documents():
return numdocs
# We don't need to keep this object in memory any longer:
Abstracts = {}
"""
Explanation: Link analysis
(Inspired by and borrowed heavily from: Collective Intelligence - Luís F. Simões. IR version and assignments by J.E. Hoeksema, 2014-11-12. Converted to Python 3 and minor changes by Tobias Kuhn, 2015-11-17.)
This notebook's purpose is to give examples of how to use graph algorithms to improve a search engine. We look at two graphs in particular: the co-authorship network and the citation network.
The citation network is similar to the link network of the web: Citations are like web links pointing to other documents. We can therefore apply the same network-based ranking methods.
Code from previous exercises
End of explanation
"""
papers_of_author = defaultdict(set)
for id,p in Summaries.items():
for a in p.authors:
papers_of_author[a].add( id )
papers_of_author['Vine AK']
for id in papers_of_author['Vine AK']:
display_summary(id)
"""
Explanation: Co-authorship network
Summaries maps paper ids to paper summaries. Let us now create some mappings organized by different criteria.
We'll start by building a mapping from authors to the set of ids of papers they authored.
We'll be using Python's sets again for that purpose.
End of explanation
"""
coauthors = defaultdict(set)
for p in Summaries.values():
for a in p.authors:
coauthors[a].update( p.authors )
# The code above results in each author being listed as having co-autored with himself/herself.
# We remove these self-references here:
for a,ca in coauthors.items():
ca.remove(a)
print(', '.join( coauthors['Vine AK'] ))
"""
Explanation: We now build a co-authorship network, a graph linking authors, to the set of co-authors they have published with.
End of explanation
"""
print('Number of nodes: %8d (node = author)' % len(coauthors))
print('Number of links: %8d (link = collaboration between the two linked authors on at least one paper)' \
% sum( len(cas) for cas in coauthors.values() ))
"""
Explanation: Now we can have a look at some basic statistics about our graph:
End of explanation
"""
plt.hist( x=[ len(ca) for ca in coauthors.values() ], bins=range(55), histtype='bar', align='left', normed=True )
plt.xlabel('number of collaborators')
plt.ylabel('fraction of scientists')
plt.xlim(0,50);
"""
Explanation: With this data in hand, we can plot the degree distribution by showing the number of collaborators a scientist has published with:
End of explanation
"""
papers_citing = Citations # no changes needed, this is what we are storing already in the Citations dataset
cited_by = defaultdict(list)
for ref, papers_citing_ref in papers_citing.items():
for id in papers_citing_ref:
cited_by[ id ].append( ref )
"""
Explanation: Citations network
We'll start by expanding the Citations dataset into two mappings:
papers_citing[id]: papers citing a given paper;
cited_by[id]: papers cited by a given paper (in other words, its list of references).
If we see the Citations dataset as a directed graph where papers are nodes, and citations links between then, then papers_citing gives you the list of a node's incoming links, whereas cited_by gives you the list of its outgoing links.
The dataset was assembled by querying for papers citing a given paper. As a result, the data mapped to in cited_by (its values) is necessarily limited to ids of papers that are part of the dataset.
End of explanation
"""
paper_id = 16820458
refs = { id : Summaries[id].title for id in cited_by[paper_id] }
print(len(refs), 'references identified for the paper with id', paper_id)
refs
"""
Explanation: Let us now look at an arbitrary paper, let's say PubMed ID 16820458 ("Changes in the spoilage-related microbiota of beef during refrigerated storage under different packaging conditions"). We can now use the cited_by mapping to retrieve what we know of its list of references.
As mentioned above, because the process generating the dataset asked for papers citing a given paper (and not papers a paper cites), the papers we get through cited_by are then necessarily all members of our datasets, and we can therefore find them in Summaries.
End of explanation
"""
{ id : Summaries.get(id,['??'])[0] for id in papers_citing[paper_id] }
"""
Explanation: If we lookup the same paper in papers_citing, we now see that some of the cited papers are themselves in our dataset, but others are not (denote here by '??'):
End of explanation
"""
paper_id2 = 17696886
refs2 = { id : Summaries[id].title for id in cited_by[paper_id2] }
print(len(refs2), 'references identified for the paper with id', paper_id2)
refs2
"""
Explanation: Paper 17696886, for example, is not in our dataset and we do not have any direct information about it, but its repeated occurrence in other papers' citation lists does allow us to reconstruct a good portion of its references. Below is the list of papers in our dataset cited by that paper:
End of explanation
"""
print('Number of core ids %d (100.00 %%)' % len(Ids))
with_cit = [ id for id in Ids if papers_citing[id]!=[] ]
print('Number of papers cited at least once: %d (%.2f %%)' % (len(with_cit), 100.*len(with_cit)/len(Ids)))
isolated = set( id for id in Ids if papers_citing[id]==[] and id not in cited_by )
print('Number of isolated nodes: %d (%.2f %%)\n\t' \
'(papers that are not cited by any others, nor do themselves cite any in the dataset)'% (
len(isolated), 100.*len(isolated)/len(Ids) ))
noCit_withRefs = [ id for id in Ids if papers_citing[id]==[] and id in cited_by ]
print('Number of dataset ids with no citations, but known references: %d (%.2f %%)' % (
len(noCit_withRefs), 100.*len(noCit_withRefs)/len(Ids)))
print('(percentages calculated with respect to just the core ids (members of `Ids`) -- exclude outsider ids)\n')
Ids_set = set( Ids )
citing_Ids = set( cited_by.keys() ) # == set( c for citing in papers_citing.values() for c in citing )
outsiders = citing_Ids - Ids_set # set difference: removes from `citing_Ids` all the ids that occur in `Ids_set`
nodes = (citing_Ids | Ids_set) - isolated # set union, followed by set difference
print('Number of (non-isolated) nodes in the graph: %d\n\t(papers with at least 1 known citation, or 1 known reference)' % len(nodes))
print(len( citing_Ids ), 'distinct ids are citing papers in our dataset.')
print('Of those, %d (%.2f %%) are ids from outside the dataset.\n' % ( len(outsiders), 100.*len(outsiders)/len(citing_Ids) ))
all_cits = [ c for citing in papers_citing.values() for c in citing ]
outsider_cits = [ c for citing in papers_citing.values() for c in citing if c in outsiders ]
print('Number of links (citations) in the graph:', len(all_cits))
print('A total of %d citations are logged in the dataset.' % len(all_cits))
print('Citations by ids from outside the dataset comprise %d (%.2f %%) of that total.\n' % (
len(outsider_cits),
100.*len(outsider_cits)/len(all_cits) ))
"""
Explanation: Now that we have a better understanding of the data we're dealing with, let us again obtain some basic statistics about our graph.
End of explanation
"""
nr_cits_per_paper = [ (id, len(cits)) for (id,cits) in papers_citing.items() ]
for (id, cits) in sorted( nr_cits_per_paper, key=lambda i:i[1], reverse=True )[:10]:
display_summary( id, ', nr. citations: %d' % cits )
"""
Explanation: Most cited papers
Let us now find which 10 papers are the most cited in our dataset.
End of explanation
"""
import networkx as nx
G = nx.DiGraph(cited_by)
"""
Explanation: Link Analysis for Search Engines
In order to use the citation network, we need to be able to perform some complex graph algorithms on it. To make our lives easier, we will load the data into the python package NetworkX, a package for the creation, manipulation, and study of the structure, dynamics, and function of complex networks, which provides a number of these graph algorithms (such as HITS and PageRank) out of the box.
You probably have to install the NetworkX package first.
End of explanation
"""
print(nx.info(G))
print(nx.is_directed(G))
print(nx.density(G))
"""
Explanation: We now have a NetworkX Directed Graph stored in G, where a node represents a paper, and an edge represents a citation. This means we can now apply the algorithms and functions of NetworkX to our graph:
End of explanation
"""
G.add_nodes_from(isolated)
print(nx.info(G))
print(nx.is_directed(G))
print(nx.density(G))
"""
Explanation: As this graph was generated from citations only, we need to add all isolated nodes (nodes that are not cited and do not cite other papers) as well:
End of explanation
"""
# Add your code here
"""
Explanation: Assignments
Your name: ...
Plot the in-degree distribution (the distribution of the number of incoming links) for the citation network. What can you tell about the shape of this distribution, and what does this tell us about the network?
End of explanation
"""
# Add your code here
# print PageRank for paper 7168798
# print PageRank for paper 21056779
"""
Explanation: [Write your answer text here]
Using the Link Analysis algorithms provided by NetworkX, calculate the PageRank score for each node in the citation network, and store them in a variable. Print out the PageRank values for the two example papers given below.
Hint: the pagerank_scipy implementation tends to be considerably faster than its regular pagerank counterpart (but you have to install the SciPy package for that).
End of explanation
"""
# Add your code here
"""
Explanation: Copy your search engine from mini-assignment 3, and create a version that incorporates a paper's PageRank score in its final score, in addition to tf-idf. Show the result of an example query, and explain your decision on how to combine the two scores (PageRank and tf-idf).
End of explanation
"""
|
lalonica/PhD | vehicles/VehiclesTimeCycles.ipynb | gpl-3.0 | %matplotlib inline
from pandas import Series, DataFrame
import pandas as pd
from itertools import *
import numpy as np
import csv
import math
import matplotlib.pyplot as plt
from matplotlib import pylab
from scipy.signal import hilbert, chirp
import scipy
import networkx as nx
"""
Explanation: Loading the necessary libraries
End of explanation
"""
c_dataset = ['vID','fID', 'tF', 'Time', 'lX', 'lY', 'gX', 'gY', 'vLen', 'vWid', 'vType','vVel', 'vAcc', 'vLane', 'vPrec', 'vFoll', 'spac','headway' ]
dataset = pd.read_table('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\trajectories-0750am-0805am.txt', sep=r"\s+",
header=None, names=c_dataset)
dataset[:10]
"""
Explanation: Loading the dataset 0750-0805
Description of the dataset is at:
D:/zzzLola/PhD/DataSet/US101/US101_time_series/US-101-Main-Data/vehicle-trajectory-data/trajectory-data-dictionary.htm
End of explanation
"""
numV = dataset['vID'].unique()
len(numV)
numTS = dataset['Time'].unique()
len(numTS)
"""
Explanation: How many different vehicles appear in the 15 min?
How many timestamps are there? Are the timestamps of the vehicles matched?
To do: transform the distances, velocities and accelerations to meters, m/s and m/s².
Compute the all-to-all distances between vehicles.
Compute the time cycles.
End of explanation
"""
#Converting to meters
dataset['lX'] = dataset.lX * 0.3048
dataset['lY'] = dataset.lY * 0.3048
dataset['gX'] = dataset.gX * 0.3048
dataset['gY'] = dataset.gY * 0.3048
dataset['vLen'] = dataset.vLen * 0.3048
dataset['vWid'] = dataset.vWid * 0.3048
dataset['spac'] = dataset.spac * 0.3048
dataset['vVel'] = dataset.vVel * 0.3048
dataset['vAcc'] = dataset.vAcc * 0.3048
dataset[:10]
"""
Explanation: 15 min = 900 s = 9,000 timestamps (one every 0.1 s) //
9,529 timestamps = 952.9 s = 15 min 52.9 s
The actual temporal length of this dataset is 15 min 52.9 s. It looks like the timestamps of the vehicles are matched, which makes sense given the way the data is obtained: there is no GPS on the vehicles; the trajectories come from synchronized cameras located on different buildings.
End of explanation
"""
dataset['tF'].describe()
des_all = dataset.describe()
des_all
des_all.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\description_allDataset_160502.csv', sep='\t', encoding='utf-8')
dataset.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters_160502.txt', sep='\t', encoding='utf-8',index=False)
#table.groupby('YEARMONTH').CLIENTCODE.nunique()
v_num_lanes = dataset.groupby('vID').vLane.nunique()
v_num_lanes[v_num_lanes > 1].count()
v_num_lanes[v_num_lanes == 1].count()
#Drop some field are not necessary for the time being.
dataset = dataset.drop(['fID','tF','lX','lY','vLen','vWid', 'vType','vVel', 'vAcc',
'vLane', 'vPrec', 'vFoll','spac','headway'], axis=1)
dataset[:10]
def save_graph(graph,file_name):
#initialze Figure
plt.figure(num=None, figsize=(20, 20), dpi=80)
plt.axis('off')
fig = plt.figure(1)
pos = nx.random_layout(graph) #spring_layout(graph)
nx.draw_networkx_nodes(graph,pos)
nx.draw_networkx_edges(graph,pos)
nx.draw_networkx_labels(graph,pos)
#cut = 1.00
#xmax = cut * max(xx for xx, yy in pos.values())
#ymax = cut * max(yy for xx, yy in pos.values())
#plt.xlim(0, xmax)
#plt.ylim(0, ymax)
plt.savefig(file_name,bbox_inches="tight")
pylab.close()
del fig
times = dataset['Time'].unique()
#data = pd.DataFrame()
#data = data.fillna(0) # with 0s rather than NaNs
dTime = pd.DataFrame()
for time in times:
#print 'Time %i ' %time
dataTime0 = dataset.loc[dataset['Time'] == time]
list_vIDs = dataTime0.vID.tolist()
#print list_vIDs
dataTime = dataTime0.set_index("vID")
#index_dataTime = dataTime.index.values
#print dataTime
perm = list(permutations(list_vIDs,2))
#print perm
dist = [((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) +
(((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm]
dataDist = pd.DataFrame(dist , index=perm, columns = {'dist'})
#Create the fields vID and To
dataDist['FromTo'] = dataDist.index
dataDist['From'] = dataDist.FromTo.str[0]
dataDist['To'] = dataDist.FromTo.str[1]
#I multiply by 100 in order to scale the number
dataDist['weight'] = (1/dataDist.dist)*100
#Delete the intermediate FromTo field
dataDist = dataDist.drop('FromTo', 1)
graph = nx.from_pandas_dataframe(dataDist, 'From','To',['weight'])
save_graph(graph,'D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\graphs\\%i_my_graph.png' %time)
"""
Explanation: For every timestamp, check how many vehicles are accelerating while their preceding (or following) vehicle is also accelerating, or not (a sketch of one possible approach follows this cell):
- vehicle_acceleration vs preceding_vehicle_acceleration
- vehicle_acceleration vs following_vehicle_acceleration
When is a vehicle changing lanes?
End of explanation
"""
|
godfreyduke/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
token_dict = {
'.': '||period||',
',': '||comma||',
'"': '||quotationmark||',
';': '||semicolon||',
'!': '||exclamationmark||',
'?': '||questionmark||',
'(': '||leftparentheses||',
')': '||rightparentheses||',
'--': '||dash||',
'\n': '||return||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
input_data = tf.placeholder(tf.int32, shape=(None, None), name='input')
targets = tf.placeholder(tf.int32, shape=(None, None), name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input_data, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
    # TODO: Revisit the magic number 2 here to see about gathering it elsewhere with other hyperparameters
    num_layers = 2
    # Build a separate BasicLSTMCell per layer; reusing a single cell object across layers would share its weights
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), 'initial_state')
    return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# Not sure why we're not supplying the initial_state to the dynamic_rnn call (we generated it in get_init_cell)
# but that's not how the build_rnn method signature works... *shrug*
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
    embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.layers.dense(inputs=outputs, units=vocab_size) # Default activation=None for linear activation
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 1000
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 100
# Embedding Dimension Size
embed_dim = 100
# Sequence Length
seq_length = 100
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
return loaded_graph.get_tensor_by_name('input:0'),\
loaded_graph.get_tensor_by_name('initial_state:0'),\
loaded_graph.get_tensor_by_name('final_state:0'),\
loaded_graph.get_tensor_by_name('probs:0')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
    top_n = 5 # The number of top words to randomly choose from
probabilities[np.argsort(probabilities)[:-top_n]] = 0
probabilities = probabilities / np.sum(probabilities)
idx = np.random.choice(len(probabilities), 1, p=probabilities)[0]
return int_to_vocab[idx]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|